#X's implementation of its latest #moderation decision, adding the term "c*sgender" to its (undoubtedly rancid) collection of swears, curses, and other nasty words, is IMO warranted: we shouldn't allow normal people to be gradually driven out of an ever more hostile internet.
The main pillars of a successful internet should be its moderation tools. While a manually maintained list is sufficient for most use cases, with the growth of, and ever-increasing need for, an #accessible internet, we should also look at including hateful terms identified by automatic tools, such as sentence sentiment analysis, methinks.
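To make the idea concrete, here is a minimal sketch of how a curated blocklist could be combined with a naive sentiment score; the term lists, threshold, and function name are all invented for illustration, and a real system would use a trained model rather than word counting:

```python
# Toy moderation filter: curated blocklist plus a crude sentiment score.
# BLOCKLIST entries are placeholders, not real slurs.

BLOCKLIST = {"slur1", "slur2"}
NEGATIVE_WORDS = {"hate", "awful", "disgusting"}

def should_flag(post: str, threshold: int = 2) -> bool:
    words = post.lower().split()
    if any(w in BLOCKLIST for w in words):
        return True  # exact match against the manually maintained list
    # crude "sentiment": count negative words; a real tool would use a model
    score = sum(w in NEGATIVE_WORDS for w in words)
    return score >= threshold

print(should_flag("I hate this awful place"))  # True under these toy lists
```

The point of the two-stage design is that the hand-curated list catches known terms exactly, while the scoring stage can surface new hateful phrasing the list hasn't caught up with yet.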
I have to admit I generally enjoy Slack. But in any community there are going to be assholes. One thing I dislike about Slack is the inability to #mute or #block people. Yes, some of them utterly deserve it, and I don’t think I should have to tolerate their vitriol in the normal course of my day.
So instead I must manually stop myself from reading their words and absorbing their attitude, because if I speak out about it then I’m the jerk.
Interesting that an institution is only now figuring this out. Some people here have known it for years.
I don't know, but this isn't anything new. Meta has always been a hellhole; anyone who hangs around there is living under a rock when it comes to content moderation, reporting options, and the sanctioning of accounts/postings.
When Frances Haugen testified before the EU Parliament, they could already have learned everything there. There's no need for a separate investigation first. Meta doesn't care at all about its users' well-being; all they care about is profit. 🤷♀️
I just discovered the "Quiet public" ("Public Calme") principle on Mastodon. I don't know what criteria Mastodon's notion of "quiet" is based on, but why not 🤔
If you get a #followRequest without your account being locked, the #admins of your #instance have paid attention in advance and are warning you about a suspicious account. You can still allow them as a follower, but the one today was indeed extremely suspicious and rubbish.
Friendly reminder: To thank your admins and instance for all their work, you can #donate! Find the possibilities on your instance's page. The #Fediverse is #adfree, but good #moderation and well-built instances cost time/money. And - tada:
#MastoAdmin #Moderation approaches:
How to deal with accusations and rumors based on outside drama?
This is a tricky one.
Suppose a user is accused of something that is generally frowned upon or worse. However, this has supposedly happened irl, not in your instance, and also the user has a good track record of being active and generating content that doesn't violate the rules at your instance or other instances. There is also no threat to your userbase (as opposed to, for example, in the case of a doxxer).
What is the best approach?
1- Suspend the user
2- Kick them out (allow them to migrate away, although this just pushes the problem back to the options below)
3- Let them stay and any disagreeing remote instances suspend the user on their side
4- Let them stay and any disagreeing remote instances suspend your whole instance
IMHO, the only sustainable option is option 3, unless it carries a realistic threat to other users. It's not feasible for small instances to play part-time detective, and unless the posted content itself is problematic or there are other reasons, a service mentality should prevail if we don't want to push users toward super-large instances that don't care.
Any concerned remote users or instances are of course free to block on their side.
A whole-instance block is not warranted, since the instance itself is not spammy or allowing any problematic content; it merely acknowledges that it is not capable of making a fair assessment of off-platform rumors.
@Wander If I didn't have any evidence, this'd just end up with a note in the spreadsheet and an occasional narrow-eyed peering at the user in question to see if any alarm bells start ringing.
Has anyone considered a Fediverse moderator exchange program?
Basically you'd have a moderator come onto your team and see what your moderation challenges are, how you handle them, etc.
This would be especially useful to marginalized communities that are poorly represented on the Fediverse, such as Black instances, Jewish instances, and so on.
@crossgolf_rebel
We only got as far as thinking about a handout or a small workshop for potential new team members, and then the number of active members dropped sharply again and settled at a level that the two of us can handle so well that the topic fell off the table. But it really would be important. Hmmmm @serge
A Brazilian Supreme Court judge has opened an inquiry into Elon Musk after the multi-billionaire said he would reactivate accounts on the social media platform X, formerly Twitter, that the judge had ordered to be blocked.
I’m pretty happy with how #moderation tools for #PieFed are coming along!
Moderators can:
- delete & edit anything in the community
- ban people from the community, and unban them
- review reports about content in that community
- mark a report as resolved / ignored
When a report is resolved or ignored, all reports regarding that content are also resolved. So if something receives 150 reports then mods won’t need to click 150 times to resolve all reports. Ignored reports stop all future reports from being accepted.
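The bulk-resolution behaviour described above can be sketched as follows; the class and method names are my own invention, not PieFed's actual code:

```python
# Hedged sketch of the described report workflow: resolving or ignoring one
# report closes every report on the same content, and ignored content
# rejects all future reports.

class ReportQueue:
    def __init__(self):
        self.open = {}        # content_id -> list of reporter ids
        self.ignored = set()  # content whose future reports are dropped

    def file(self, content_id: str, reporter: str) -> bool:
        if content_id in self.ignored:
            return False      # ignored content accepts no new reports
        self.open.setdefault(content_id, []).append(reporter)
        return True

    def resolve(self, content_id: str, ignore: bool = False) -> None:
        # one click closes all open reports about this content
        self.open.pop(content_id, None)
        if ignore:
            self.ignored.add(content_id)

q = ReportQueue()
for i in range(150):
    q.file("post-1", f"user-{i}")
q.resolve("post-1")                # all 150 reports closed at once
q.resolve("post-2", ignore=True)
print(q.file("post-2", "user-0"))  # False: future reports are not accepted
```

Keying the queue by content rather than by individual report is what makes the single-click resolution cheap, regardless of how many people reported the same post.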
The person who created the community can appoint other moderators.
Reports federate to and from #Lemmy so if a PieFed user reports some content that came from a Lemmy instance the moderators on the Lemmy instance will be notified about the content being reported.
There’s still more to be done with federation of bans, a moderation log, etc. But it’s shaping up nicely!
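For readers curious what the federated reports mentioned above look like on the wire: ActivityPub models a report as a "Flag" activity. The shape below is illustrative only; the URLs are placeholders, and real implementations (Mastodon, Lemmy, PieFed) may include additional fields:

```python
# Illustrative shape of a federated report as an ActivityPub "Flag" activity.
# All identifiers below are placeholder values, not real accounts or posts.

flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://example.social/users/reporter",  # who filed the report
    "object": ["https://lemmy.example/post/123"],      # the reported content
    "content": "Rule 2: harassment",                   # reporter's note
}

# A receiving instance would route an incoming Flag to its moderation queue.
print(flag_activity["type"])  # Flag
```

Because both sides speak the same vocabulary, a report filed on a PieFed instance about Lemmy-hosted content can be delivered to the Lemmy instance's moderators without either project knowing the other's internals.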
Following the #Oversight Board’s rec, #Meta also agreed to no longer remove digitally created #media if it doesn’t violate any other rules, but the company will attach a label saying that the #content has been #altered. Starting next month, the company will start to apply “#MadeWithAI” labels on content it determines is #AI or when people disclose they are uploading #AIgenerated content.
In Feb, #Meta unveiled plans to develop a system to identify #AIgenerated #content that users create using services from other #tech companies which have agreed to embed an AI identifier or a watermark.
Meta’s #AI policy will probably be a welcome development to #civil society groups & experts who have been warning that AI-generated #misinformation is already proliferating online during a pivotal #election year.
The Sublinks team has written up a little survey, which we feel is both thorough and inclusive. It covers a wide range of topics, such as user privacy and community engagement, along with trying to gauge which parts of moderating people find difficult.
Got a #report about a user saying things that were upsetting people, noticed it was from another instance, and realized that I had access to the actual account that had sent it. Not sure what this is or why; I didn't even realize #ActivityPub supported signed reports (I'd honestly like to get more), but obviously that's a risky thing if they didn't know it was happening and didn't know why.