I have to admit I generally enjoy Slack. But in any community there are going to be assholes. One thing I dislike about Slack is the inability to #mute or #block people. Some of them utterly deserve it, and I don’t think I should have to tolerate their vitriol in the normal course of my day.
So instead I must manually stop myself from reading their words and absorbing their attitude. Because if I speak out about it, then I’m the jerk.
Interesting that an institution is only now figuring this out. Some people here have known it for years.
I don't know either, but this is nothing new. Meta has always been a hellhole; anyone who hangs around there is living under a rock when it comes to content moderation, reporting options, and sanctioning of accounts/postings.
When Frances Haugen testified before the EU Parliament, they could already have known everything there. No separate investigation was needed. Meta couldn't care less about its users' well-being; all they care about is profit. 🤷♀️
I just discovered the "Quiet Public" ("Public Calme") principle on Mastodon. I don't know what criteria Mastodon's notion of "quiet" rests on, but why not 🤔
If you get a #followRequest without your account being locked, the #admins of your #instance have been paying attention and are warning you about a suspicious account. You can still allow it as a follower, but the one today really was extremely suspicious rubbish.
Friendly reminder: To thank your admins and instance for all their work you can #donate! Find the possibilities on your instance's page. The #Fediverse is #adfree but good #moderation/building instances costs time/money. And - tada:
#MastoAdmin #Moderation approaches:
How to deal with accusations and rumors based on outside drama?
This is a tricky one.
Suppose a user is accused of something that is generally frowned upon or worse. However, this has supposedly happened irl, not in your instance, and also the user has a good track record of being active and generating content that doesn't violate the rules at your instance or other instances. There is also no threat to your userbase (as opposed to, for example, in the case of a doxxer).
What is the best approach?
1- Suspend the user
2- Kick them out (allow them to migrate away, although this just pushes the problem back to the options below)
3- Let them stay and any disagreeing remote instances suspend the user on their side
4- Let them stay and any disagreeing remote instances suspend your whole instance
IMHO, the only sustainable option is option 3, unless the situation carries a realistic threat to other users. It's not feasible for small instances to play part-time detective. Unless the posted content itself is problematic, or there are other compelling reasons, a service mentality should prevail if we don't want to push users toward super-large instances that don't care.
Any concerned remote users or instances are of course free to block on their side.
A whole-instance block is not warranted, since the instance itself is not spammy or allowing any problematic content, but merely concludes that it is not capable of making a fair assessment of off-platform rumors.
Has anyone considered a Fediverse moderator exchange program?
Basically you'd have a moderator come onto your team and see what your moderation challenges are, how you handle them, etc.
This would be especially useful to marginalized communities which have poor representation on the Fediverse, such as Black instances, Jewish instances, and so on.
A Brazilian Supreme Court judge has opened an inquiry into Elon Musk after the multi-billionaire said he would reactivate accounts on the social media platform X, formerly Twitter, that the judge had ordered to be blocked.
I’m pretty happy with how #moderation tools for #PieFed are coming along!
Moderators can:
delete & edit anything in community
ban people from community, and unban them.
review reports about content in that community
mark a report as resolved / ignored.
When a report is resolved or ignored, all reports regarding that content are also resolved. So if something receives 150 reports then mods won’t need to click 150 times to resolve all reports. Ignored reports stop all future reports from being accepted.
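The cascading resolution described above could be sketched roughly like this (a hypothetical data model for illustration, not PieFed's actual code; all class and function names are invented):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Content:
    id: str
    reports: List["Report"] = field(default_factory=list)
    ignore_future_reports: bool = False


@dataclass
class Report:
    content: Content
    status: str = "open"  # open / resolved / ignored


def file_report(content: Content) -> Optional[Report]:
    # Once a report on this content has been ignored,
    # all future reports are silently dropped.
    if content.ignore_future_reports:
        return None
    report = Report(content)
    content.reports.append(report)
    return report


def close_reports(content: Content, ignore: bool = False) -> None:
    # One moderator action closes every open report on the same
    # content, so 150 reports never require 150 clicks.
    for r in content.reports:
        if r.status == "open":
            r.status = "ignored" if ignore else "resolved"
    if ignore:
        content.ignore_future_reports = True
```

The key design choice is that reports are grouped by the content they target, so resolution is a single pass over that group rather than a per-report action.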
The person who created the community can appoint other moderators.
Reports federate to and from #Lemmy so if a PieFed user reports some content that came from a Lemmy instance the moderators on the Lemmy instance will be notified about the content being reported.
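Concretely, cross-instance reports like these travel as ActivityPub `Flag` activities. A minimal sketch of what a receiving instance might see, with every URL and actor invented for illustration:

```python
import json

# Hypothetical Flag activity as a Lemmy/PieFed instance might receive it.
# "Flag" is the standard ActivityStreams type for reporting content;
# the domains, actor, and object here are made up.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://piefed.example/u/reporter",
    "object": "https://lemmy.example/post/123",
    "summary": "Spam",
}

# Serialize as it would appear on the wire.
print(json.dumps(flag_activity, indent=2))
```

The receiving instance matches the `object` URL against its local content and notifies the moderators of the community that content belongs to.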
There’s still more to be done with federation of bans, a moderation log, etc. But it’s shaping up nicely!
The Sublinks team has written up a little survey, which we feel is both thorough and inclusive. It covers a wide range of topics, such as user privacy and community engagement, and tries to gauge which aspects of moderating are most difficult.
Got a #report about a user saying things that were upsetting people. Noticed it was from another instance, and realized I had access to the actual account that had sent it. Not sure what this is or why; I didn't even realize #ActivityPub supported signed reports (I'd honestly like to get more). But obviously that's a risky thing if the sender didn't know it was happening or why.
I come home to find my feeds/streams flooded with vitriolic political drivel emanating from a group to which I belong that states emphatically that it doesn't tolerate off-topic traffic.
The purpose of this Fediverse group on my home Friendica server is stated as follows:
This is a Friendica group dedicated to #Fediverse news. What are the advantages of a group over a hashtag? Groups can do things that hashtags can't. For example, groups:
are moderated
can re-share content
can speak as a group
Joining and contributing to a Friendica group is easy. To share your posts to @Fediverse News, follow these steps:
The @Fediverse News group will then re-share your post
This is an actively moderated group. Be sure to stay on topic, or your posts will be removed.
As per the instructions for this Friendica / Fediverse group, I'm notifying the moderation team by CC'ing the following address with this complaint and request to remove the vicious hate that's been spewing into the group here all day long while I've been away working:
People who sign up for a Fediverse News site should not be subjected to hatred being fomented, propagated, and bantered about with respect to unrelated matters, such as (abominable) off-topic political vitriol.
Posting announcements concerning the onboarding and subsequent federating nature of a public figure's account on threads.net is a relevant matter to the Fediverse, Fediverse Technology, and Fediverse News.
Acerbic commentary, name-calling, ad hominem, and libel, as has consumed the group today, is not. Those posts are a cause of severe harm and should most certainly, IMO, be removed as per the terms/rules quoted above.
The level of cacophony and pejorative hate speech permitted to continue throughout the day today is shameful. This is not the place to engage in or encourage such juvenile behavior, let alone permit it to fester and spread across the Fediverse as it has today!
Moderation Team: Thank you, in advance, for taking your time to address and resolve this matter, returning this group to the decorum it usually enjoys with people conversing and observing the principles of civil discourse.
We just expanded the section in our #SubduedSocial server rules to conform to the #IFTAS guidelines against targeted misgendering and deadnaming in the fediverse.
The accts signal a potential tactical shift in how #China aims to #influence American #politics, w/more of a willingness to target specific candidates & parties, including Biden.
In an echo of #Russia’s #influence campaign, #China appears to be trying to harness #partisan #divisions to undermine the #Biden admin’s policies, despite recent efforts by the 2 countries to lower the temperature in their relations.
Some of the #Chinese accounts impersonate fervent #Trump fans, including one [still allowed] on #X that purported to be “a father, husband & son” who was “#MAGA all the way!!”
The accts mocked #Biden’s age & shared fake images of him in a prison jumpsuit, or claimed that Biden was a Satanist pedophile QAnon [#propaganda] while promoting #Trump’s #MAGA slogan.
“I’ve never seen anything along those lines at all before,” said Elise Thomas, a senior analyst at the Institute for Strategic Dialogue, a nonprofit research org that uncovered some of the fake accts….
the reticence of some to block #threads is indeed disconcerting, but there is a solution
there are #moderation standards for responsible servers. if enough bigotry, hate, trolling etc emanates from a server, the problem is moved from blocking accounts, to blocking the server itself
we all know what is going to come from threads
and the inevitable outlined above will just play out
and those servers that choose not to block will find themselves eaten alive by the toxicity that will result
LGBTQ+ advocacy group GLAAD published a report which claims Meta's moderation system allows anti-trans content on Facebook, Instagram and Threads to "flourish."
Just noticed TikTok has a feature called “folded comments” that may appear at the very end of a video’s comment section. It hides crappy comments but does not delete or exclude them.
These aren’t necessarily flagged comments, just ones their algorithm thinks would offend you.
This is part of their optional (enabled by default) anti-bullying protection.
This one in particular was hiding anti-gay sentiments.
OC Banning spam accounts
Banning spam accounts on kbin.social is a cumbersome affair...