maelduin13, to mastodon

Question for Mastodon admins:

Is it possible to block / remove accounts based on external sources they link to?

For example, if somebody links to the Substack of a well-known Kremlin propagandist, is it possible to flag the account for action? I guess a form of white-/black-listing.

Obviously a policy of this nature would be tricky as somebody might be linking for reference and not for endorsement.

But hypothetically is it possible?

cc @trumpet
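
One way this could work is a simple domain-blocklist check over the links in each post. Below is a minimal sketch in Python, assuming a plain status record with "content" and "account" fields (illustrative logic only, not Mastodon's actual admin API or data model); it flags accounts for human review rather than removing them automatically, which sidesteps the reference-vs-endorsement problem mentioned above.

```python
# Hypothetical sketch: flag accounts whose posts link to blocklisted domains.
# The status shape ("content", "account") and the blocklist entry are
# assumptions for illustration, not Mastodon's real data model or admin API.
import re
from urllib.parse import urlparse

BLOCKLISTED_DOMAINS = {"propagandist.substack.com"}  # placeholder entry

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def flags_blocklisted_link(status: dict) -> bool:
    """Return True (flag for human review) if the post links to a blocklisted domain."""
    for url in URL_PATTERN.findall(status.get("content", "")):
        domain = urlparse(url).netloc.lower()
        if domain in BLOCKLISTED_DOMAINS:
            print(f"Flag for review: @{status['account']} linked to {domain}")
            return True
    return False
```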


andi1984, to fediverse

I donated to the @elk project and to @thisismissem for their work on fediverse tooling.

Thank you for all your work for and on the fediverse!

skyfaller, to trustandsafety
@skyfaller@jawns.club avatar

What resources exist for recruiting, adding, onboarding, and training moderators in online spaces, especially volunteer moderators? I also welcome resources specific to Mastodon moderation, but many lessons from moderating other online communities should translate.

Also, while practical guides to moderation itself are important, I would especially value any insight into supporting the human moderators. How can mods work together effectively, and avoid conflict & burnout?

343max, to random German

Welcome to self-inflicted irrelevance!

kkarhan,
@kkarhan@mstdn.social avatar

@jr @343max so, like all the instances that meet the criteria at https://fedi.garden/about-this-site/ ...

That / refuses to moderate is evident from their willingness to host [trans-]genocidal groups and hate speech...

aral, to fediverse
@aral@mastodon.ar.al avatar

He’s making a list,
He’s checking it twice,
He’s going to decide,
Who’s naughty or nice…
Zuckerberg is coming to town.

gersande, to trustandsafety
@gersande@silvan.cloud avatar

"At its shining moment, Twitter was like the Tower of Babel before it fell." From Israel vs. Hamas threats to Donald Trump’s “wild” posts, Del Harvey helped make the platform’s hardest content moderation calls for 13 years. Then she left in 2021 … and disappeared. https://www.wired.com/story/del-harvey-twitter-trust-and-safety-breaks-her-silence/

doctormo, to trustandsafety
@doctormo@floss.social avatar

This video might be of interest to moderators. It doesn't go as deep as some conversations I've seen here, but it's an interesting addition.

https://www.youtube.com/watch?v=2IrG68YTMjo

tasket, to fediverse

One thing that FIDONet did not have:

Automatic federation with any other entity, no questions asked. IMO, there should be minimum requirements to federate a multi-user instance, such as completion of a course and signing a public statement.
@jerry @Gargron

tasket,

@jerry

OK, why were my replies to @ErrataRob erased without any trace or sign of ... and, more significantly, while he was posturing as a free-speech-absolutist victim?

hankg, to fediverse

Question for current Relatica testers (or potential future users): for the client-side filtering, do you think it would be better if the app remembered whether you unhid a post/comment?

Screenshot: a post that a user toggled to be shown even though it was filtered.
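
A rough sketch of how "remembering" could work, assuming a small local JSON store and hypothetical function names (this is not Relatica's actual implementation): the app keeps the IDs of posts the user chose to unhide, so the choice survives restarts and refetches.

```python
# Illustrative sketch (not Relatica's code): persist which filtered posts
# the user manually unhid, so the choice survives app restarts.
# The file name and function names are hypothetical.
import json
from pathlib import Path

STATE_FILE = Path("unhidden_posts.json")

def load_unhidden() -> set[str]:
    """Read the set of post IDs the user has chosen to keep visible."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def remember_unhidden(post_id: str) -> None:
    """Record that the user unhid this post."""
    ids = load_unhidden()
    ids.add(post_id)
    STATE_FILE.write_text(json.dumps(sorted(ids)))

def should_hide(post_id: str, matches_filter: bool) -> bool:
    """Hide a filtered post unless the user previously unhid it."""
    return matches_filter and post_id not in load_unhidden()
```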

FedicaHQ, to news
@FedicaHQ@m.fedica.com avatar

Supreme Court to consider giving First Amendment protections to social media posts.

The Supreme Court has agreed to hear five cases in the term ending in 2024 that collectively give the court the opportunity to reexamine the nature of content moderation.

Thoughts?


https://newhampshirebulletin.com/2023/11/29/supreme-court-to-consider-giving-first-amendment-protections-to-social-media-posts/

RonaldTooTall, to internet

Supreme Court will decide on First Amendment protections for social media.

The court has the opportunity to reexamine the nature of content moderation and the constitutional limitations of the government regarding speech on platforms such as Facebook, X, TikTok, and Instagram.

https://chicago.suntimes.com/2023/11/28/23977943/first-amendment-free-speech-social-media-facebook-tiktok-instagram-x-supreme-court-the-conversation

strypey, to fediverse
@strypey@mastodon.nzoss.nz avatar

Mute Post

Clicking on it prevents a post from appearing in a thread when viewed on the web version of the server hosting my account. E.g.:

A reply to my post uses an offensive term in the heat of the moment. I don't want to feed the trolls by replying, but I also don't want people thinking I endorse that kind of hateful language. So I click Mute Post, and anyone who views my account on Mastodon.nzoss.nz won't see it.
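
A rough sketch of that behaviour, with assumed data shapes (a list of status dicts with an "id" field) rather than Mastodon's internals: the home server simply drops muted replies from a thread before rendering its web view.

```python
# Sketch of the behaviour described above, using assumed data shapes,
# not Mastodon internals.
MUTED_STATUS_IDS: set[str] = {"111222333444555666"}  # placeholder status ID

def visible_replies(thread: list[dict]) -> list[dict]:
    """Drop muted replies before rendering the thread on the home server's web view."""
    return [status for status in thread if status["id"] not in MUTED_STATUS_IDS]
```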

cragsand, to Twitch
@cragsand@mastodon.social avatar

Outsourcing moderation entirely to AI, while it may save some money, is not a good idea... full article below.

1500 votes! Thank you all for your support! 💜
The more of you who lend your voices, the more you show that this is a real issue that is worth correcting!

Looks like my article regarding this got some traction and votes!
Thanks to streamer @PaymoneyWubby for bringing attention to it.

https://twitch.uservoice.com/forums/951706-moderation/suggestions/47475527-global-ai-automod-does-not-understand-context

cragsand,

@cragsand
Discussion regarding the Twitch moderation AI spread to Reddit, where I clarified some questions that arose:

Since this global AI AutoMod remains an undocumented "feature" of Twitch chat from a while back, a lot of the conclusions I've listed in the thread are based on deduction: watching active chatters get suspended and tell their stories on Discord and social media.

Most can luckily get their accounts reinstated after appealing, but that relies on an actual human looking at the timestamp in the VOD, taking the time to figure out what actually happened, and getting the complete context of what was going on on stream when it occurred. I've seen many apologies from Twitch moderation sent by email after appeals, but whether you get unbanned, get an apology, or stay banned seems mostly random.

Being banned like this will also make it much less likely that you want to participate and joke around in chat in the future, leading to a much worse chatting experience.

Some discussions argue that all AI-flagged moderation events are actually reviewed by humans (but poorly), and that is a possibility. Because of the lack of transparency from Twitch about how this works, it's very difficult to know for sure how these reviews are done. A manual report in combination with an AI flag is almost certainly a ban. One thing is sure, though: too much power is given to AI to judge these cases.

Seeing as permanent suspensions of accounts that have had active paying subscriptions on Twitch for YEARS can be dished out in seconds, either the reviewers are doing a lousy job or it's mostly done by AI. Even worse, if the reviewers are underpaid workers who get paid by "number of cases solved per hour", there is little incentive for them to take the time to gather more context when reviewing.

If Twitch gets called out for doing this, they have little incentive to admit it, as it may even be in violation of consumer regulations in some countries. Getting a response that they "will oversee their internal protocol for reviewing" may be enough of a win if it results in them actually turning this off. Since there is no transparency, we can't really know for sure.

A similar thing happened on YouTube at the start of 2023, when they went through the speech-to-text transcripts of all old videos and issued strikes retroactively. It made a lot of old channels disappear, especially those with hours of VOD content where something could get picked up and flagged by AI. For the communities I'm engaged in, it meant relying less on YouTube for saving Twitch VODs. MoistCritical brought it up about a year ago, since it also affected monetization of old videos.

cragsand,
@cragsand@mastodon.social avatar

Here it is: "Spirit AI" using "proactive" technology.

I fear this has backfired into presuming guilt until proven innocent. It makes me think of Minority Report. The road to hell is paved with good intentions, and all that, I guess.

From a 2022 Twitch blog post:
https://safety.twitch.tv/s/article/An-Update-on-Our-Work-to-Stamp-out-Targeted-Hate?language=en_US

cragsand, (edited )
@cragsand@mastodon.social avatar

Thank you everyone who helped bring attention to this!
But it looks like it was shut down by Twitch.

They misinterpreted the whole thing, perhaps intentionally. AI is being used to flag/ban without considering context, and I hope that this is reconsidered.

"Twitch Global AI AutoMod does not understand context."

dada, to trustandsafety French
@dada@diaspodon.fr avatar

There we go, I've just banned another instance from diaspodon.fr.

Don't hesitate to visit your instance's /about page to check what it blocks and why.

ctrlxcvz, to random

I wanted to try the default mastodon.social instance to get the hang of it, and I know there are all kinds of specific-interest ones for art, music, tech, games, etc., but does anyone out there have some recommendations for other decent, all-purpose ones?

marqle,

@HistoPol @AlliFlowers @ctrlxcvz

There is a really easy way to navigate that issue. Use a popular instance as an alternative server.

This may change, but for now, Pleroma and its branches are far more relaxed about content. Most instances operate a for all

Just be ready for the and don't do it if you have a sensitive disposition

mastodonmigration, (edited ) to random
@mastodonmigration@mastodon.online avatar

Mastodon, we have a problem.

@Teri_Kanefield: I am beginning to think I waste my time and energy on social media... I came back to read my comments... and really, I feel ill.

@Popehat: ...find me on Bluesky... less like I'm likely to get screamed at...

@mmasnick: Same is mostly true for me.

@mattblaze: Getting close to joining you... The yelling here is getting to be a bit much.

[Edit to clarify NOT all are considering leaving]

strypey,
@strypey@mastodon.nzoss.nz avatar

It seems that to be effective, a Block relies entirely on the Block being respected by the remote server hosting the blocked account. If that server is self-hosted by the operator of that account, it's hardly going to respect Block requests.

Why can't the server hosting the account that's doing the blocking identify a request from the blocked account to see a post, and refuse that request?

@nicol @KevinMarks
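
A minimal sketch of the idea in the question, under assumed data structures (nothing here is from the ActivityPub spec or Mastodon's code): the blocker's own server checks the requesting actor against the block list before serving the post, rather than trusting the remote server to honour the Block.

```python
# Sketch of the idea in the question, with an assumed data model: the
# blocker's own server refuses to serve a post when the signed fetch
# comes from a blocked actor, instead of trusting the remote server.
BLOCKED_ACTORS: set[str] = {"https://selfhosted.example/users/troll"}  # hypothetical

def serve_status(status: dict, requesting_actor: str) -> tuple[int, dict | None]:
    """Return an (HTTP status, body) pair for a signed fetch of a post."""
    if requesting_actor in BLOCKED_ACTORS:
        return 403, None  # refuse the request outright
    return 200, status
```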

cragsand, (edited ) to Twitch
@cragsand@mastodon.social avatar

AI-based moderation is causing issues for streamers on Twitch.

When any single line of text can be taken to mean something horrific out of context, streamers are seeing their most active chatters getting permanently suspended without any human intervention.

I wrote an article to hopefully change this. Maybe it can at least lead to further discussion regarding the matter.

Outsourcing moderation to AI, while it may save money, is NOT a good idea.

Full article: https://twitch.uservoice.com/forums/951706-moderation/suggestions/47475527-global-ai-automod-does-not-understand-context

maiwann, to trustandsafety French
@maiwann@framapiaf.org avatar

OK, I've arrived for and tomorrow (Saturday) I'll be talking about:

  • at 11am, lessons learned from 4 years of moderating framapiaf
  • Coin-coin for Framasoft at 3pm with my friend @booteille!

https://framablog.org/2023/11/14/reprenons-du-terrain-sur-le-web-toxique-bilan-2023-de-framasoft/

See you tomorrow!

PS: And on Sunday at 11am the pals will be talking about !

thisismissem, to trustandsafety
@thisismissem@hachyderm.io avatar

For all those who'd been wondering about FSEP and where it's at, @nivenly have just published an update: FSEP is on hold, pending the return of the original maintainer, or until a new maintainer can be found:

https://nivenly.org/blog/2023/11/14/updates-for-october-and-november/#federation-safety-enhancement-project-fsep

schizanon, to twitter
@schizanon@mas.to avatar

Of all the things I lost when I lost Twitter, "Not interested in this post" is the one I miss the most.

Edent, to internet
@Edent@mastodon.social avatar

🆕 blog! “Why did Usenet fail?”

This is anecdata - not a serious academic study. Adjust your expectations accordingly. When I first got online, the World Wide Web was still in its infancy - so CompuServe was my gateway to the Internet. I loved their well organised chat room. A couple of clicks and I could be discussing Babylon 5 with […]

👀 Read more: https://shkspr.mobi/blog/2023/06/why-did-usenet-fail/

doboprobodyne,
@doboprobodyne@mathstodon.xyz avatar

@memory @Edent @davidgerard

That's my question. How would one exploit this learning, if starting up a fediverse server now, with minimal resources?

I feel like including ads to raise resources for moderation might be one thing... Huge drawbacks of course.

Any other ideas?
