jo, to trustandsafety
@jo@wetdry.world avatar

#X's implementation of its latest decision, to include the term "c*sgender" in its (undoubtedly rancid) collection of swears, curses, and other nasty words, is IMO warranted, as we shouldn't allow the gradual ousting of normal people from an ever more hostile internet.

The main pillars of a successful internet should be its moderation tools. While a manually maintained list is sufficient for most use cases, with the growth and ever-increasing demands of the internet, we should also look to include hateful terms identified by automated tools, such as sentence sentiment analysis, methinks.
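A minimal sketch of that two-layer idea (manual list first, automated scoring second), in Python; the blocklist contents and the toxicity_score() model are placeholders, not anything X actually uses:

    BLOCKLIST = {"examplebadword", "anotherbadword"}   # manually maintained list

    def toxicity_score(text: str) -> float:
        # Stand-in for an automated classifier (sentiment/toxicity model),
        # returning 0.0 (benign) to 1.0 (hateful). Replace with a real model.
        return 0.0

    def should_flag(post: str, threshold: float = 0.8) -> bool:
        words = {w.strip(".,!?:;").lower() for w in post.split()}
        if words & BLOCKLIST:                      # layer 1: exact blocklist hit
            return True
        return toxicity_score(post) >= threshold   # layer 2: automated scoring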

ctrl,

@jo wait wtf jo, what's happening to you

  • posted by otter
puppygirlhornypost,
@puppygirlhornypost@transfem.social avatar

@ctrl @jo jo is becoming an entrepreneur

ilumium, to trustandsafety
@ilumium@eupolicy.social avatar

The @EU_Commission continues its enforcement fireworks 🎆

"Commission request information from #X on decreasing content resources under the "

https://ec.europa.eu/commission/presscorner/detail/en/mex_24_2522

btaroli, to mute
@btaroli@federate.social avatar

I have to admit I generally enjoy Slack. But in any community there are going to be assholes. One thing I dislike about Slack is the inability to mute or block people. Yes, some of them utterly deserve it, and I don’t think I should have to tolerate their vitriol in the normal course of my day.

So instead I must manually stop myself from reading their words and absorbing their attitude. Because if I speak out about it, then I’m the jerk.

So exhausting. Be better, Slack.

onlytina, to meta German
@onlytina@todon.eu avatar

Interesting that an institution is only now figuring this out. Some people here have known it for years.
I don't know, this is nothing new. Meta has always been a hellhole; anyone who hangs around there is simply living under a rock when it comes to content moderation, reporting options, and sanctions against accounts/postings.
When Frances Haugen testified before the EU Parliament, they could already have learned everything there. No separate investigation should have been needed on top of that. Meta couldn't care less about its users' well-being; all they care about is profit. 🤷‍♀️

https://netzpolitik.org/2024/facebook-und-instagram-eu-kommission-untersucht-desinformation-und-drosselung-politischer-inhalte/

https://chaos.social/

ajussak, to mastodon French
@ajussak@piaille.fr avatar

I just discovered the "Public Calme" (quiet public) principle on Mastodon. I don't know what criteria Mastodon's notion of calm is based on, but why not 🤔

Michkineflo,
@Michkineflo@piaille.fr avatar

@ajussak No more than 2 exclamation marks per toot 😁

ajussak,
@ajussak@piaille.fr avatar

@Michkineflo A bit limited as a filter 😅

schizanon, to mastodon
@schizanon@mastodon.social avatar

If you block or mute someone on Mastodon and then they migrate to a new instance, does your block or mute migrate too?

NatureMC, (edited) to KindActions
@NatureMC@mastodon.online avatar

If you get a follow request without your account being locked, the moderators of your instance have paid attention in advance and are warning you about a suspicious account. You can still allow it as a follower, but the one today was indeed extremely suspicious and rubbish.

Friendly reminder: To thank your admins and instance for all their work, you can donate! Find the possibilities on your instance's about page. The Fediverse is free, but good moderation and instance-building cost time/money. And - tada:

NatureMC,
@NatureMC@mastodon.online avatar
Wander, to trustandsafety
@Wander@packmates.org avatar

Moderation approaches:
How to deal with accusations and rumors based on outside drama?

This is a tricky one.
Suppose a user is accused of something that is generally frowned upon or worse. However, this has supposedly happened irl, not in your instance, and also the user has a good track record of being active and generating content that doesn't violate the rules at your instance or other instances. There is also no threat to your userbase (as opposed to, for example, in the case of a doxxer).

What is the best approach?

1- Suspend the user
2- Kick them out (allow them to migrate away, although this just pushes the problem back to the options below)
3- Let them stay, and let any disagreeing remote instances suspend the user on their side
4- Let them stay, and let any disagreeing remote instances suspend your whole instance

IMHO, the only sustainable option is option 3, unless it carries a realistic threat to other users. It's not feasible for small instances to play part-time detective, and unless the posted content itself is problematic or there are other reasons, a service mentality should prevail if we don't want to push users toward super-large instances that don't care.

Any concerned remote users or instances are of course free to block on their side.

A whole-instance block is not warranted, since the instance itself is not spammy or allowing any problematic content, but merely decides that it is not capable of making a fair assessment of off-platform rumors.

Am I off track?

ifixcoinops,
@ifixcoinops@retro.social avatar

@Wander If I didn't have any evidence, this'd just end up with a note in the spreadsheet and an occasional narrow-eyed peering at the user in question to see if any alarm bells start ringing.

serge, to fediverse
@serge@babka.social avatar

Has anyone considered a Fediverse moderator exchange program?

Basically you'd have a moderator come onto your team and see what your moderation challenges are, how you handle them, etc.

This would be especially useful to marginalized communities which have poor representation on the Fediverse, such as Black instances, Jewish instances, and so on.

Would this be interesting to anyone else?

crossgolf_rebel,

@ueckueck
have you ever thought about something like this?
@serge

ueckueck,
@ueckueck@dresden.network avatar

@crossgolf_rebel
We only ever thought about a handout or a small workshop for potential new members of our own team, and then the number of active users dropped sharply again and settled at a level the two of us can handle so comfortably that the topic fell off the table. But it really would still be worth doing. Hmmmm
@serge

utopiarte, to trustandsafety
schizanon, to twitter
@schizanon@mastodon.social avatar

A Brazilian Supreme Court judge has opened an inquiry into Elon Musk after the multi-billionaire said he would reactivate accounts on the social media platform X, formerly Twitter, that the judge had ordered to be blocked.

https://www.bbc.com/news/articles/cv2ygp5pdqlo

rimu, to trustandsafety
@rimu@mycrowd.ca avatar

I’m pretty happy with how the moderation tools for PieFed are coming along!

Moderators can:

  • delete & edit anything in the community
  • ban people from the community, and unban them.
  • review reports about content in that community
  • mark a report as resolved / ignored.

When a report is resolved or ignored, all reports regarding that content are also resolved. So if something receives 150 reports then mods won’t need to click 150 times to resolve all reports. Ignored reports stop all future reports from being accepted.
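A rough sketch of that bulk-resolution behaviour, in Python (not PieFed's actual code): resolving or ignoring one report closes every open report on the same content, and an ignored content id silently drops future reports.

    from dataclasses import dataclass, field

    @dataclass
    class ReportQueue:
        open_reports: dict = field(default_factory=dict)   # content_id -> list of reporters
        ignored_content: set = field(default_factory=set)

        def file_report(self, content_id, reporter) -> bool:
            if content_id in self.ignored_content:
                return False                                # ignored: future reports rejected
            self.open_reports.setdefault(content_id, []).append(reporter)
            return True

        def resolve(self, content_id, ignore=False) -> None:
            self.open_reports.pop(content_id, None)         # closes all 150 reports at once
            if ignore:
                self.ignored_content.add(content_id)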

The person who created the community can appoint other moderators.

Reports federate to and from Lemmy, so if a PieFed user reports some content that came from a Lemmy instance, the moderators on the Lemmy instance will be notified about the content being reported.
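For context, this is roughly what such a federated report looks like on the wire: an ActivityPub Flag activity. The domains and URLs below are made up, and the exact fields vary by software.

    flag_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Flag",
        "actor": "https://piefed.example/u/reporting_user",   # who filed the report
        "object": "https://lemmy.example/post/12345",         # the reported content
        "content": "Reason text entered by the reporter",
        "to": ["https://lemmy.example/"],                      # instance that gets notified
    }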

There’s still more to be done with federation of bans, a moderation log, etc. But it’s shaping up nicely!

https://piefed.social/post/80650


Nonilex, to meta
@Nonilex@masto.ai avatar

Meta expands AI labeling policies as the 2024 presidential election nears.
New ‘Made with AI’ labels will be applied to posts the company detects have been generated by AI. Starting May 2024, Meta will identify sounds, images & videos generated by AI on its platforms, amid widespread concern that a rise in “deepfake” posts will mislead voters.

https://www.washingtonpost.com/technology/2024/04/05/meta-ai-labels/

Nonilex,
@Nonilex@masto.ai avatar

Following the Board’s recommendation, Meta also agreed to no longer remove digitally created content if it doesn’t violate any other rules, but the company will attach a label saying that the content has been AI-generated. Starting next month, the company will start to apply “Made with AI” labels on content it determines is AI-generated or when people disclose they are uploading AI-generated content.

Nonilex,
@Nonilex@masto.ai avatar

In Feb, Meta unveiled plans to develop a system to identify AI-generated images that users create using services from other companies which have agreed to embed an AI identifier or a watermark.
Meta’s policy will probably be a welcome development for civil society groups & experts who have been warning that AI-generated disinformation is already proliferating online during a pivotal election year.
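The detection side of such a plan boils down to looking for a provenance marker embedded by the generating tool. A hedged illustration of the general idea in Python (not Meta's system; the marker strings are just common provenance signals, the IPTC "trainedAlgorithmicMedia" source type and a C2PA/Content Credentials manifest label):

    AI_MARKERS = (
        b"trainedAlgorithmicMedia",   # IPTC DigitalSourceType value for generative AI
        b"c2pa",                      # Content Credentials / C2PA manifest label
    )

    def looks_ai_generated(path: str) -> bool:
        # Naive byte scan of the file's embedded metadata for known markers.
        with open(path, "rb") as f:
            data = f.read()
        return any(marker in data for marker in AI_MARKERS)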

sublinks, to fediverse
@sublinks@utter.online avatar

https://discuss.online/post/6776820

The Sublinks team has written up a little survey, which we feel is both thorough and inclusive. It covers a wide range of topics, such as user privacy and community engagement, and tries to gauge which parts of moderating are the most difficult.

Raccoon, to mastodon
@Raccoon@techhub.social avatar

Weird bug in the interaction between Mastodon and Akkoma; thought mods, admins, and Mastodon/Akkoma people might want to see this.

Got a report about a user saying things that were upsetting people, noticed it was from another instance, and realized that I had access to the actual account that had sent it. Not sure what this is or why; I didn't even realize signed reports were supported (I'd honestly like to get more), but obviously that's a risky thing if they didn't know it was happening and didn't know why.

Does anyone know what this is?

Screenshot of reply: they're on Akkoma, don't know why it happened, and don't seem to have experience with how Mastodon handles multi-instance reporting.
Screenshot of my reply, explaining the situation. See post above for the gist.

devnull,
@devnull@crag.social avatar

@Raccoon why would you want to know who sent the report? I'm confused as to why this is an issue.

Knowing the sender would be important to heading off report spam, too...
