thisismissem,
@thisismissem@hachyderm.io avatar

Via @jdp23, the Senator behind the STOP CSAM bill, which outlaws many forms of encryption and doesn't actually stop CSAM, read that WaPo article about CSAM on the fediverse and tweeted about it:

https://twitter.com/SenatorDurbin/status/1683562063270928384

olives,
@olives@qoto.org avatar

@thisismissem @jdp23 I wrote about this before. I can go further, if you like.

This "study" is absolute garbage.

For instance, it scans around half a million posts to find 100 "potential" hits, and those are on sites which don't use one particular detection tool.

He then acts as if this faux pas is the "end of the world", even though mainstream social media is known to be objectively worse than the fediverse in sheer number of cases.

He also uses Google's algorithms, which have been known to misclassify computer-generated images. Whether or not such images are to your liking, it is extremely misleading to suggest that they are the same thing as actual abuse material.

It is also not unlikely that some of these posts are spammy or automated, hitting a large number of hashtags.

Also, he cherry-picks one particular site (which has recently been under heavy fire from fediverse admins) when other similar sites, even with similar policies, aren't seen to be troublesome in the same way.

Also, some cherry-picked posts shown in screenshots are labelled as having been posted almost a year ago, and statistics are ever so conveniently missing on this.

Also, if he wanted to help admins with a pertinent issue, he could have reached out to them privately, rather than cherry-picking posts here and there to try to humiliate them.

Also, this very same person has previously made tweets in opposition to Facebook deploying end-to-end encryption in FB Messenger.

He also seems to want Facebook to essentially run the fediverse in the name of "saving the children", or to run every image through a Microsoft hosted service (a PRISM / NSA partner).

Problematically, some of these services are not even based in the U.S. Even if they were, services have First / Fourth Amendment rights, and the argument is about the quality of moderation and communication, not a lack of moderation.

It's not tenable to hold every service liable for a small amount of misuse, nor is it proportionate to do so, especially when someone's free expression is taken into consideration.

Also, a bad actor could just run their own dedicated service in violation of the law. If they're so determined to flout the law, they could well do so.

Also, these services are known to take actual child porn down, often within hours; he admitted as much. However, because it wasn't taken down "immediately", it becomes a "scandal".

thisismissem,
@thisismissem@hachyderm.io avatar

@olives @jdp23 We are talking about the same thing right? This report? https://purl.stanford.edu/vb515nd6874

112 posts of CSAM, and 554 posts that are potentially CSAM or child sex trafficking, is too much.

Even if 87% are from "alt fediverse" or "defediverse" instances, that still leaves 15 posts of CSAM and 72 posts of potential CSAM/child sexual abuse on the main fediverse that have gone unreported or unaddressed.

On the main fediverse, any number greater than 0 is unacceptable. We must do better

thisismissem,
@thisismissem@hachyderm.io avatar

@olives @jdp23 Using Microsoft PhotoDNA, Google's SafeSearch APIs, and Thorn's service for detecting CSAM is in fact an industry standard when it comes to trust and safety for user-generated content.

You might not like that they're US-based, or you might be unfamiliar with these tools, but we can surely work towards tools that work for the fediverse and within a privacy framework.

We don't yet have data on how quickly reports of CSAM or similar content are actioned. Ideally we prevent CSAM from being published up front.
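To make the "prevent it up front" idea a bit more concrete, here's a minimal sketch of what an upload-time check could look like, assuming a hash-matching service in the style of PhotoDNA or Thorn's Safer; all the names here (HashMatchClient, handle_upload, etc.) are hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch: checking uploaded media against a hash-matching
# service before a post is published. Names (HashMatchClient, ScanResult,
# handle_upload) are illustrative only, not any real vendor's API.

import hashlib
from dataclasses import dataclass


@dataclass
class ScanResult:
    is_known_csam: bool   # exact match against a known-hash database
    needs_review: bool    # fuzzier signal, needs a human moderator


class HashMatchClient:
    """Placeholder for a PhotoDNA/Safer-style hash-matching service."""

    def __init__(self, known_hashes: set[str]):
        self.known_hashes = known_hashes

    def scan(self, media_bytes: bytes) -> ScanResult:
        # Real services use perceptual hashes; a cryptographic hash is
        # used here only to keep the sketch self-contained.
        digest = hashlib.sha256(media_bytes).hexdigest()
        return ScanResult(
            is_known_csam=digest in self.known_hashes,
            needs_review=False,
        )


def handle_upload(media_bytes: bytes, client: HashMatchClient) -> str:
    """Decide what happens to an upload before it is published/federated."""
    result = client.scan(media_bytes)
    if result.is_known_csam:
        return "reject_and_report"    # block publication, file a report
    if result.needs_review:
        return "hold_for_moderation"  # queue for a human moderator
    return "publish"
```

The design point is that only known-hash matches get handled automatically; anything fuzzier should only ever be queued for human review.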

thisismissem,
@thisismissem@hachyderm.io avatar

@olives @jdp23 Also, at the end of the day, if you want to run a small instance, and you know your members are absolutely not going to post any content that's illegal (e.g., CSAM), then you don't have to use any of those tools to scan for potentially harmful content.

But, other admins may go "yeah, I'd rather play it safe", and then employ tools to assist them in moderation.

thisismissem,
@thisismissem@hachyderm.io avatar

@olives @jdp23 further, the report itself doesn't actually name and shame instances at all.

The information about "which" instances came later, because instance admins wanted to make sure they were blocking instances spreading CSAM, so as to avoid that content making its way onto their servers.

jdp23,

To me, several things are true simultaneously:

  • the report called attention to a problem that Mastodon collectively hasn't paid enough attention to, and had some useful suggestions for improving moderation tools

  • by eliding important details, including that the source of much of the CSAM material has been known for this since 2017 and is widely defederated, and that reject_media was developed in 2017 specifically to deal with this problematic instance (and does so effectively for sites that turn it on; a rough sketch of applying it appears after this list), it painted an inaccurate picture of the situation.

  • focusing only on the report's shortcomings shifts attention away from real problems, including that Mastodon installations by default don't block instances that are known sources of CSAM, that Mastodon gGmbH hasn't prioritized addressing this or improving moderation tools, and that the mobile apps and SpreadMastodon direct newcomers to a site where the moderators don't take action on clearly illegal content. Mastodon gGmbH has a track record of not prioritizing user safety, and it's a huge problem. Hopefully the reaction to this report leads to positive changes.

  • then again, the report doesn't take a "positive deviance" approach of looking at what works (tier0 blocklists, existing mechanisms like silencing and reject_media) and the possibilities for making a decentralized approach work. Instead the report concludes that centralization will be required, and suggests collaboration with Threads and others "to help bring the trust and safety benefits currently enjoyed by centralized platforms to the wider Fediverse ecosystem." But wait a second: trust and safety SUCKS for most people on Threads, so won't these supposed "benefits" lead to the same situation in the fediverse?
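For anyone unfamiliar with reject_media: as far as I know, recent Mastodon versions expose it both in the admin UI and (in 4.x) via the admin API. Here's a rough Python sketch of applying it programmatically; the endpoint and parameter names are from my reading of the admin API docs, and the instance URL, token, and domain are placeholders, so double-check against your own server's version:

```python
# Rough sketch of applying a domain block with reject_media via Mastodon's
# admin API (4.x). Endpoint and parameter names are from my reading of the
# docs; the instance URL, token, and domain below are placeholders.

import requests

INSTANCE = "https://mastodon.example"   # hypothetical instance
TOKEN = "REPLACE_WITH_ADMIN_TOKEN"      # token with admin write scope


def reject_media_from(domain: str) -> None:
    """Silence a domain and stop caching its media locally."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "domain": domain,
            "severity": "silence",   # "suspend" would fully defederate
            "reject_media": True,    # don't fetch or store their media
            "reject_reports": False,
            "private_comment": "Known source of illegal media",
        },
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    reject_media_from("bad.example")    # placeholder domain
```

As I understand it, silencing plus reject_media keeps that domain's posts out of public timelines and stops its media from being cached locally, without requiring a full suspension.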

@thisismissem @olives

thisismissem,
@thisismissem@hachyderm.io avatar

@jdp23 @olives I think I agree on that last point, in that centralisation is bad & not the correct approach; that's why we're trying to figure out how to make tech that previously existed only in centralised spaces available to the decentralized social web.

Early approaches will almost certainly be centralised to some degree, whilst we work with partners to evolve systems towards decentralisation.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdp23 @olives but I will say that the fediverse has a LOT to learn from centralised social media when it comes to trust & safety (even if we do some things differently already)

As it is, we're seeing new fediverse software being launched in a mainstream way without attention paid to even the most basic of moderation tools, and that's a huge problem.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdp23 @olives So far the fediverse's way of tackling CSAM & other horrendous content has been defederation, rather than prevention; we've usually been moderating in a reactive manner, rather than proactively.

olives,
@olives@qoto.org avatar

@jdp23 @thisismissem As far as I know, the real child porn issue is far more recent. It was initially defederated by instances for containing objectionable content.

What's frustrating to me (about that instance, not you) is that other similar instances moderate such content better in a bid to offer a space for free artistic expression.

At the end of the day, I don't think these kinds of activities are good for any instance. It drags everyone down.

Some degree of collaboration might help here. Even if someone wants to have a different standard from another, it would be nice to have a baseline (i.e. real child porn should have no place anywhere).

While I suspect David is cherry-picking some of these posts (given the dates), it would be nice to see some of these other iffy accounts go away too.

The spammy ones which hint at engaging in illicit activity. I'm not sure if you've had to deal with them, however, a few instances have had to.

If they're appearing elsewhere (i.e. on the main fediverse), then that is quite a disturbing development. These accounts are also present on mainstream social media.

thisismissem,
@thisismissem@hachyderm.io avatar

@olives @jdp23 yeah, probably the first step to dealing with them is to start collecting data & signals on content that may be harmful, & then either process that content for review or build better automated tools based on the learnings — the real thing highlighted in the report is that large instances are currently flying blind, which is dangerous for everyone
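To sketch what "collecting signals and routing to review" could look like in practice (all names, weights, and thresholds below are made up for illustration; this isn't from the report or any existing tool):

```python
# Illustrative sketch of a signal-based review queue: each detector
# contributes a score, and only aggregate scores above a threshold are
# sent to human moderators. All names and weights here are made up.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Post:
    id: str
    text: str
    hashtags: list[str]
    media_flagged: bool  # e.g. a fuzzy classifier flagged attached media


# Each signal returns a score in [0, 1]; higher means more suspicious.
Signal = Callable[[Post], float]


def hashtag_spam_signal(post: Post) -> float:
    # Spammy accounts often blast a large number of hashtags.
    return min(len(post.hashtags) / 20.0, 1.0)


def media_classifier_signal(post: Post) -> float:
    return 1.0 if post.media_flagged else 0.0


@dataclass
class ReviewQueue:
    signals: list[Signal]
    threshold: float = 0.7
    pending: list[tuple[str, float]] = field(default_factory=list)

    def ingest(self, post: Post) -> None:
        score = sum(s(post) for s in self.signals) / len(self.signals)
        if score >= self.threshold:
            # Never auto-remove on fuzzy signals; route to a human.
            self.pending.append((post.id, score))
```

Usage would just be calling queue.ingest(post) on each new status; the key design choice is that fuzzy signals only ever feed human review, while automated removal stays reserved for known-hash matches.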

jdp23,

@olives Thanks for the correction on the timing. It's long enough ago that I don't remember; I thought that back in 2017 there were reports of stuff that would be illegal in the US and/or Western Europe (as well as content that was acceptable by Japanese standards but objectionable elsewhere), but maybe it was just concerns. More recently I've heard that that site and another site owned by the same corporation are essentially unmoderated; not sure how long that's been the case. In any case, this is stuff that should have been in the report!

Agreed about the value of a baseline and collaboration, but sometimes the source of the problems isn't interested in collaborating. It's weird because it's a corporate-owned instance; you'd think they'd have an incentive to deal with it! More positively, the parent corporation has been taking steps to deal with CSAM on their other sites, so hopefully this report will convince them to pay attention to their federated sites as well.

@thisismissem Totally agreed that the fediverse has a lot to learn about trust and safety, including from centralized media. But also, centralized media's approach to trust and safety has discrimination baked in at multiple levels, relies on exploitative arrangements with outsourced content moderators, leads to results that favor authoritarian governments and ethnonationalists, and embodies a "surveillance is safety" approach. Admittedly at the scale they're dealing with it's hard to picture other approaches that wouldn't destroy their business model, but that doesn't make it a good template for the fediverse!

So it's certainly worth learning from the things they do well, and there's clearly at least some technology that can be adapted to apply in the decentralized space without bringing the other bad effects -- the report's recommendation for grayscale and blurring technology for moderators is a clear example of this.

In terms of earlier approaches being centralized to some degree ... I can certainly understand the argument for making incremental improvements on the way to the more difficult decentralized solution, but there are also arguments the other way.

Centralized systems tend to reinforce themselves, and there are several factors that make it likely to happen in this case as well:

  • Initial centralized solutions will be useful but imperfect, so resources will go to improving them (instead of creating decentralized solutions); and as the centralized solutions improve, the bar to get equivalently-good decentralized solutions becomes higher and higher.

  • whoever's got a stake in the initial centralized solutions has a strong incentive to favor continuing down that path instead of shifting to a decentralized solution.

  • Meta's arrival, and the approach they're taking of working with large instance admins and platform providers who also run large instances, is likely to magnify these dynamics. There are reports that they'll require agreements from instances that federate with them, which certainly makes sense -- they'd be remiss not to! But it would be very natural to put in specific requirements for using some of the initial centralized tools and services, which become additional roadblocks for future decentralized tools. And it would also be natural for them to provide some of these services themselves, and/or partner with OpenAI or whoever to provide services ... in either case, these services become another way for them to harvest data.

So it's complex, and it'll be interesting to see how things play out.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdp23 I'm not gunna respond to everything individually, but just wanna say that I'm directly in the room with the folks working on these matters and am vocally calling out things that are discriminatory, particularly towards LGBTQI+ folks and Sex Workers. As a possible implementor of said moderation tools, you can guarantee I'll fight as hard as possible for decentralized approaches, even once we've got something roughly working that's centralised.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdp23 At least there's a seat being made available at the table now, and we can hopefully learn from centralised platforms' mistakes (in our eyes) and from what they did really well.

olives,
@olives@qoto.org avatar

@thisismissem @jdp23 Ideally, the less of this content the better, although I tend to worry about the process (removing it at all costs).

That is one area where I haven't been very fond of the large platforms.

As for the statistics, both of them are counts of potential matches. I don't think he looks at the hits himself, for legal reasons, although for the first one I'd lean towards them probably being genuine.

The second one is for an algorithm of poorer repute (it's far newer and fuzzier).

Hope that helps clarify what I mean.

thisismissem,
@thisismissem@hachyderm.io avatar

@olives @jdp23 yeah, it's really that first one that critically needs action for now, as that's for known images of CSAM; the rest are all just potentially CSAM/child trafficking, which is important to look at but not automatically removable material.

Second would be more like a signal amongst other factors that leads to manual review & takedowns.

emma,

@thisismissem @jdp23 the dumb white cis christian fucks want everything locked down.

Melpomene,

@thisismissem @jdp23 it's so tiring to keep track of all of these blatantly anti-privacy bills. They've been told no year after year, but still they push this nonsense because they know that eventually, they'll probably squeak through.

thisismissem,
@thisismissem@hachyderm.io avatar

@Melpomene @jdp23 yeah, eventually they will; it's just like how state-based age verification laws are a test bed for federal pornography bans and criminal prosecutions of pornographers and platforms.

Melpomene,

@thisismissem @jdp23 yep. If someone is selling a bill as "for the children" or "because terrorists" then we should assume it is a bad bill unless strong evidence suggests otherwise.

det,

@thisismissem @jdp23 Ugh, that sucks. As we point out to policymakers every chance we get, none of our findings from any of our child safety projects would be fixed by breaking encryption. It's all from public data! From my colleague @riana:

https://cyberlaw.stanford.edu/blog/2023/04/stop-csam-act-anti-encryption-stalking-horse

thisismissem,
@thisismissem@hachyderm.io avatar

@det @jdp23 @riana yeah, but that's what we get with sensationalist headlines, which I think is why it's very important for us to be very clear on the narrative used by media on the research & ask for corrections if it's misleading
