EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn.
A controversial push by European Union lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material could lead to millions of false positives per day, hundreds of security and privacy experts warned in an open letter Thursday.
Good that this is fi-nal-ly getting attention in the mainstream media. A shame, though, that the emphasis once again falls on the dividing line between privacy and security. What this is really about, first and foremost, is breaking a great deal of security in exchange for a small bit of security in return (unproven).
So let me get this right: Europol wants to do away with E2EE because it would prevent them from finding CSAM / CSE (among other things), yet Europol also doesn't have a CyberTipline API for organisations to report CSAM / CSE to law enforcement?
So you aren't receiving reports of CSAM / CSE efficiently from existing services, but you're arguing you need to violate everyone's privacy to "find CSAM / CSE".
This one is sooo good. I recommend it to anyone playing with #AI who wants to understand the biases and the complexities. Oh, and the discussion of alt text is amazing.
Inside LAION-5B, an AI training dataset of 5B+ images that has been unavailable for download since researchers found 3,000+ instances of #CSAM in it in December 2023.
#Chatkontrolle is not the only way the #EU wants to fight "child sexual abuse". With a new #Richtlinie (directive), the Commission aims to strengthen victims and keep pace with technological developments, including "artificial intelligence" and #Deepfakes. An overview.
The government in #Kanada wants to use the proposed "Online Harms Act" to curb, among other things, #CSAM and image-based abuse (#bildbasierteGewalt, the latter defined as "intimate content communicated without consent"). Platforms would have to identify and mitigate risks, strengthen reporting functions, and deliver transparency reports (sounds like the Digital Services Act, #DSA).
Centralised moderation teams often lack the context to know what they're looking at. Fediverse admins each take care of a small, well-defined bit of the overall moderation: the bit that affects accounts on their own server. They know what's acceptable in their community.
"Mastodon users probably aren’t aware of CSAM on the platform unless it leaks into their federated timelines. This can happen when a fellow user on their instance follows an account posting CSAM. Ways to handle this problem are few. Though users who follow CSAM-disseminating accounts can be suspended from an instance by administrators, they can easily set up a new account on another..."