br00t4c, to random
@br00t4c@mastodon.social

"CSAM generated by AI is still CSAM," DOJ says after rare arrest

https://arstechnica.com/?p=2025952

kubikpixel, to security German
@kubikpixel@chaos.social

EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn.

A controversial push by European Union lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material could lead to millions of false positives per day, hundreds of security and privacy experts warned in an open letter Thursday.

🙄 https://techcrunch.com/2024/05/02/eu-csam-scanning-council-proposal-flaws/


#security #csam #eu #europe #false #sexual #cybersecurity #wrong #FalseInformation
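
The "millions of false positives per day" figure is easy to sanity-check with back-of-envelope math. The message volume, media share, and error rate below are illustrative assumptions, not numbers from the open letter; a minimal sketch in Python:

# Back-of-envelope estimate of daily false positives from client-side scanning.
daily_messages = 10e9        # assumed EU-wide message volume per day
share_with_media = 0.10      # assumed fraction of messages carrying scannable media
false_positive_rate = 0.001  # assumed 0.1% error rate -- optimistic for a classifier

scanned = daily_messages * share_with_media
false_flags = scanned * false_positive_rate
print(f"{false_flags:,.0f} false flags per day")  # -> 1,000,000

Even with a detector that accurate, reviewers would face a million flagged messages every day, which is the experts' core point.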

glynmoody, to random
@glynmoody@mastodon.social

EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn - https://techcrunch.com/2024/05/02/eu-csam-scanning-council-proposal-flaws/ of course, as we have been saying for years...

ErikJonker, to Bulgaria
@ErikJonker@mastodon.social

An open letter setting out the position of scientists and researchers on the recently proposed changes to the EU's Child Sexual Abuse Regulation. As of 1 May 2024, the letter has been signed by 254 scientists and researchers from 33 countries (among them @xot ).
https://nce.mpi-sp.org/index.php/s/eqjiKaAw9yYQF87

helma, to CSS Dutch
@helma@mastodon.social

Good that this is fi-nal-ly getting attention in the mainstream media. A shame, though, that the emphasis is once again on the privacy-versus-security divide. What this is about, first and foremost, is breaking a great deal of security in return for a tiny bit of security (not proven).

@Schneier_rss

https://nos.nl/artikel/2518925-kinderporno-opsporen-met-detectiesoftware-op-mobiel-experts-tegen-eu-wetsvoorstel

helma, (edited ) to CSS Dutch
@helma@mastodon.social

Am I really the only one alarmed by the "we could but we won't" tone of this promise?
This is literally why some want our communications scanned. None of us have it, but they could train AI with it? How? 🤯

And where is the promise to report when they scrape-pirate it? 🤯

"They promise to ensure training datasets do not contain CSAM, to avoid datasets with a high risk of including CSAM, and to remove CSAM imagery or links to CSAM from data sources."

https://www.theverge.com/2024/4/23/24138356/ai-companies-csam-thorn-training-data

thisismissem, to trustandsafety
@thisismissem@hachyderm.io

Stanford Cyber Policy Center: What Works and What Doesn't About the System for Reporting Child Sex Abuse Material Online

https://www.youtube.com/watch?v=TQghoPmB7Ro

thisismissem, to random
@thisismissem@hachyderm.io

So let me get this right: Europol wants to not have E2EE because it would prevent them finding CSAM / CSE (among other things), but Europol also doesn't have a CyberTipline API for organisations to report CSAM / CSE to law enforcement?

So you aren't receiving reports of CSAM / CSE efficiently from existing services, yet you're arguing you need to violate everyone's privacy to "find CSAM / CSE"

Did I get that right?

ppatel, (edited ) to ai
@ppatel@mstdn.social

Models All The Way Down

This one is sooo good. I recommend it to anyone playing with AI who wants to understand the biases and the complexities. Oh and the discussion of alt text is amazing.

Inside LAION-5B, an AI training dataset of 5B+ images that has been unavailable for download since researchers found 3,000+ instances of CSAM in it in December 2023.

https://knowingmachines.org/models-all-the-way

ErikJonker, (edited ) to ai Dutch
@ErikJonker@mastodon.social

Just wow... an amazing website/visualization about LAION-5B, a large dataset that a lot of generative AI models are trained on.
https://knowingmachines.org/models-all-the-way

sebmeineck, to Bulgaria German
@sebmeineck@mastodon.social

Not only via chat control does the EU want to combat the "sexual abuse of children". With a new directive, the Commission wants to strengthen those affected and keep pace with technological developments, including "artificial intelligence" and deepfakes. An overview.

https://netzpolitik.org/2024/reaktion-auf-deepfakes-das-will-die-neue-eu-richtlinie-gegen-sexuellen-missbrauch/

thisismissem, (edited ) to fediverse
@thisismissem@hachyderm.io

If you're concerned about CSAM in the Fediverse, then you'll want to read these notes: https://fediforum.org/2024-03/session/3-c/

Also, @iftas are working on a service that detects & alerts when CSAM is uploaded to your server by your users. We're hard at work on this!
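
Detection services of this kind typically match uploads against hash lists of known material supplied by vetted clearinghouses, rather than classifying images themselves. A minimal sketch of the matching step, with a placeholder hash list; real deployments use perceptual hashes such as PhotoDNA or PDQ so re-encoded copies still match, and IFTAS's actual design may differ:

import hashlib
from pathlib import Path

# Placeholder hash list; a real one comes from a vetted clearinghouse and
# holds perceptual hashes, not exact digests like this dummy entry.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_upload(path: Path) -> bool:
    """Return True if an uploaded file's digest appears on the hash list."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES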

glynmoody, to PornHub
@glynmoody@mastodon.social

A Chatbot Stopped Millions From Searching for Child Abuse Videos - https://www.wired.com/story/pornhub-chatbot-csam-help/ interesting approach - could also be used to warn young people when they share material...

sebmeineck, to random German
@sebmeineck@mastodon.social

The government in Canada wants to use the proposed "Online Harms Act" to curb, among other things, CSAM and non-consensual intimate images (the latter defined as "intimate content communicated without consent"). Platforms would have to identify and mitigate risks, strengthen reporting functions, and deliver transparency reports (this sounds like the Digital Services Act).

https://www.canada.ca/en/canadian-heritage/news/2024/02/backgrounder--government-of-canada-introduces-legislation-to-combat-harmful-content-online-including-the-sexual-exploitation-of-children.html

itnewsbot, to medical
@itnewsbot@schleuss.online

Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders - https://arstechnica.com/?p=2005359

xyhhx, to Matrix
@xyhhx@fosstodon.org

you know, i really want to like matrix. i really do. in fact, it's the only group-oriented IM i use. but its lack of decent mod tools is pathetic.

communities are being harassed with CSAM and they can't do anything about it. i'm actually ashamed that i recommend it to people.

this isn't new, and they've lost huge communities to it. i'm actually fucking pissed. this should be number one priority, no excuses.

@matrix @matthew @element

do something

itnewsbot, to telegram
@itnewsbot@schleuss.online

Backdoors that let cops decrypt messages violate human rights, EU court says - https://arstechnica.com/?p=2003350 #europeancourtofhumanrights #europeancourtofjustice #endtoendencryption #europeancommission #privatemessaging #freedomofspeech #europeanunion #onlineprivacy #end-to-end #terrorism #telegram #policy #russia #email #csam #e2ee

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online

Cops bogged down by flood of fake AI child sex images, report says - https://arstechnica.com/?p=2000368

itnewsbot, to Instagram
@itnewsbot@schleuss.online

Zuckerberg says sorry for Meta harming kids—but rejects payments to families - https://arstechnica.com/?p=2000247

strypey, (edited ) to trustandsafety

"While large platforms with robust trust & safety teams are able to be more discerning in their moderation..."

Stanford Internet Observatory, 2023

https://cyber.fsi.stanford.edu/io/news/common-abuses-mastodon-primer

Are they though?

Centralised moderation teams often lack the context to know what they're looking at. Fediverse admins each take care of a small, well-defined bit of overall moderation: the bit that affects accounts on their server. They know what's acceptable in their community.

(1/3)

strypey,

"Mastodon users probably aren’t aware of CSAM on the platform unless it leaks into their federated timelines. This can happen when a fellow user on their instance follows an account posting CSAM. Ways to handle this problem are few. Though users who follow CSAM-disseminating accounts can be suspended from an instance by administrators, they can easily set up a new account on another..."

Stanford Internet Observatory, 2023

https://cyber.fsi.stanford.edu/io/news/common-abuses-mastodon-primer

(1/2)

br00t4c, to cryptocurrency
@br00t4c@mastodon.social

CSAM and Cryptocurrency: On-chain Analysis Suggests CSAM Vendors May Benefit from Privacy Coins like Monero and Other Obfuscation Measures

https://www.chainalysis.com/blog/csam-cryptocurrency-monero-instant-exchangers-2024/

admin, to mastodon
@admin@hear-me.social

Ugh. I had my first-ever notification, from Cloudflare, of 4 suspected CSAM (Child Sexual Abuse Material) media files.

I removed them from storage, without looking at them.

Does anyone know if it's possible to determine the origin of media files in Mastodon, without downloading the actual files from the cache?

I relay with a considerable number of servers and so it probably came in from one of them, but how would I know this?

@Gargron
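
One answer, assuming direct database access on a stock Mastodon install: the cache path of each flagged file encodes the media_attachments id (the three-digit groups in cache/media_attachments/files/110/258/416/... concatenate to the id), and joining that table to accounts names the origin account and server without re-downloading anything. A sketch under those assumptions, with a hypothetical attachment id; schema details may drift between Mastodon versions:

import psycopg2  # assumes direct access to Mastodon's Postgres database

# Hypothetical id, reconstructed from the digit groups of a flagged cache path.
attachment_id = 110258416000000000

conn = psycopg2.connect(dbname="mastodon_production")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT m.remote_url, a.username, a.domain
        FROM media_attachments AS m
        JOIN accounts AS a ON a.id = m.account_id
        WHERE m.id = %s
        """,
        (attachment_id,),
    )
    print(cur.fetchone())  # domain is NULL when the uploader is local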
