cigitalgem (@cigitalgem@sigmoid.social)

How to regulate ML: regulate the LLM and generative AI foundation models themselves. Poisoned data can't simply be filtered out of a model's behavior later.

https://arstechnica.com/tech-policy/2023/12/child-sex-abuse-images-found-in-dataset-training-image-generators-report-says/

cigitalgem (@cigitalgem@sigmoid.social) (edited)

Oh look, this issue isn't new at all. Who would have guessed?? It's called data poisoning (and doing it intentionally is a well-studied attack).

9.20.2022, from Vice: https://www.vice.com/en/article/93ad75/isis-executions-and-non-consensual-porn-are-powering-ai-art

But for truly ancient history, see BIML's risk analysis from 1.20.2020:
https://berryvilleiml.com/results/ara.pdf
