cigitalgem, 5 months ago
How to regulate ML...regulate the LLM and generative AI foundation models. Poisoned data can't be simply filtered out of behavior later. #MLsec
https://arstechnica.com/tech-policy/2023/12/child-sex-abuse-images-found-in-dataset-training-image-generators-report-says/
cigitalgem, 5 months ago (edited 5 months ago)
Oh look, this issue isn't new at all. Who would have guessed?? It's called poisoned data (and doing it intentionally is a well-studied attack). #ML #MLsec
9.20.2022 from vice https://www.vice.com/en/article/93ad75/isis-executions-and-non-consensual-porn-are-powering-ai-art
But for real ancient history, see BIML's risk analysis from 1.20.2020 https://berryvilleiml.com/results/ara.pdf
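The data-poisoning attack these posts refer to can be illustrated with a toy sketch (not from any of the linked sources; all data and the nearest-centroid classifier here are hypothetical, chosen only to show the mechanism). An attacker who injects mislabeled points into the training set shifts what the model learns, and the bad behavior then lives in the trained parameters, not in the data:

```python
# Minimal sketch (illustrative only): a label-flipping / injection poisoning
# attack against a toy 1-D nearest-centroid classifier.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(data):
    """data: dict label -> list of feature values; returns per-class centroids."""
    return {label: centroid(xs) for label, xs in data.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda label: abs(model[label] - x))

# Clean training set: class "a" clusters near 0, class "b" near 10.
clean = {"a": [0.0, 1.0, 2.0], "b": [8.0, 9.0, 10.0]}

# Poisoned set: the attacker injects points labeled "a" that sit deep in
# class "b" territory, dragging the learned "a" centroid toward "b".
poisoned = {"a": [0.0, 1.0, 2.0, 9.0, 9.0, 9.0], "b": [8.0, 9.0, 10.0]}

print(predict(train(clean), 6.5))     # "b" -- correct on the clean model
print(predict(train(poisoned), 6.5))  # "a" -- the poison flipped the decision
```

This is the point of "poisoned data can't be simply filtered out of behavior later": once training has run, the damage is baked into the model's parameters (here, the shifted centroid). Filtering the model's outputs doesn't undo it; you have to clean the data and retrain.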