Well well well... Andreessen Horowitz admits, in no uncertain terms, that if they had to compensate artists for using their art to train their #aiart ripoff models, their investments wouldn't be worth it.
An absolutely damning admission which spells out that these firms KNOW they are ripping artists off.
UPDATE: Seems it already happened. Content from 2014-2023 has already been shared. So your opt-out will only be honoured going forward. How, if at all, already-shared content will be handled by the receiving third party after you opt out remains a mystery.
Tales from the jar side: JVM Weekly is great, AI tools for Java devs, Spring office hours, and the usual assortment of toots and skeets, by @kenkousen #Java #AI #GPT #midjourney #spring
Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. Content owners and creators have few tools to prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past and can be easily ignored with zero consequences; they are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives cannot be identified with high confidence.
In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models trained on them without consent learn unpredictable behaviors that deviate from expected norms; e.g., a prompt asking for an image of a cow flying in space might instead yield a handbag floating in space.
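For the technically curious: the core idea behind this kind of poisoning is a feature-space perturbation, i.e. nudging an image's pixels so a model's feature extractor reads it as a different concept, while the change stays nearly invisible to humans. Below is a minimal toy sketch of that idea in PyTorch, not Nightshade's actual published algorithm; the `encoder` argument, the `poison_image` name, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def poison_image(image, target_image, encoder, lr=0.01, steps=200, eps=0.05):
    """Toy sketch of feature-space poisoning (NOT the real Nightshade code).

    Perturbs `image` so that `encoder`'s embedding of it moves toward the
    embedding of `target_image` (the decoy concept, e.g. a handbag), while
    the pixel-space change is bounded by `eps` so the image still looks
    like the original (e.g. a cow) to a human viewer.

    `encoder` is assumed to be any differentiable image-to-embedding model.
    """
    # Embedding of the decoy concept we want the model to "see".
    target_emb = encoder(target_image).detach()

    # Learnable perturbation, starting from zero.
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        emb = encoder(image + delta)
        # Pull the perturbed image's features toward the decoy concept.
        loss = F.mse_loss(emb, target_emb)
        loss.backward()
        opt.step()
        # Keep the perturbation small so the change stays hard to notice.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (image + delta).clamp(0.0, 1.0).detach()
```

The poisoned image would then be published with its original caption ("cow"), so a model scraping enough such samples starts associating that caption with the decoy's features, which is how a "cow flying in space" prompt can come back as a floating handbag.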
Dear all, once again I need, or rather would like, your feedback, which you can leave here 👉 https://t.ly/hallo as a short voice message.