“A fox crossing a residential street. The fox has a human face. There are autumn leaves on the ground, terraced houses in the background, and a slight mist.”
It's just ignoring most of my prompt (as well as really struggling with what foxes look like). I've tried many iterations and variations; they're all like this.
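For anyone who wants to reproduce the attempt, here is a minimal sketch using Hugging Face's diffusers library; the model choice, sampler settings, and hardware are my assumptions, not the original poster's setup.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Assumes a CUDA GPU; downloads the SDXL base weights on first run.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "A fox crossing a residential street. The fox has a human face. "
    "There are autumn leaves on the ground, terraced houses in the "
    "background, and a slight mist."
)
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("fox.png")
```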
Well well well... Andreessen Horowitz admits, in no uncertain terms, that if they had to compensate artists for using their art to train their #aiart ripoff models, their investments wouldn't be worth it.
An absolutely damning admission which makes plain that these firms KNOW they are ripping artists off.
God bless #stablediffusion. Look at this potential rework of the Xbox interface. It's beautiful. Someone get this to Microsoft. @shanselman put this in front of your work friends. Someone get Steve Ballmer on the line. This is important. Some art designer please fine-tune this with some real icons and fonts and stuff. This could be your magnum opus.
Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. Content owners and creators have few tools to prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives cannot be identified with high confidence.
In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models trained on them without consent learn unpredictable behaviors that deviate from expected norms; e.g., a prompt that asks for an image of a cow flying in space might instead produce an image of a handbag floating in space.
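Nightshade's actual method is its own published work; purely as a toy illustration of the general idea (nudging an image's feature-space representation toward an unrelated concept while keeping the pixel change small), here is a sketch against a CLIP image encoder. The file name and perturbation budget are hypothetical, and this is emphatically not Nightshade's algorithm.

```python
# Toy feature-space poisoning sketch -- NOT Nightshade's algorithm.
# Optimizes a small perturbation so CLIP "sees" a handbag in a cow photo.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

pixels = proc(images=Image.open("cow.jpg").convert("RGB"),  # hypothetical file
              return_tensors="pt")["pixel_values"]

with torch.no_grad():
    txt = proc(text=["a photo of a handbag"], return_tensors="pt", padding=True)
    target = model.get_text_features(**txt)
    target = target / target.norm(dim=-1, keepdim=True)

delta = torch.zeros_like(pixels, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 0.05  # budget in the processor's normalized pixel space (assumed)

for _ in range(200):
    feat = model.get_image_features(pixel_values=pixels + delta)
    feat = feat / feat.norm(dim=-1, keepdim=True)
    loss = -(feat * target).sum()  # pull image features toward the target text
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the visible change small
```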
Which is why I have now spent three days obsessing about low lighting in #StableDiffusion (and being hyperanxious about a therapist visit that was cancelled at the last moment).
Why don't I ever get cleaning hyperfocus? VACUUM AND MOP ALL THE THINGS?
This excellent comic on the history of Luddism by Tom Humberstone https://thenib.com/im-a-luddite led me to the site of the folks developing 'Glaze', a tool artists are using to disrupt AI systems that scrape their art. Check it out here:
Anyone have any good tricks for getting AI image generation models like #dalle or #stablediffusion to produce animals or people with three eyes? I was hoping to get a lemur with a typical “mind’s eye” third eye, but all the models seem to ignore the third eye condition no matter how frequently I specify it.
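One workaround that sometimes helps when a model ignores an anatomical instruction is to generate the base image first and then inpaint the missing feature into a masked region. A sketch using diffusers' inpainting pipeline follows; the model choice and file names are my assumptions.

```python
# Inpainting workaround: render the lemur first, then paint a third eye
# into a masked forehead region. File names here are hypothetical.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("lemur.png").convert("RGB")          # base render, 512x512
mask = Image.open("forehead_mask.png").convert("RGB")  # white = repaint here

out = pipe(
    prompt="a detailed third eye in the middle of the forehead, mind's eye",
    image=init,
    mask_image=mask,
).images[0]
out.save("lemur_third_eye.png")
```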
"Tiny Dream is a header only, dependency free #StableDiffusion 2 implementation written in C++ from scratch with primary focus on CPU performance and memory footprint. Tiny Dream runs reasonably fast on the average consumer hardware, require only 5.5 GB of RAM to execute, does not enforce Nvidia GPUs presence, and is designed to be embedded on larger codebases (host programs) with an easy to use C++ API. The possibilities are literally endless (or at least extend to the boundaries of Stable Diffusion's latent manifold)."
Frog with Eyes (NOT) Closed
I tried to get SD-XL to generate an image of a frog with its eyes closed. It refused. I even cranked up the attention on "closed" to an absurd level, and it seemed to get sassy with me.
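For reference, "cranking up the attention" usually means emphasis syntax like (closed:1.8) in AUTOMATic1111's web UI; with diffusers you'd typically go through the third-party compel library instead. A sketch, assuming SDXL base weights and a CUDA GPU:

```python
# Prompt-weighting sketch with the compel library and SDXL.
# "(closed)1.8" multiplies the attention weight on the word "closed".
import torch
from diffusers import StableDiffusionXLPipeline
from compel import Compel, ReturnedEmbeddingsType

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

compel = Compel(
    tokenizer=[pipe.tokenizer, pipe.tokenizer_2],
    text_encoder=[pipe.text_encoder, pipe.text_encoder_2],
    returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
    requires_pooled=[False, True],
)

conditioning, pooled = compel("a photo of a frog with its eyes (closed)1.8")
image = pipe(prompt_embeds=conditioning, pooled_prompt_embeds=pooled).images[0]
image.save("frog_eyes_closed.png")
```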