Experimenting for the first time with self-hosted AI-generated images this evening on my Linux laptop to see what it can do: default #InvokeAI install using #StableDiffusion 1.5. Simple to set up if you're familiar with #Python venv and Pip.
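For anyone curious, the setup is roughly this; a sketch assuming the `InvokeAI` PyPI package name and a `python3` with venv support on your distro (check the project's own docs for current instructions):

```shell
# Create and activate an isolated virtual environment (assumed directory name)
python3 -m venv invokeai-env
. invokeai-env/bin/activate

# Upgrade pip, then install InvokeAI from PyPI
pip install --upgrade pip
pip install InvokeAI
```

After that, the web UI launches from the same venv and walks you through downloading a model such as Stable Diffusion 1.5.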
Using just the CPU and giving it the prompt "bluebird", this is what it produced in 17 minutes. Must admit that I'm pretty impressed.
#StableDiffusion's image-to-image mode is more interesting to me than its text-to-image (which ignores too much of your prompt). Here I used my Duckface painting as the input, and gave it the prompt “do what you like”. It's managed a coherent, if rather less funny, image. I interpret it as a rather dark take on drunkenness.
“A fox crossing a residential street. The fox has a human face. There are autumn leaves on the ground, terraced houses in the background, and a slight mist.”
It's just ignoring most of my prompt (as well as really struggling with what foxes look like). I've tried many iterations and variations; they're all like this.