Been playing with #stablediffusion with #controlnet lately. It's surprisingly addictive... I've been leaving the seed at -1 and getting random new images in the same pose. Very entertaining
@msprout could you tell the difference? I made this in 60 seconds with #stablediffusion on an M2 Pro. It's impressive to be able to run that locally. I also saw Vicuna released a 3B-parameter model that would be suitable for more embedded devices if quantized. Thinking about using that instead for my bot, to have something non-proprietary without all the BS from "OpenAI".
I wonder how long it’ll take for fans of AI art to discover that it both has a specific aesthetic and that aesthetics eventually fall out of popular fashion?
He spends so much time Photoshopping the #StableDiffusion in-painting that the #GenerativeAI now represents only about 40% of the whole workflow. Good quality art still requires the artist to put in a lot of their own time and effort integrating the new tools into their overall vision.
In 1923, a cartoonist imagined that in 2023, an electric machine would generate ideas and draw them as cartoons automatically. With your Idea Dynamo linked up to your Cartoon Dynamo and an adequate supply of ink, this machine would create "hilarious" (?) cartoons like 'How to Torture Your Wife'. 🙄
Written, performed and illustrated by #AI. The other night I fell asleep watching Tubi and awoke to this streaming... Looks like it's been around since January. Has anyone got other examples of automated video production with this level of fidelity, for lack of a better description? Curious to see more.
Showdown. How do the most popular text2image models react to a complicated prompt: "stock photo of a small cat that is casting a shadow that is shaped like a lion, inner ferocity, strong"? Name of model in alt text. #AiArt #midjourney #StableDiffusion #Bing #deepfloydif