The vast majority of AI art I see is terrible: bad-looking and deformed. But with the right instructions, it doesn't have to be.
For accurately recreating fictional characters, this is probably the best method I've seen so far. There's real work involved in training a model like this; it's not something where you can just throw in a bunch of prompts and expect good results.
I started by gathering 64 screenshots of my 3D VRChat model, rendered in Blender in various poses and angles, under different lighting, and in a selection of outfits. Then I added descriptive tags for each image in a matching text file.
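For reference, that image/caption pairing is the convention most LoRA trainers (kohya-ss sd-scripts, for example) expect: each image sits next to a .txt file with the same base name, containing comma-separated tags. A minimal sketch to sanity-check the pairing before training; the folder name is hypothetical:

    import pathlib

    dataset = pathlib.Path("dataset")  # hypothetical folder of images + captions
    image_exts = {".png", ".jpg", ".jpeg", ".webp"}

    for img in sorted(dataset.iterdir()):
        if img.suffix.lower() not in image_exts:
            continue
        caption = img.with_suffix(".txt")
        if not caption.exists():
            print(f"missing caption for {img.name}")
            continue
        tags = [t.strip() for t in caption.read_text().split(",") if t.strip()]
        print(f"{img.name}: {len(tags)} tags -> {', '.join(tags[:5])}")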
Based on the training data and the keywords I specified, you can swap in various clothing alternatives (see the sketch after the list), including:
armor
jacket
shirt
barechest
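The post doesn't say which trainer or UI was used; as an illustration, here's how swapping those keywords could look with Hugging Face diffusers, assuming the LoRA was exported as a safetensors file. The file path and the "mychar" trigger word are hypothetical:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("loras", weight_name="mychar_lora.safetensors")  # hypothetical file

    # Swap only the clothing keyword the LoRA was trained on; everything else stays fixed.
    for outfit in ["armor", "jacket", "shirt", "barechest"]:
        image = pipe(
            f"mychar, {outfit}, full body, studio lighting",  # "mychar" = hypothetical trigger word
            num_inference_steps=30,
        ).images[0]
        image.save(f"mychar_{outfit}.png")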
Training took about 30 minutes using an RTX 2080 Ti GPU.
I wonder how long it'll take for fans of AI art to discover both that it has a specific aesthetic and that aesthetics eventually fall out of popular fashion.
He spends so much time Photoshopping the #StableDiffusion in-painting that the #GenerativeAI output now accounts for only about 40% of the whole workflow. Good-quality art still requires the artist to put in a lot of their own time and effort integrating the new tools into their overall vision.
In 1923, a cartoonist imagined that in 2023, an electric machine would generate ideas and draw them as cartoons automatically. With your Idea Dynamo linked up to your Cartoon Dynamo and an adequate supply of ink, this machine would create "hilarious" (?) cartoons like 'How to Torture Your Wife'. 🙄
Written, performed, and illustrated by #AI. The other night I fell asleep watching Tubi and awoke to this streaming... It looks like it's been around since January. Has anyone got other examples of automated video production with this level of fidelity, for lack of a better description? Curious to see more.
I think I'm starting to learn how to use this. Instead of trying to prompt everything at once, I first prompted the background, then added a picture of a turtle and inpainted it to blend the style (not perfect, though). Then I opened it in GIMP, deleted every part of the shell, added a city, and inpainted again to match the style. Then another batch of img2img to blend the overall image a bit more, and finally the upscaling.
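The post doesn't name the UI, but as a rough illustration of that blend-in step, here is the equivalent call with diffusers' inpainting pipeline, assuming you've exported the composited image and a mask of the pasted region (file names are hypothetical):

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("composite.png").convert("RGB").resize((512, 512))  # background + pasted turtle
    mask = Image.open("turtle_mask.png").convert("L").resize((512, 512))  # white = region to repaint

    # Repaint only the masked (pasted) region so it picks up the background's style.
    blended = pipe(
        prompt="a turtle, matching the painterly style of the scene",
        image=init,
        mask_image=mask,
        strength=0.75,  # lower = stay closer to the pasted pixels
    ).images[0]
    blended.save("blended.png")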
Testing a new style and also Tiled Diffusion, which supposedly allows you to tell the model where to put different things (in this case, a treehouse on the right). It's a bit finicky, though.
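Under the hood, Tiled Diffusion's region control builds on the MultiDiffusion idea: run the denoiser separately per region and average the overlapping predictions, weighted by each region's mask. A toy numpy sketch of just that fusion step, not the extension's actual code; predict stands in for a real denoiser call:

    import numpy as np

    H, W = 64, 64

    def predict(latent, region_id):
        # stand-in for a real UNet call conditioned on that region's prompt
        return np.full_like(latent, float(region_id))

    latent = np.zeros((H, W))
    # region masks: e.g. "treehouse on the right" = mask covering the right half
    masks = {1: np.zeros((H, W)), 2: np.zeros((H, W))}
    masks[1][:, : W // 2] = 1.0  # left region, prompt A
    masks[2][:, W // 2 :] = 1.0  # right region, prompt B (the treehouse)

    # MultiDiffusion-style fusion: mask-weighted average of per-region predictions
    num = sum(m * predict(latent, rid) for rid, m in masks.items())
    den = sum(m for m in masks.values()) + 1e-8
    fused = num / den
    print(fused[0, 0], fused[0, -1])  # left follows region 1, right follows region 2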
Been playing with #stablediffusion with #controlnet lately. It's surprisingly addictive... I've been leaving the seed on -1 and getting random new images in the same pose. Very entertaining.
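For reference, the same "fixed pose, random everything else" setup sketched in diffusers: a ControlNet conditioned on an OpenPose image keeps the pose, while leaving the generator unseeded (the equivalent of seed -1) gives a fresh image each run. The model IDs are commonly used ones; the pose image and prompt are hypothetical:

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from PIL import Image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    pose = Image.open("pose.png")  # hypothetical OpenPose skeleton image

    # No fixed generator/seed: each call draws a new random image in the same pose.
    for i in range(4):
        image = pipe("a knight in a forest", image=pose, num_inference_steps=30).images[0]
        image.save(f"knight_{i}.png")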