In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied an entire City, and the map of the Empire, an entire Province. In time, those Unconscionable Maps no longer satisfied, and the Colleges of Cartographers erected a Map of the Empire that had the size of the Empire and coincided point for point with it...
“A fox crossing a residential street. The fox has a human face. There are autumn leaves on the ground, terraced houses in the background, and a slight mist.”
It's just ignoring most of my prompt (as well as really struggling with what foxes look like). I've tried many iterations and variations, they're all like this.
God bless #stablediffusion. Look at this potential rework of the Xbox interface. It's beautiful. Someone get this to Microsoft. @shanselman put this in front of your work friends. Someone get Steve Ballmer on the line. This is important. Some art designer please fine-tune this with some real icons and fonts and stuff. This could be your magnum opus.
I thought I would share my experience this evening with the group here, seeing that I'm still excited as hell about getting this hodgepodge to work at all...
@mokazemi @avds
The negative prompt is exactly the opposite of the prompt: the things you don't want. Say you want to generate a realistic photo of a person. Your positive prompt is obvious. What don't you want? A cartoon or anime image, missing body parts, extra fingers, deformities, ugliness, an extra leg, and so on.
Keep in mind that "AI" is extremely dumb. Just as you have to tell it what to do, you also have to explain what not to do.
Prompts (positive and negative) don't need to be complete sentences either. Keywords work fine, and sometimes even better than sentences.
For example:
photo of woman, green dress, sitting on bench
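The keyword-style positive/negative prompting described above can be sketched with Hugging Face's diffusers library (an assumption on my part; the posts in this thread use other front ends such as InvokeAI). The model ID and negative-prompt keywords here are illustrative, not from the original posts.

```python
# Sketch of txt2img with a positive and a negative prompt, assuming the
# diffusers library. The import is deferred into the function so the prompt
# strings can be reused even where diffusers is not installed.

# Keyword prompts, mirroring the example above -- no full sentences needed.
PROMPT = "photo of woman, green dress, sitting on bench"
NEGATIVE = "cartoon, anime, deformed, extra fingers, extra limbs, ugly, cropped"


def generate(prompt: str, negative_prompt: str, out_path: str = "out.png") -> None:
    import torch
    from diffusers import StableDiffusionPipeline

    # Assumed model id; any SD 1.5-class checkpoint should behave similarly.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float32,  # float32 keeps this runnable on CPU
    )
    image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]
    image.save(out_path)


# generate(PROMPT, NEGATIVE)  # uncomment to run; downloads model weights
```

The negative prompt is passed as its own argument rather than folded into the main prompt, which matches how most Stable Diffusion front ends expose it.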
@danialbehzadi @mokazemi @avds
ChatGPT isn't much use for writing Stable Diffusion prompts. I don't think it even knows what it's supposed to do.
That's because it all depends on what you want, and whatever you'd tell ChatGPT, you can just tell Stable Diffusion directly. Especially since there's no need to construct sentences at all; you only need keywords.
You're the one who knows, for instance, that you don't want your image to be deformed or cropped, to be over- or under-exposed, to have three hands, and so on.
I gave it "Black Circle" by Malevitch as an input.
Prompt: a black circle with a white background, by Kazimir Malevich, traditional japanese concept art, light-brown skin, moor, inspired by Shōzō Shimamoto, intrincate, by Ram Chandra Shukla, heavy pigment, nothingness, subject in the centre, gazeta, 000
"Tiny Dream is a header-only, dependency-free #StableDiffusion 2 implementation written in C++ from scratch, with a primary focus on CPU performance and memory footprint. Tiny Dream runs reasonably fast on average consumer hardware, requires only 5.5 GB of RAM to execute, does not require an Nvidia GPU, and is designed to be embedded in larger codebases (host programs) with an easy-to-use C++ API. The possibilities are literally endless (or at least extend to the boundaries of Stable Diffusion's latent manifold)."
Anyone have any good tricks for getting AI image generation models like #dalle or #stablediffusion to produce animals or people with three eyes? I was hoping to get a lemur with a typical “mind’s eye” third eye, but all the models seem to ignore the third eye condition no matter how frequently I specify it.
@adamchainz I think if I weren’t concerned with licenses I could just google image something up pretty easily, but I’m trying to keep all the images CC-0 or CC-BY so that when the deck is done I can open source it. I am considering trying to go on fiverr or something to outsource the illustrations, because they’re really quite time consuming even using generative AI, but unless I can find someone who will churn out a ton of quick illustrations or AI-generated images for pretty cheap, I think getting a hundred example sentences illustrated might be outside my budget for the project.
@pganssle Ah wow, awesome use case. My first job out of uni was at Memrise, which had the concept of “mems” - user generated reminder images based on some kind of association. Love to see the idea still being used. I’m a big Anki user too 😁
And yeah, AI generated art isn’t exactly fast when you have some idea…
@shepgo as expected, as I was just trying out random ideas. I particularly love the background which looks more like GZDoom skybox than actual distant mountains.
@WeavingWithAI lol, that's pretty funny. Whenever I download a new model/checkpoint I make a couple txt2img images without a prompt. A surprising number of models are "naughty by default". Hmm, maybe it's NOT surprising.
Yup, the pommel is too long unless the sword is like 6 feet or so, but she's actually holding it! It's really hard to get any image generation AI to depict someone holding pretty much anything. If you ask for a sword, you usually get lots of weird pseudo-knives merged with scabbards.
@echevarian420 It's a good direction to start with - the bold colors and strong shapes align well with the core concept. Look forward to seeing where you go with it!
@ronald Lykons' models DreamShaper and AbsoluteReality are both really good, as are Zovya's RPG Artist Tools and Photoreal. RevAnimated is another really good one. Grab any of the highest-rated models on Civitai and you can't go wrong. https://civitai.com/
If anyone wants to get upset about representation, I'm literally just toggling through every ethnicity and gender except (white girl | asian girl) because there's already enough #aiartwork of them out there. It's more than just an anime generator for socially impaired nerds.
This model is capable of creating some extremely elaborate hairstyles and jewelry. A lot of these end up having cybernetic components, feathers, or what look like fully functioning bird wings.
Tried to get #stablediffusion to solve the P=NP problem. Can some #mathematicians help explain these results? I wanted to be fair, so I asked it in both orders.
Prompt: "pure red background if p=np, but a pure blue background if p!=np"
@blake@GIMP@krita_development It's a VERY understandable concern, but I believe that if we don't start building and embracing these tools to keep up we will be left completely behind. It's more than just generating art from scratch, it's removing objects from backgrounds, it's extending the image's frame wider than you shot, it's adding objects quickly and easily that you would have just gotten from a stock site, etc. etc.
Experimenting for the first time with self-hosted AI-generated images this evening on my Linux laptop to see what it can do: default #InvokeAI install using #StableDiffusion 1.5. Simple to set up if you're familiar with #Python venv and pip.
Using just the CPU and giving it the prompt "bluebird", this is what it produced in 17 minutes. Must admit that I'm pretty impressed.
Watch Stable Diffusion and Blender work like magic on the Thelio Major, featuring AMD's Zen 4 Threadripper! Here's a closeup of our demo for #CES2024 #stablediffusion #blender #Threadripper