Hey folks, I’ve been training LoRAs for a while now, and I have some scripts I really like that I’ve been working with. However, I realized I haven’t been keeping up lately, so: is SDXL still the best for LoRAs? By that I mean that, previously, SD 1.5 and base SDXL gave me the most accurate quality I’ve gotten....
This stuff moves so fast I really can’t keep up, and a lot of the research posted here goes a bit over my head. I’m looking for something that doesn’t seem too far-fetched given things like CLIPSeg. Is there some tool or library out there that will accept an image and a prompt and then generate a mask within the...
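For what it’s worth, CLIPSeg itself is usable directly through Hugging Face `transformers` for exactly this (image + text prompt → mask). A minimal sketch, assuming the `CIDAS/clipseg-rd64-refined` checkpoint; the class names follow the transformers docs, so verify them against your installed version:

```python
import numpy as np

def logits_to_mask(logits, threshold=0.4):
    """Sigmoid + threshold a CLIPSeg logit map into a binary 0/255 mask.

    The 0.4 threshold is an assumption; tune it per prompt."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=np.float32)))
    return (probs > threshold).astype(np.uint8) * 255

def segment(image, prompt):
    """Return a low-resolution binary mask for `prompt` over `image` (a PIL image)."""
    from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
    processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
    model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
    inputs = processor(text=[prompt], images=[image], return_tensors="pt")
    logits = model(**inputs).logits  # low-res logit map (~352x352); upscale to image size afterwards
    return logits_to_mask(logits.detach().numpy())
```

You’d still need to resize the mask back up to the source resolution before feeding it to inpainting, but that’s the whole pipeline in a few lines.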
Tonight I tested out SD3 via the API. I sadly do not have as much time as with my Stable Cascade review, so there will only be four images for most tests (instead of 9), and no SD2 controls....
I’ve been having a go at using Stable Diffusion through Easy Diffusion. I made a PNG with alpha for img2img, but the transparency seems to be getting replaced with black, ruining the image. I expected the transparency to be replaced with noise. Are there any good fixes/workarounds?...
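One common workaround, if the UI won’t do it for you, is to pre-fill the transparent pixels with random noise yourself before img2img. A minimal PIL/NumPy sketch (the function name and the seed parameter are just for illustration):

```python
import numpy as np
from PIL import Image

def fill_alpha_with_noise(img, seed=0):
    """Replace fully transparent pixels with random RGB noise so img2img
    treats them as free space rather than a solid black fill."""
    arr = np.array(img.convert("RGBA"))
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 256, size=arr.shape[:2] + (3,), dtype=np.uint8)
    mask = arr[..., 3] == 0          # fully transparent pixels only
    arr[..., :3][mask] = noise[mask]  # write noise into the RGB channels there
    arr[..., 3] = 255                 # flatten to fully opaque
    return Image.fromarray(arr).convert("RGB")
```

Save the result as a normal opaque PNG and feed that to img2img; opaque regions are untouched.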
I find ControlNet will sometimes not work at all with XL. Sometimes it will, but other times the image just isn’t affected by it at all. I don’t know why. Is this me, or a bigger problem with ControlNet?
Fairly new to Stable Diffusion (I’ve messed around with it on and off over the last year or so); what would be the best way to output at the above resolution? It’s for a desktop background that will stretch over multiple monitors (two 1280x1024 monitors and one 2560x1080 monitor; I plan to split the image in three and just crop off the...
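Assuming a side-by-side layout, the split-and-crop step itself is straightforward with PIL: render (or upscale) to the combined width at the tallest monitor’s height, then slice per monitor and center-crop the extra height off the shorter ones. The monitor order and the vertical centering below are assumptions:

```python
from PIL import Image

# Assumed layout: two 1280x1024 panels, then one 2560x1080 panel, left to right.
MONITORS = [(1280, 1024), (1280, 1024), (2560, 1080)]
CANVAS = (sum(w for w, _ in MONITORS), max(h for _, h in MONITORS))  # (5120, 1080)

def split_wallpaper(img):
    """Crop a full-width render into one slice per monitor,
    trimming the extra height on the shorter monitors."""
    if img.size != CANVAS:
        img = img.resize(CANVAS)
    slices, x = [], 0
    for w, h in MONITORS:
        top = (CANVAS[1] - h) // 2  # center-crop vertically
        slices.append(img.crop((x, top, x + w, top + h)))
        x += w
    return slices
```

5120x1080 is far beyond what SD generates natively, so in practice you’d generate at a wide aspect ratio and upscale to the canvas size before splitting.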
Hello, first time reaching out on this, but I have been struggling to create a good image that includes Patamon from Digimon with Gladiolus from Final Fantasy 15. I found a decent LoRA for Gladiolus, but Patamon has been a bear of a task. Attempting a ControlNet-guided generation has failed in a number of ways. Here is...
So I have Automatic1111 running decently. Can anyone recommend good tutorials on different models, setups, and workflows? There is so much spammy shit out there it’s just making me frustrated. Not a total noob at this, but I’d like to build some good fundamental practices. Not for work, I just want to make cool shit.
Is there a way to do Automatic1111’s “inpaint at full resolution, padding” in ComfyUI? I’m under the impression that the Detailer in ImpactPack does this, but I can’t get it to generate anything at all because I don’t really understand it, and I can’t even tell if the errors I’m getting are due to my mistakes, or a bug,...
I tried doing this using Automatic1111 and some of my favourite custom checkpoints from Civitai. The result was garbage. I can train beautiful embeddings using the base SD 1.5 model, but not using any of my favourite checkpoints.
Hey all. I realize this is kinda tangential to SD, but it seems aipromtguide is stuck in a loading state on my desktop. The menu loads but the site data doesn’t. This problem started after I reinstalled Windows 10. I’ve tried both Brave and Edge, and the problem persists. Same thing with the VPN either on or off....