StableDiffusion

wolfshadowheart, in Just got an XT 7900 XTX running Stable Diffusion on Debian Trixie

Thank you for sharing! It doesn't fit my use case, but I'm glad to see awesome resources out there for others!

nottheengineer, in I think she knows how to fly this thing - aZovyaPhotoreal_v2

Does that huge negative prompt actually work? I found that I usually get better results if I keep the negative prompt shorter and turn up CFG scale to 8 or 9.

I also found that a high CFG scale is good at showing you which prompts work together and which ones don’t. If you crank CFG scale to 15-20 (which will naturally produce some abominations) but find that it still ignores part of your prompt, change that part specifically because all it does is confuse the model.
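For reference on why cranking CFG scale exaggerates both prompt adherence and artifacts: classifier-free guidance extrapolates the denoising prediction away from the unconditional output, in the direction of the prompt-conditioned one. A minimal numpy sketch of that arithmetic (illustrative only; real pipelines apply this per denoising step on latent tensors):

```python
import numpy as np

def apply_cfg(uncond_pred, cond_pred, cfg_scale):
    # Classifier-free guidance: start from the unconditional prediction and
    # push it toward the prompt-conditioned one, scaled by cfg_scale.
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

uncond = np.array([0.0, 0.0])
cond = np.array([1.0, -1.0])

# cfg_scale=1 reproduces the conditional prediction exactly;
# higher values exaggerate the prompt's influence, which is why 15-20
# follows the prompt hard but tends to produce abominations.
print(apply_cfg(uncond, cond, 1.0))  # [ 1. -1.]
print(apply_cfg(uncond, cond, 8.0))  # [ 8. -8.]
```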

mack123,

It is an interesting question. I am not sure. The style is from a YouTube channel I followed to learn about automatic1111.

I included the expanded version of the style for duplication purposes. I get better results with it on, and the SFW style certainly keeps the model behaved.

I have also had very good results without it. It seems that just casting a wide net with a few batches will yield at least one or two useful images, regardless of the styles.

I am still learning... Very early days.

mack123, in Turning Any Stable Diffusion Model Into an Inpainting Model

Useful to know. Thanks for sharing

okamiueru, in Stable Diffusion on AMD 6800XT - Ubuntu 22.04 - Experience so far and just how much faster it is than machineML on Windows

I’ve been running SD on an AMD GPU and Linux since more or less the beginning. It’s been smooth sailing all the way. Not nearly as fast as some equally expensive RTX cards, but it is what it is.

mack123,

Awesome. I am still finding my way and am happy if I simply don't crash. I don't have a frame of reference to compare with an Nvidia card, but it does seem like we have a little more work to do in getting things smooth on AMD cards. I can't say that my speed is terrible; most renders finish in reasonable time. I am simply amazed that we can do this on consumer-grade hardware.

okamiueru, (edited )

Indeed. I’m in complete awe of this technology. It’s an amazing pastime that tickles the creative side. As for getting an idea of how different cards and systems compare, you can check out vladmandic.github.io/…/benchmark.html

I also have a 6800 XT, and the performance on that particular benchmark is around 9 it/s. Something like this looks to be a rough indication:

AMD Cards:

<pre style="background-color:#ffffff;">
<span style="color:#323232;">- 20 it/s: 7900 XTX
</span><span style="color:#323232;">- 10 it/s: 6900 XT
</span><span style="color:#323232;">- 9 it/s: 6800 XT
</span><span style="color:#323232;">- 7 it/s: 6700 XT
</span><span style="color:#323232;">- 2 it/s: RX Vega
</span></pre>

NVIDIA Cards:

<pre style="background-color:#ffffff;">
<span style="color:#323232;">- 50 it/s: RTX 4090
</span><span style="color:#323232;">- 25 it/s: RTX 4080
</span><span style="color:#323232;">- 22 it/s: RTX 3080 Ti
</span><span style="color:#323232;">- 11 it/s: RTX 4060 Ti
</span></pre>
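As a rough sanity check on what those numbers mean in practice, it/s divides into wall-clock time per image. A sketch assuming 20 sampling steps (a common default; hires-fix, batching, and step count all change this):

```python
def seconds_per_image(it_per_s: float, steps: int = 20) -> float:
    """Approximate wall-clock seconds to sample one image."""
    return steps / it_per_s

# e.g. a 6800 XT at ~9 it/s takes roughly 2.2 s for 20 steps,
# while an RTX 4090 at ~50 it/s takes roughly 0.4 s.
print(round(seconds_per_image(9), 1))   # 2.2
print(round(seconds_per_image(50), 1))  # 0.4
```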
mack123,

I ran the tests out of interest. I am leaving some performance on the table due to my launch options, but I need those to avoid too many out-of-memory and other errors.

Normal Test:

  • 5.24 / 6.16 / 7.06
  • app:stable-diffusion-webui updated:2023-06-27 hash:394ffa7b url:https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/master
  • arch:x86_64 cpu:x86_64 system:Linux release:5.19.0-46-generic python:3.10.6
  • torch:2.0.1+rocm5.4.2 autocast half xformers: diffusers: transformers:4.25.1
  • device:AMD Radeon RX 6800 XT (1) hip:5.4.22803-474e8620 16GB
  • sub-quadratic medvram
  • v1-5-pruned-emaonly.safetensors [6ce0161689]

Extensive Test:

  • 5.36 / 6.16 / 7.08 / 5.38 / 5.52
  • app:stable-diffusion-webui updated:2023-06-27 hash:394ffa7b url:https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/master
  • arch:x86_64 cpu:x86_64 system:Linux release:5.19.0-46-generic python:3.10.6
  • torch:2.0.1+rocm5.4.2 autocast half xformers: diffusers: transformers:4.25.1
  • device:AMD Radeon RX 6800 XT (1) hip:5.4.22803-474e8620 16GB
  • sub-quadratic medvram
  • v1-5-pruned-emaonly.safetensors [6ce0161689]
mack123,

I've included my current numbers; any optimisation advice would be much appreciated 😉

okamiueru,

I think those numbers are roughly the same I get. It varies a bit from time to time. I also wouldn’t know how to improve it, to be honest.

mack123, (edited )

I managed to find an extra iteration or two without sacrificing too much stability.

6.06 / 7.59 / 9.11

Running with Doggettx selected as the cross-attention optimiser in the automatic1111 settings.

I installed Google perftools as suggested in this thread:

sudo apt install libgoogle-perftools-dev

And then added the following memory management options, as suggested in this thread, by exporting:

export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

I also made the following changes to my webui-user.sh. I uncommented the command line options as follows:

export COMMANDLINE_ARGS="--medvram --upcast-sampling"

And added the following lines to the end of the file:

export LD_PRELOAD=libtcmalloc.so
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
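Putting the pieces together, a commented sketch of the relevant webui-user.sh fragment (the flags are AUTOMATIC1111 webui options; the threshold values are the ones from this thread, not universal tuning advice):

```shell
#!/bin/bash
# Fragment of webui-user.sh for AUTOMATIC1111's stable-diffusion-webui.

# --medvram: shuffle model parts between VRAM and system RAM, trading some
#            speed for fewer out-of-memory errors on 16 GB cards.
# --upcast-sampling: run sampling at higher precision where fp16 misbehaves,
#                    which is often needed on AMD/ROCm.
export COMMANDLINE_ARGS="--medvram --upcast-sampling"

# Preload Google's tcmalloc (from libgoogle-perftools-dev) in place of
# glibc malloc, which can reduce CPU-side allocation overhead.
export LD_PRELOAD=libtcmalloc.so

# PyTorch caching-allocator tuning (the ROCm build reuses the CUDA knobs):
#   garbage_collection_threshold: start reclaiming cached blocks once this
#     fraction of memory is in use, instead of waiting for a hard OOM.
#   max_split_size_mb: don't split cached blocks larger than this, which
#     limits fragmentation at the cost of some block reuse.
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
```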

okamiueru,

Those are some great notes, thanks for sharing.

mack123,

No problem; back up and be careful. I think AMD still has a lot that can be done in the ROCm drivers themselves, so there should be gains left to make. Nvidia is not helping with their pricing either, which should push more users to AMD and, hopefully, bring better support for us. I am pleasantly surprised by what the card can do. I got it for gaming at 1440p, where it was the best bang for the buck; the AI stuff is a cool bonus.

Cyzor, in [SDXL] Sometimes I eat things that aren't food.

Has the half-crazed, surreal look of The Far Side comic by Gary Larson.

kindenough, in Stable Diffusion on AMD 6800XT - Ubuntu 22.04 - Experience so far and just how much faster it is than machineML on Windows

I am on Windows, automatic1111 with DirectML, and rendering is pretty fast: 7700X, 6750 XT and 32 GB at 4800 MHz, plus a couple of M.2 drives. No xformers though, and some problems when upscaling in txt2img, but it renders prompts with default settings in 10 to 15 seconds. Fast enough for me. AMD has updated their Adrenalin drivers lately to have better DirectML performance.

Some things can take some time or aren't supported on AMD, but it's surely faster than my GTX 1070 and 1080 rigs, which performed adequately, except with training.

mack123,

For sure. It works, especially if you use Shark with the experimental driver, but the speed difference was an order of magnitude for the ROCm build on Linux. I am already needing more card though; the 6800 XT's 16 GB of VRAM is not enough, and I am running on medvram settings. I hear ROCm support for Windows is coming soon, so that will be interesting as well. There were some rumours earlier this year.

Voyajer, in QR Code Scene for kbin.social

Nice, my phone picked it up immediately.

AJYoung, in Stable Diffusion on AMD 6800XT - Ubuntu 22.04 - Experience so far and just how much faster it is than machineML on Windows

What’s the ML version? I’d love to learn more!

mack123,

I was running this version: DirectML

It is dog slow compared to running under Linux with ROCm, but she runs ;-)

ToKrCZ, in Stable Diffusion on AMD 6800XT - Ubuntu 22.04 - Experience so far and just how much faster it is than machineML on Windows

Yes, Linux rocks. I will be playing with my 6800 XT in Manjaro Linux over the weekend, I am mostly interested in running local LLMs via ExLLaMa.

mack123,

I am working my way there. I am interested in the gaming possibilities, NPC dialogue and so on, but I wanted to get the environment working first. I found more guides for stable diffusion; now I can venture deeper knowing that ROCm is working.

DreamyDolphin, in Girl's Night Out

Stable Diffusion using majicMIX v6

Aesthesiaphilia, in Yezhov's Revenge
babelspace, in Trio of Gamers

Nice. I like the combination of a more illustrative style with a scene you might see in everyday life.

chamim, in Head of StabilityAI counting down to 2 on Twitter

Sign me up for the tea-serving drone 😸

babelspace, in My first try with stable diffusion. I am not disappointed!

Dreamshaper is a nice model. Have you been using other AI generators previously?

sp3ctre,

Not yet, but I probably will in the future. There is so much going on at the moment in the field of deep learning. Currently I'm trying to focus on stable diffusion, since it's open source and gives very impressive results.

fupuyifi, in Time Traveler - So many LORA, LYCORIS, embeddings, etc.

Nice one, but check out the # of fingers on the left hand. Then again, future humans might have augmented themselves to have extra digits :)
