AMD ROCm on WINDOWS for STABLE DIFFUSION released SOON? 7x faster STABLE DIFFUSION on AMD/WINDOWS.

What's the status of AMD ROCm on Windows - especially regarding Stable Diffusion?
Is there a fast alternative? We speed up Stable Diffusion with Microsoft Olive / ONNX. What about SDXL and ComfyUI?

Announcement ROCm on Windows:

PyTorch status and GitHub Windows issue:

Status ROCm components on Windows:

Windows issues and status MIOpen:

--

GIT:

Miniconda:

Stable Diffusion web UI with DirectML and MS Olive / ONNX:

SDXL model:

--

How to use Stable Diffusion XL locally with AMD ROCm. With AUTOMATIC1111 WebUI and ComfyUI on Linux:

How to create a bootable USB flash drive with Ubuntu Linux for GPT/UEFI.
Comments

I know of four people who have switched from AMD to Nvidia cards since this A.I. craze started this year.

AMD really needs to sort this out, because it's not doing them any favours when a lot of A.I. apps don't work with AMD hardware, or you have to jump through hoops to get them working.

The sad thing about all this is that A.I. workloads actually perform really well on AMD cards, but it's such a pain getting them to work that many of us are switching to Nvidia. ROCm and the rest of the software stack needed for A.I. to just work are taking too long. I'm an AMD fanboy, and even I'm thinking of switching to Nvidia because of this long wait.

pauluk

I hope the ROCm team and PyTorch will make it happen. I don't like the idea of Radeon having to work through HIP/CUDA translation; that will only cause a performance penalty on any Radeon GPU.

duladrop

I have been watching this space for quite a while. It is disappointing to see such slow progress on the MIOpen port when they should have been preparing for this for a long time. I hope it speeds things up, but I am probably going to buy an Nvidia GPU in the future. Open-source image generation is getting quite good.

incription

How can I solve the "The system cannot find the path specified" error?

Sujal-owcj

I still prefer Linux + ROCm over ONNX/Olive. As you already know, DirectML memory management is really bad, and you need to optimize the checkpoint models (non-standard). I'm using an RX 6800 with ROCm 5.7 and got 13 it/s on Vlad's SD benchmark. Memory management is really good, seemingly better than Nvidia's I think, because at 512x512 it only uses ~2 GB of VRAM.

onigirimen

Thanks for the information.
I have a question: ROCm on Linux will still be faster than Olive/ONNX on Windows, right?

Vayrn

After installing and trying to use it, I noticed that it runs on the CPU and the GPU isn't used. What step did I do wrong?

flsdwjs

Hey! I wanted to learn Stable Diffusion, but I'm lost right now. I use an RX 6700 XT on Windows 10 and don't know which path to go with. Any suggestions? I don't want to switch my OS to Linux, but is there a way to use both OSes at the same time?

Boozeman

Thanks for a great video. I've been running ComfyUI with a 7900 XTX on Windows for a while now. It's slow, but it runs.

gilcd

It's kind of frustrating for me to be on an ONNX system... It seems like most of the text-to-image models aren't available, or are ports that are kind of slow, or need a lot of extra work to actually get running for a "beginner" like me.

parryhotter

Do extensions, LoRA, and ControlNet work well with ROCm on Linux? Should I buy an RX 7900 XTX or a 4070 Ti (maybe the Super with 16 GB) for SD? Both cost the same in my country.

wwk

Hello, and thank you for the video. Do I need to install Python manually before running the commands in the Anaconda prompt? I didn't install it, since it wasn't mentioned, and after starting webui.bat --init --recursive I get: Couldn't launch python

exit code: 9009

stderr:
"python" is not recognized as an internal or external command, operable program or batch file

sacredhero
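A likely fix for the error above, assuming the Miniconda setup from the video: exit code 9009 on Windows means the command was not found, i.e. python is not on the PATH. Creating and activating a conda environment that ships its own Python before launching webui.bat should resolve it (the environment name sd-webui is just an example):

```shell
# Exit code 9009 = "command not found" on Windows: python is not on the PATH.
# A conda environment bundles its own Python interpreter.
conda create -n sd-webui python=3.10 -y
conda activate sd-webui

# Confirm python resolves before launching the web UI again
python --version
```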

I am running ComfyUI on Windows with my AMD 6800. I got ComfyUI indirectly by installing Krita and krita-ai-diffusion; the installation (Krita + ComfyUI) was about 10 clicks, thanks to their brilliant scripting for the ComfyUI installation. No optimizations are available though (as far as I can tell); your prompt takes ~10 seconds to run for me.

VoiDukkha

Is it correct that ROCm can run CUDA code on AMD graphics cards in Linux environments?

Thank you.

Camilla_T
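On the question above: ROCm does not execute CUDA binaries directly. Instead, HIP provides a CUDA-like API, and CUDA source can be ported to it with ROCm's hipify tools and then compiled for AMD GPUs. A minimal sketch on Linux, assuming a ROCm install and an illustrative source file name:

```shell
# CUDA source is translated to HIP source, then compiled with hipcc
# for an AMD GPU (vector_add.cu is a hypothetical example file).
hipify-perl vector_add.cu > vector_add.hip.cpp
hipcc vector_add.hip.cpp -o vector_add
./vector_add
```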

I have ComfyUI running with the --directml argument, but it's very slow: it takes 90 seconds for a 1024x1024 image on an RX 7800 XT. Hopefully ROCm will support this GPU soon as well... maybe in version 6. I've subscribed and look forward to a tutorial for ComfyUI and AMD soon.

CoderTronics
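For reference, the DirectML path mentioned above: ComfyUI accepts a --directml flag when the torch-directml package is installed. A minimal launch sketch on Windows, assuming ComfyUI is already cloned and its other dependencies are installed:

```shell
# Run from inside the ComfyUI checkout;
# torch-directml provides the DirectML device for PyTorch on Windows.
pip install torch-directml
python main.py --directml
```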

Why don't you have the Karras variants among your sampling methods? How can we add them?

ehsankholghi

So it's strange that ComfyUI does animations but Stable Diffusion errors out... I tried to get it working in Ubuntu, but I just couldn't get it to work with the 6800 XT. It kept throwing errors saying the GPU was unsupported, so it had to run in CPU mode...

Somespecial

There is a DirectML SD on AMD with ONNX that also runs around 7 times faster.
It works, but the UI is far from up to date in terms of functionality.
There is a guide on the AMD website.

erikschiegg

I had it running properly a few days ago, getting a 512 px image in about 3 seconds on a 7900 XTX. Just today it went slow again. I reinstalled everything, did everything the same, and it still won't generate images fast; it takes about 3 minutes now... I've tried everything.

matthewagius

Hello. I see you're engaging with the comments; perhaps you could help me. To use ONNX/Olive I followed your steps exactly. Once I reach the "webui.bat --onnx --backend directml" step and run it, after a while it gives me the error "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check", and there is no way to continue from the same prompt. I can launch the web UI by adding the skip-torch-cuda-test argument, but then it doesn't have the ONNX/Olive tabs in the AUTOMATIC1111 web UI. If I try to launch it with the --onnx --backend directml arguments, it says those are invalid arguments. What could be the issue? My GPU is an RX 6900 XT, and I've tried the DirectML version of A1111 before, but since it was so slow and used up all the VRAM, I stopped.

KriegKadaver
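One plausible cause of the invalid-argument error above: the --onnx and --backend flags come from lshqqytiger's DirectML fork of the web UI, not from upstream AUTOMATIC1111, so they are only recognized in a checkout of that fork. A sketch, assuming that fork is the one used in the video:

```shell
# The --onnx / --backend flags exist only in the DirectML fork,
# not in the upstream AUTOMATIC1111 repository.
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
cd stable-diffusion-webui-directml
webui.bat --onnx --backend directml
```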