Stable Diffusion merging LoRA models on Google colab

#stablediffusion #stablediffusiontutorial #stablediffusionlora

☕️ Please consider supporting me on Patreon 🍻

👍 Google Colab notebook 👨‍💻

👩‍🦰 LoRA model 👇🏻

Welcome, everyone!

Learning how to merge LoRA models can be challenging, especially when using Google Colab.

But don't worry: if you're interested in Stable Diffusion and LoRA training, we've got you covered.

In this YouTube video, we'll show you how to merge LoRA models on Google Colab in just a few simple steps.

So, if you want to sharpen your LoRA training and Stable Diffusion skills, stay tuned and let's get started!
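Under the hood, merging two LoRA files boils down to a weighted sum of matching weight tensors. Here is a minimal, illustrative sketch of that idea; plain Python lists stand in for the torch tensors that a real merge script would load from .safetensors files, and the key name is hypothetical:

```python
# Minimal sketch of the weighted-merge idea behind LoRA merging.
# Real merge scripts operate on torch tensors loaded from .safetensors
# files; plain lists of floats stand in for tensors here.

def merge_lora_weights(lora_a, lora_b, ratio=0.5):
    """Blend two LoRA state dicts key by key: ratio*A + (1-ratio)*B."""
    if lora_a.keys() != lora_b.keys():
        raise ValueError("LoRA models have different keys and cannot be merged")
    merged = {}
    for key in lora_a:
        wa, wb = lora_a[key], lora_b[key]
        if len(wa) != len(wb):
            raise ValueError(f"shape mismatch at {key}: {len(wa)} vs {len(wb)}")
        merged[key] = [ratio * a + (1 - ratio) * b for a, b in zip(wa, wb)]
    return merged

a = {"lora_down.weight": [1.0, 2.0]}
b = {"lora_down.weight": [3.0, 4.0]}
print(merge_lora_weights(a, b, 0.5))  # {'lora_down.weight': [2.0, 3.0]}
```

A ratio of 0.5 weights both models equally; pushing it toward 1.0 favors the first model.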
Comments

how to train/insert/inject own photo of clothes, costume, dress

bluebeam

Hi! Can I keep each model's instance prompt when merging models? For example, I merge a clothes model and a pants model, but when generating with the merged model I only want to generate pants, not clothes. How can I do this? Thanks!

oxdulpc

Thanks for the notebook, mate, amazing 🥂 subscribed now!

Maybe next, converting a checkpoint into a LoRA file? I tried some scripts, but none of them worked properly on Colab, or I just don't know how to use them 😅

IbnuMuzaeni

Is there any way we can do the installation and store the model in Google Drive, and run Colab Pro from there?
Because having to reinstall every time seems painful...
Thanks for your videos!

graphiydesign
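A common pattern is to mount Google Drive in the Colab session and copy the merged model into a persistent folder there; the webui install itself usually still gets reinstalled per session unless the notebook clones it to Drive. An illustrative sketch (the file name and folder layout are assumptions; /tmp stands in for the Colab mount point /content/gdrive so the snippet runs anywhere):

```shell
# Illustrative: copy a merged LoRA into a persistent Drive folder so it
# survives Colab runtime resets. On Colab, DRIVE would be
# /content/gdrive/MyDrive (mounted by the notebook); /tmp stands in here.
DRIVE=/tmp/gdrive/MyDrive
mkdir -p "$DRIVE/sd/models/Lora"
touch /tmp/my-own-lora.safetensors            # stand-in for the merged model
cp /tmp/my-own-lora.safetensors "$DRIVE/sd/models/Lora/"
ls "$DRIVE/sd/models/Lora"                    # my-own-lora.safetensors
```

On the next session, the webui's Lora folder can be pointed (or symlinked) at that Drive directory instead of re-downloading the model.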

Hi, can you do a tutorial on how to train lora model on colab? Thanks

abs

What does it mean if I get this error during merging?

AssertionError: weights shape mismatch merging v1 and v2, different dims?

Ukonironnn
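This AssertionError typically means the two LoRAs were trained with different network dims (ranks), so their lora_down/lora_up matrices have incompatible shapes and can't be summed directly. A minimal sketch of that check, with shapes modeled as plain tuples and a hypothetical key name (a real script would read the shapes from the .safetensors files, e.g. via the safetensors library):

```python
# Sketch of the check behind "weights shape mismatch ... different dims?".
# The LoRA network dim (rank) determines the shapes of the lora_down/
# lora_up matrices; two LoRAs trained with different dims cannot be
# merged element-wise.

def check_mergeable(shapes_a, shapes_b):
    for key, shape_a in shapes_a.items():
        shape_b = shapes_b.get(key)
        assert shape_b == shape_a, (
            f"weights shape mismatch at {key}: {shape_a} vs {shape_b}, different dims?"
        )

rank8  = {"unet.lora_down.weight": (8, 320)}    # trained with network dim 8
rank16 = {"unet.lora_down.weight": (16, 320)}   # trained with network dim 16

check_mergeable(rank8, rank8)       # same dims: passes
try:
    check_mergeable(rank8, rank16)  # different network dims: fails
except AssertionError as e:
    print("cannot merge:", e)
```

The practical fix is to merge LoRAs trained with the same network dim, or resize one of them first with a rank-resizing script.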

Uploaded two LoRA safetensors files when editing the cells and got this error message - any idea how to fix it?
RuntimeError: self.size(-1) must be divisible by 2 to view Byte as Half (different element sizes), but got 5017

jamesstewart
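An odd byte count like 5017 cannot be reinterpreted as 16-bit half-precision values, which usually means the file is truncated or is actually an HTML error page saved under a .safetensors name. An illustrative sanity check, relying on the safetensors layout of an 8-byte little-endian header length followed by a JSON header:

```python
import json
import struct

# Heuristic sanity check for a .safetensors download. The "view Byte as
# Half" RuntimeError with an odd byte count usually means the file is
# truncated or is an HTML error page, not a real model file.

def looks_like_safetensors(data: bytes) -> bool:
    if len(data) < 8:
        return False
    (header_len,) = struct.unpack("<Q", data[:8])  # little-endian u64
    if 8 + header_len > len(data):
        return False  # truncated download
    try:
        json.loads(data[8:8 + header_len])
        return True
    except ValueError:
        return False  # e.g. an HTML error page saved as .safetensors

good = struct.pack("<Q", 2) + b"{}"                 # minimal valid layout
print(looks_like_safetensors(good))                 # True
print(looks_like_safetensors(b"<html>404</html>"))  # False
```

If the check fails on a downloaded file, re-downloading it (or using the model host's direct download link) is usually the fix.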

can you do lora traning with colab please

walidflux

AssertionError: weights shape mismatch merging v1 and v2, different dims?

cwhkkol

Hi, I managed to create the new model (my-own-lora.safetensors), with the name you used, but when I enter the prompt in Stable Diffusion I get an error:
✔ Connected
Startup time: 161.6s (import torch: 8.1s, import gradio: 1.3s, import ldm: 3.0s, other imports: 56.2s, list SD models: 1.1s, setup codeformer: 13.9s, list builtin upscalers: 1.5s, load scripts: 23.9s, load SD checkpoint: 47.3s, create ui: 1.7s, gradio launch: 3.6s).
0% 0/20 [00:06<?, ?it/s]
Error completing request
Arguments: ('task(9zkr3s7ylwssjw0)', 'Woman', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, object at 0x7ff82c6a5240>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 503, in process_images
res = process_images_inner(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 653, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 869, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning,
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in launch_sampling
return func()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 152, in forward
devices.test_for_nans(x_out, "unet")
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/devices.py", line 152, in test_for_nans
raise NansException(message)
A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
To clarify, I'm running Stable Diffusion on Google Colab.

rcgr
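As the NansException message itself suggests, the usual mitigations are enabling "Upcast cross attention layer to float32" in Settings > Stable Diffusion, or launching the webui with the --no-half argument, which runs the model in float32 at the cost of speed and VRAM. In webui-user.sh or the Colab launch cell, that would look like this (illustrative config fragment):

```shell
# Launch flags for stable-diffusion-webui to avoid half-precision NaNs.
# --no-half runs the model in float32 instead of float16.
export COMMANDLINE_ARGS="--no-half"

# Less safe alternative mentioned in the error message: skip the check.
# export COMMANDLINE_ARGS="--disable-nan-check"
```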

Here are some of my merged results using the notebook you shared - it took me about 30 minutes to get these results and tests. Thank you very much!!

boricuapabaiartist