Faster Video Generation in A1111 using LCM LoRA | IP Adapter or Tile + Temporal Net Controlnet

Computer Specs: RTX 3070 8GB Laptop GPU, 16GB RAM, nothing else matters.
Contents include:
Sample results
Generation using LCM LoRA + Tile + Temporal + Soft Edge ControlNet
Generation without ControlNet using LCM LoRA
Generation using LCM LoRA + IP Adapter and other ControlNets
Notes about Davinci resolve

- In this video, we will use the LCM LoRA in Automatic1111 to generate videos 3 to 5 times faster, using the video-to-video method in img2img.
- We will see LCM without ControlNet, and some nice ControlNet combinations that may produce interesting results.
- While the LCM sampler is not yet implemented in Automatic1111, the LCM LoRA still produces good results with the Euler a sampler, for example, as we will see in this video.
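The video drives img2img through the web UI directly, but the same per-frame settings can also be scripted against A1111's built-in API (started with the `--api` flag, endpoint `/sdapi/v1/img2img`). A minimal sketch of building such a request body; the specific step count, CFG scale, and denoising strength below are illustrative LCM-style values, not the exact ones used in the video:

```python
import base64
import json

def build_img2img_payload(frame_png_bytes, prompt, steps=6, cfg_scale=1.5,
                          denoising_strength=0.4, sampler_name="Euler a"):
    """Build a request body for A1111's /sdapi/v1/img2img endpoint.

    Low step counts and a low CFG scale are what the LCM LoRA expects;
    the exact values here are illustrative, not taken from the video.
    """
    return {
        # init_images takes base64-encoded image data (one entry per image)
        "init_images": [base64.b64encode(frame_png_bytes).decode("ascii")],
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "denoising_strength": denoising_strength,
        "sampler_name": sampler_name,
    }

# One payload per extracted frame; POST each to /sdapi/v1/img2img.
payload = build_img2img_payload(b"\x89PNG...", "anime style portrait")
print(json.dumps(payload)[:60])
```

Looping this over a folder of extracted frames reproduces the batch img2img workflow shown in the video, one request per frame.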

🎯 Key Takeaways for quick navigation:
00:00:01 Sample videos and intro
The video discusses using LCM in Automatic1111 to generate videos 3 to 5 times faster, focusing on image-to-image video generation, which is simple and doesn't require extra extensions.
00:01:08 Getting frames using DaVinci Resolve
Demonstrates using DaVinci Resolve to extract video frames and prepare them for image-to-image generation.
00:02:40 LCM LoRA usage
Shows how to use LCM LoRA for image-to-image generation and adjust parameters like sampling steps, CFG scale, and ControlNets.
00:04:18 ControlNets (Tile, Temporal, Soft Edge)
Setting up Tile + TemporalNet and Soft Edge or OpenPose for enhanced image control.
00:06:51 Batch image processing
Covers generating frames, checking their quality, and using Topaz Photo Studio for image adjustments.
00:09:43 DaVinci Resolve composition tips
Discusses adjusting video speed, retime, and scaling settings in DaVinci Resolve to enhance the final video quality, and shows Optical Flow for frame interpolation.
00:14:19 IP Adapter usage
Mentions using IP Adapter for more style transfer and control in image-to-image generation.
00:17:15 Conclusion on LCM LoRA
LCM LoRA is recommended for faster video and image generation, but it may not work well with AnimateDiff and requires more experimentation in A1111.
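The 3-5x figure above follows mostly from the step reduction: an LCM LoRA needs roughly 4-8 sampling steps instead of a typical 20-30. A rough sketch of that arithmetic, assuming sampling time is approximately linear in step count plus a fixed per-image overhead; all timing numbers are illustrative, not measured from the video:

```python
def estimated_speedup(baseline_steps, lcm_steps, fixed_overhead_s=0.5,
                      time_per_step_s=1.0):
    """Rough per-frame speedup from dropping to LCM step counts.

    Assumes per-image time = fixed overhead (VAE decode, ControlNet
    preprocessing, I/O) + steps * time per step. The overhead and
    per-step time are illustrative placeholders, not measurements.
    """
    baseline = fixed_overhead_s + baseline_steps * time_per_step_s
    lcm = fixed_overhead_s + lcm_steps * time_per_step_s
    return baseline / lcm

# 25 steps down to 6 steps -> about 3.9x, in line with the 3-5x claim.
print(round(estimated_speedup(25, 6), 1))
```

The fixed overhead is why the speedup is "3 to 5 times" rather than a clean steps ratio: ControlNet preprocessing and decoding cost the same regardless of step count.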

Download LCM LoRAs:
SD 1.5 LoRA model

SDXL LoRA model

Download TemporalNet ControlNet models

Download IP Adapter

Now, unfortunately, LCM didn't work well with AnimateDiff in Automatic1111 in my tests; it possibly requires the LCM sampler and some updates, so I recommend sticking with ComfyUI for AnimateDiff. AnimateDiff also requires a better GPU when used with ControlNets, which is important if you plan on controlling your video output.

So LCM LoRAs allow us to generate videos and images faster in Stable Diffusion, which makes them worth trying and using.

Thanks to all the authors who have created these amazing videos and for their hard work.

Comments

titusfx

How could I miss this gem of a video for so long. Thank you so much for this mate💛🤝😍

dreamzdziner

Another great video from you! Thanks a lot for sharing this, great in-depth info!

razvanmatt

This is such an awesome tutorial. Just found your channel. Excited to binge watch all of your videos. Thank you for sharing!

ohheyvoid

This is so good. ai imaging is so fascinating. Thanks for showing us how it works.

Marcel

I was wondering how to make my videos smoother... and here is the method. You are the best.

aidgmt

Latest Animate Diff update added the LCM sampler.

RaysAiPixelClips

Wow! Incredible tutorial! So much care and precision. I’m sure this video took a while to make + running your experiments. Thank you!!

(Btw, how much VRAM do you have?)

FifthSparkGaming

super cool bro ! Thanks a lot !
please do one for comfyUI🙏

APOLOVAILS

You haven't mentioned the model. Also, what should we put in the VAE folder?

souravmandal

Hi. If you don't want to wait for the ControlNet models to be loaded and unloaded in Automatic1111, you can go to the settings and set the slider for the ControlNet model cache (I don't remember the exact name); then the models stay in memory all the time. It takes more memory, but generation is faster. Also, Optical Flow is available in Deforum, and you will need to insert the input video into ControlNet and into the "init" tab. TemporalNet 2 has also appeared, but in order to use it you need to configure something in Automatic1111.
Have a nice day

michail_

By the way, on my end it kept iterating on the same picture for the whole set of frames from the Resolve outputs... any idea where that comes from?

breathandrelax

Hi there, and thanks for the tut. Quick question: why does the output image look better in ComfyUI?

tyalcin

Tried with SD Forge, it works perfectly. Thanks man. Since you all have Python installed, you can use the "moviepy" module to extract the frames of your videos and also to rebuild the video from the generated images afterwards.

edit:
I wonder if there is a way to use it in txt2img, so we can use OpenPose rather than Soft Edge and have more freedom over what we want (like the environment).
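The commenter's moviepy suggestion works well; an alternative that avoids extra Python dependencies is building the equivalent ffmpeg commands directly. A sketch of both directions of the round trip (video to numbered frames, generated frames back to video); the file names and frame rate are examples, and each returned list is meant to be passed to `subprocess.run()`:

```python
from pathlib import Path

def extract_frames_cmd(video, out_dir, fps=None):
    """ffmpeg command to dump a video to numbered PNG frames.

    If fps is given, an fps filter resamples the frame rate,
    which keeps the frame count down for img2img batch runs.
    """
    cmd = ["ffmpeg", "-i", str(video)]
    if fps is not None:
        cmd += ["-vf", f"fps={fps}"]
    cmd += [str(Path(out_dir) / "frame_%04d.png")]
    return cmd

def assemble_video_cmd(frames_dir, out_video, fps=24):
    """ffmpeg command to rebuild a video from the generated frames."""
    return ["ffmpeg", "-framerate", str(fps),
            "-i", str(Path(frames_dir) / "frame_%04d.png"),
            "-c:v", "libx264", "-pix_fmt", "yuv420p", str(out_video)]

print(" ".join(extract_frames_cmd("clip.mp4", "frames", fps=12)))
```

Reassembling at the original frame rate (or retiming in Resolve, as the video shows) are interchangeable final steps.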

Chronos-Aeon

It's possible to have the LCM sampler in A1111 by adding a few lines of code in two of A1111's files.

breathandrelax

I'm not good with logic and prompts, but can you explain this exact A1111 method in ComfyUI? Thank you.

sigitpermana

Good video. I was just wondering what it would be like to make animations with the LCM LoRA. Do you know how an animation could be made with a specific face while preserving its hair, beard, eyebrows, lips, nose... would I have to make a LoRA (like in your other video with Elon) or could I do it with an image?

joelandresnavarro

Sir, please tell me how to create videos like bryguy.

fortniteitemshopk

I am trying to create LoRAs for characters and clothes separately. I have seen both of your videos on clothes and character LoRAs. Are there any sure-shot settings for creating character LoRAs that give the best accuracy in the result image? I need to automate the character LoRA process, where I would just select 5-6 images of the person and the rest can be automated.
The same goes for clothes LoRA training. Can you suggest something? Is it possible? I am training LoRAs to get the most realistic and accurate face, but some face-swap results are better than the generated images. Any suggestions?

krupesh

Friend, could you do it with XL? It is very difficult to follow along using SD 1.5; basically I am doing everything just like your video and I get many errors and deformed images.

dragongaiden