ControlNET to Video - Stable Diffusion Automatic 1111 Tutorial

Create videos with ControlNET. Use Automatic 1111 to create stunning videos with ease. This easy tutorial shows you all the settings needed. Use ControlNET to transfer the pose to any video frame and create consistent videos.

#### Links from the Video ####

Support my Channel:

Comments

Wow. Props to whosoever was among the first to figure all this out. And kudos to you Olivio for making it comprehensible and for the step-by-step tutorial. I'm not ready for this technique yet, but will re-refer myself back to this video when I am! I seem to find myself repeatedly thanking you for such great videos...and this one is no exception. THANKS!

frankschannel

here is the text line forgotten in the description:

initial_noise_multiplier, img2img_color_correction
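(For anyone wondering where these identifiers go: they belong in Automatic1111's Quicksettings list, under Settings > User interface, which the web UI persists in its config.json. A minimal sketch of that entry, assuming the older comma-separated "quicksettings" key; newer builds store a "quicksettings_list" array instead:)

```json
{
  "quicksettings": "sd_model_checkpoint, initial_noise_multiplier, img2img_color_correction"
}
```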

FulguroGeek

The consistency between each frame is incredible! Wow, I'd love to see more of this!

joannot

Amazing, I'm really looking forward to the end of my work day so I can try this!
"Reverse slash" in English is usually called backslash… and the other one can sometimes be called forward slash if there is danger of ambiguity.
Thanks for the details!

j.j.maverick

The information density is so high I have to pause to take notes. Great job and a detailed explanation of each step.

likechen

We have almost mastered creating realistic images, but video is another game. Happy to see the improvement happening in video animation consistency, though. Hopefully we will soon get flawless frame interpolation.

dreamzdziner

I think you may have set your resize settings slightly wrong here. If you're using "Just resize", the dimensions you set should be very close to the same aspect ratio as the input image; I find it's best just to set "Crop and resize" and lose a few pixels from the edges. There are artistic reasons why you might want mismatched ratios, but if you're using low denoising and ControlNet, as well as a big mismatch, it's probably not going to rework the original image enough to get away from the squished look that came through in the final video.

Sometimes it's a bit annoying to try and translate aspect ratios from input images to the width and height sliders. There's an extension I'd recommend that's pretty useful for that, though: "sd-webui-ar". It has common ratios like 3:2 and 4:3, and you can add your own pretty easily too.
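(Since that ratio math comes up for every clip, here's a tiny Python sketch of it. The function name and the default 768 long side are my own assumptions; the multiple-of-8 snapping matches the step size of the width and height sliders:)

```python
def sd_dims(src_w: int, src_h: int, target_long: int = 768, multiple: int = 8):
    """Scale an input frame to slider values, keeping the aspect ratio
    and snapping both sides to a multiple of 8."""
    scale = target_long / max(src_w, src_h)
    w = round(src_w * scale / multiple) * multiple
    h = round(src_h * scale / multiple) * multiple
    return w, h

# A 1920x1080 frame keeps its 16:9 ratio:
print(sd_dims(1920, 1080))  # (768, 432)
```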

Great video though. I was wondering if this was possible already or whether I would have to wait for an update. I'm really interested in whether it's possible to combine ControlNet with the Ultimate SD Upscale extension; it possibly wouldn't work with many of the models, but Canny and HED should work.

JPLAVFX

The Reverse Flash joke really got me, well done sir

miklanglo

"Great tutorial! I've been wanting to try out ControlNet with automatic 1111, and your video explained it perfectly. I can't wait to give it a shot!"

creatorsmafia

It's awesome that this can already be done with these few manual steps, and it will become incredibly easy with a dedicated UI.
Impressive results without specific optimizations to improve the coherency over time. Super promising. Thanks for the convincing demo 👍👏

supercurioTube

Once again, you are a legend and one of the most valuable people out there making AI image related educational content!

Hats off to your work sir

NeuralNimbus

Great video! I'm surprised you achieved such great results with that CFG value. Try it with 20-25

ixiTimmyixi

Making a movie that looks like A Scanner Darkly would be trivial now, whereas it probably took them a lot of work back then.

westingtyler

sd_model_checkpoint, initial_noise_multiplier, img2img_color_correction
You need to completely restart SD afterwards; just refreshing the UI does nothing.

THENEONGRID

Olivio, thanks again for teaching us about this new feature. When I followed your instructions, I had an error: controlnet OSError: cannot write mode RGBA as JPEG. The solution was to convert the images to PNG. Some viewers may have had the same issue, so I wanted to share this solution!
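(To apply that fix across a whole frame folder, here's a minimal Pillow sketch; the function name and folder layout are my own assumptions, not from the video. It re-saves frames as PNG, which supports the RGBA mode that the JPEG writer rejects:)

```python
from pathlib import Path

from PIL import Image


def frames_to_png(src_dir: str, dst_dir: str) -> None:
    """Re-save all frames as PNG so RGBA images don't trip the JPEG writer."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(src_dir).iterdir()):
        if frame.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp", ".bmp"}:
            Image.open(frame).save(out / (frame.stem + ".png"))
```

If you'd rather keep JPEG output, `Image.open(frame).convert("RGB")` instead drops the alpha channel before saving.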

ysy

Hey Olivio, thanks for the tutorial (and the whole channel - you are amazing). Have I understood correctly that you are basically using ControlNET only to get the initial morph, and from that point on you only use the seed and prompt? So basically ControlNET doesn't help consistency from that point?

VovaIvanov-ywkf

Amazing ... thanks for sharing how to do this. It is getting harder and harder to keep up with the fast pace of developments ... and I love it!

bigal

excellent tutorial, exactly what I spent all of yesterday looking for :)

kaliyuga-AI

Thanks A LOT Olivio! I was stuck on the "Do not append detect map to output" option and got errors before! You made my day!
Can I also suggest using the depth map aware script in combination with ControlNet, to better isolate the subject from the background? ;)

samueleroncaglia

Huh, fascinating. Not sure I understand it all, but that's OK for now. One thing I did notice was the neck seam on the output - the rest of the blend was pretty damned good, but the seam was super visible when she moved.

DrakeBarrow