Guide to making Stable Diffusion Img2Img Videos

Want to make videos using the Img2Img function of stable diffusion? Well, here is a quick guide! Just split your video into frames and use batch processing to create stylised videos :)

Contents:
0:00 Overview
0:54 Environment
1:28 FFMPEG info
2:27 Inference
11:30 Frames to video
11:40 Example videos
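
For reference, a minimal sketch of the frame-splitting and frames-to-video steps from the FFMPEG and "Frames to video" chapters, written as Python calls to ffmpeg. The file names (input.mp4, frames/, stylised/, output.mp4) and the 24 fps rate are assumptions; the img2img batch processing itself happens in the webui between the two steps.

    import os
    import subprocess

    FPS = 24  # assumed frame rate; match it to your source clip

    # 1) Split the source video into individual frames for img2img batch processing.
    os.makedirs("frames", exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4", "-vf", f"fps={FPS}", "frames/frame_%05d.png"],
        check=True,
    )

    # ... stylise the frames with the img2img Batch tab, writing results to "stylised/" ...

    # 2) Reassemble the stylised frames into a video at the same frame rate.
    subprocess.run(
        ["ffmpeg", "-framerate", str(FPS), "-i", "stylised/frame_%05d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", "output.mp4"],
        check=True,
    )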

Links:
Make AI Art Move! Thin Plate Spline Motion Model - FREE!

Original Videos I used:
Comments:

Legend! Nice to be the first comment for a change lol. Thank you for everything you do for us.

MUZIXHAVR

There are several people working on temporal coherence, I'm pretty sure we'll have a version that considers previous output frames in the not too distant future.

JamesPound
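
One crude way to approximate the idea above today, shown purely as an illustration and not as the method from the video, is to bias each new img2img init image towards the previous stylised output. The folder names and the 0.3 blend weight below are assumptions.

    import os

    from PIL import Image

    # Illustration of "considering previous output frames": blend the current
    # input frame with the previous stylised output before sending it to img2img.
    os.makedirs("init", exist_ok=True)
    current_input = Image.open("frames/frame_00002.png").convert("RGB")
    previous_output = Image.open("stylised/frame_00001.png").convert("RGB")

    # 70% current input frame, 30% previous stylised output.
    init_image = Image.blend(current_input, previous_output, alpha=0.3)
    init_image.save("init/frame_00002.png")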

Can't help feeling that it would be easier to just use EbSynth - it would certainly give a more fluid look to the result. The method you've explained (very well btw!) would be perfect for creating consistent keyframes. I guess it depends on the result you're trying to achieve.
BTW, what did you use for the PIP? It's very, very good!
Thanks for a great tutorial.

GraemeHindmarsh

Very nice. Reminds me a lot of rotoscoping effects in the movie "A Scanner Darkly".

MarcRadermacher

That was almost as fun as a bathtub of baby hippos.
Okay, maybe not. Baby hippos are pretty gosh darn fun, but it was still really entertaining to watch and listen to.
Thanx

DoorknobHead

You're a legend. Thank you so much :)

TheDragonUpstairs

I think you might be the first YouTuber that made a video showing that the process and results are NOT entirely seamless - for that I thank you!

machinegunyelly

Your videos have helped me so, so much, dude, thank you. How have you made your thin plate spline avatar so incredibly smooth? I'm getting quite a bit of warping when changing head position.

musicbydavidd

You’re my favorite rodent of the nerdy kind. Gotta get my coffee. And have another video to catch up on from you. Maybe some of your tech knowledge will take in my brain, we all know I could use it lol. Here’s to bumping into you later ☕️

kariannecrysler

This was wonderful. I learn so much from your videos. I really appreciate all your time and effort.

WonderLady

I reckon the next stage is to implement frame-to-frame interpolation trained on averaging out the changes, to create more realistic predictions of movement and less variation.

sharkeys
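
A very rough sketch of the averaging idea above: temporally smoothing each stylised frame with its immediate neighbours. This is plain pixel averaging rather than a trained interpolation model, and the folder names and frame count are assumptions.

    import os

    import numpy as np
    from PIL import Image

    NUM_FRAMES = 100  # assumed number of stylised frames

    os.makedirs("smoothed", exist_ok=True)
    frames = [
        np.asarray(Image.open(f"stylised/frame_{i:05d}.png").convert("RGB"), dtype=np.float32)
        for i in range(1, NUM_FRAMES + 1)
    ]

    # Weighted average of each frame with its neighbours to damp flicker.
    for i in range(1, NUM_FRAMES - 1):
        smoothed = (frames[i - 1] + 2 * frames[i] + frames[i + 1]) / 4.0
        Image.fromarray(smoothed.astype(np.uint8)).save(f"smoothed/frame_{i + 1:05d}.png")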

Thanks for the great guides and demonstrations! I too would like to ask how you've created your PIP? Perhaps another tutorial for you to make?

dvxc

I like running the frames through an app called Flowframes, which has an interpolation AI draw the missing frames to make transitions very smooth. Another trick I use is to make the initial video at a much lower frame rate and then interpolate.

zloboslav_
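
A sketch of the lower-frame-rate trick above, in the same Python-plus-ffmpeg style as the earlier snippet. It extracts half as many frames to stylise, then interpolates the finished video back up to full rate; the commenter uses Flowframes for that last step, and ffmpeg's minterpolate filter is shown here only as a simpler stand-in. File names and rates are assumptions.

    import os
    import subprocess

    # Extract frames at a reduced rate (12 fps instead of 24) so there are fewer
    # frames to stylise and less frame-to-frame variation.
    os.makedirs("frames", exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4", "-vf", "fps=12", "frames/frame_%05d.png"],
        check=True,
    )

    # After stylising and reassembling at 12 fps, interpolate back up to 24 fps.
    # (Flowframes does this with an AI model; minterpolate is a simpler ffmpeg filter.)
    subprocess.run(
        ["ffmpeg", "-i", "stylised_12fps.mp4", "-vf", "minterpolate=fps=24", "output_24fps.mp4"],
        check=True,
    )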

Hi, nice video! By the way, how do you create the talking person in the bottom right corner?

eranfeit

How do you use the art as a camera deepfake? Please teach us, or tell me here what software it is.

gabrielverasr

You just taught me that maximalism is a thing, and that my aunt is a maximalist, which would be evident if you saw her house.

HalkerVeil

Thank you, great tip “anime style”… brilliant

banzai

Thank you for the informative video! Do you know anything about Stable Warpfusion? Is it another AI or a version of SD, or is it a model, embedding, or prompt?

fillill-

Very cool! Thanks for sharing. By the way, how do you do this avatar animation (in the lower right corner)?

cnawak

In my experience, it is also helpful to include target expressions in the original negative prompt. For example, you could include "anime art style" in the original negative prompt.
In addition, in the Features documentation of the AUTOMATIC1111 stable-diffusion-webui you can find ( ) and [ ] as "Attention" marks for expressions that you really need, or barely need, to see in the resulting image.

christianholl
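
A small illustration of the attention marks mentioned above: in the AUTOMATIC1111 webui, (word) raises the weight of a term, [word] lowers it, and (word:1.3) sets an explicit weight. The concrete prompt wording here is made up.

    # Hypothetical img2img prompts showing AUTOMATIC1111 attention syntax.
    prompt = "portrait of a man, (anime art style:1.3), (clean lineart), flat colours"
    negative_prompt = "photograph, photorealistic, [detailed background], lowres"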