AI Animation in Stable Diffusion

The method I use to get consistent animated characters with Stable Diffusion. BYO video and it's good to go!

For the companion PDF with all the links and the ComfyUI workflow.

Add LCM to Automatic 1111
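
If you'd rather not patch the .py files, a rough equivalent of the LCM speed-up can be done with the diffusers library instead of Automatic1111. A minimal sketch, assuming the SD 1.5 base model and the public LCM LoRA weights (not the exact setup from the video):

# Swap in the LCM scheduler and LoRA, then sample in ~4 steps at low CFG.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "a cartoon character, flat colors",
    num_inference_steps=4,  # LCM needs only a handful of steps
    guidance_scale=1.0,     # LCM behaves best with CFG near 1
).images[0]
image.save("lcm_test.png")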

You're awesome! Thanks for hanging out with me!
Comments

I cannot find the model for the ControlNet, why is that?

rilentles

Can you show us how to do this in ComfyUI? I decided to learn that instead of A1111 since it seems faster and more flexible. But I'm still a noob at it.

binyaminbass

It's amazing.
How feasible is it to use SDXL together with a trained model over cheap green-screen footage?

I want to create a cartoon style video with absolutely the minimum time effort and money.
You know that dream we all have...😂

🎉 amazing work!

ekke

For the first time in six decades, we see exactly what we want to achieve in 3D cartoon animation. We are watching closely and learning. Thank you for sharing.

UtopiaTimes

I tried the tutorial, but my result just wasn't as consistent as yours. Lots of flicker.

themightyflog

You didn't tell us where to get Eternal Dark.

omegablast

LMS test to LMS Karras? Best tutorial... no pointless talking.

kanall

I get an error when trying to change those .py files. There might also be an error in the instructions ("Add this code at line 5; it should take up lines 5 to 18 once pasted."): when I paste the code I get more lines, 5 to 19.

AI_mazing

Thanks for the work 🤩🤩🤩. I have a doubt: in ControlNet you use diff_control_sd15_temporalnet_fp16.safetensors, but when you click the ControlNet model link in your PDF it downloads a different file. My question is, which model should I use?

jonjoni

So basically, SD still does not do animation; you used other apps to render the video animation. If I followed correctly, that was Blender. And if I understand, you are just mapping a texture over an animated character you created and rendered outside of SD. So SD itself still doesn't do the animation.

stableArtAI
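
That division of labor is the point: the motion is rendered elsewhere (e.g. in Blender), and Stable Diffusion restyles it frame by frame in batch img2img. The first step is splitting the clip into frames; a minimal sketch, assuming ffmpeg is on your PATH and the clip is named render.mp4 (both placeholders, not values from the video):

# Split a rendered clip into numbered PNG frames for batch img2img.
# fps=12 is an arbitrary example rate.
import subprocess
from pathlib import Path

Path("frames").mkdir(exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", "render.mp4", "-vf", "fps=12", "frames/%05d.png"],
    check=True,
)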

You mentioned that you would share the model, since it's not on Civitai.
I can't find any link.

lithium

I cannot get these kinds of results at all on mine, even though I use the exact same settings and LoRA. It just makes it look like my face has a weird filter on it. It won't make my guy cartoony at all.

MisterWealth

Where's the ethernaldarkmix_goldenmax model?

hurricanepirate

I don't see where and how to do the LCM install. I think you left a few things out.

themightyflog

So do you have to have DaVinci to do this, or what? It's not really clear from the vid.

evokeaiemotion

Does anyone else not have a clue what he's using or where he got it from?

Dalin_B

How come, when I start batch processing after getting a single image right, the output looks completely different? I'm using all the same settings and the same seed, just adding the input and output directories, and I'm getting a completely different-looking result. (It's consistently different, too: the single image is always in a blue room and the batch ones are always in a forest for some reason.)

OmriDaxia
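
One possible culprit, as a guess rather than anything from the video: in batch mode the seed or prompt can silently change between frames. For comparison, here is a sketch of a batch loop that pins the identical seed on every frame, using the diffusers img2img pipeline rather than A1111 (paths, prompt, and seed are placeholders):

# Re-seed per frame so each frame starts from the same noise pattern,
# matching what a fixed seed gives you on a single test image.
import torch
from pathlib import Path
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

Path("out").mkdir(exist_ok=True)
for frame in sorted(Path("frames").glob("*.png")):
    generator = torch.Generator("cuda").manual_seed(1234)
    init = Image.open(frame).convert("RGB").resize((512, 512))
    result = pipe(
        "cartoon character, flat colors",
        image=init,
        strength=0.5,
        generator=generator,
    ).images[0]
    result.save(Path("out") / frame.name)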

I don't see a "join an email list" option anywhere on your website.

Sinistar

Reminds me of the movie A Scanner Darkly, which used interpolated rotoscoping.

Skitskl

Thanks for the tutorial. I'd never used those ControlNet units before; I'd been trying with Canny and OpenPose. This has been very useful. Any idea how we can deflicker the animation without DaVinci? Either something free or cheap. Thanks in advance.

vegacosphoto
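
One free way to take the edge off the flicker, as a suggestion rather than the video's method: temporal averaging with ffmpeg's tmix filter, which blends each frame with its neighbours at the cost of some ghosting. A minimal sketch, assuming ffmpeg is installed and the input is named flicker.mp4:

# Average a 3-frame window; the weights favour the centre frame.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "flicker.mp4",
        "-vf", "tmix=frames=3:weights=1 2 1",
        "smoothed.mp4",
    ],
    check=True,
)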