Creating an uber-realistic video animation from an avatar with Stable Diffusion

This tutorial will guide you through the process of creating an avatar with ReadyPlayerMe, animating it in Mixamo, building a 3d scene around it in Blender, and feeding this scene into the Stable Diffusion Automatic1111 web UI to create a video animation, using an uber-realistic custom model with the Deforum and ControlNet extensions.
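
In the video everything is driven through the Automatic1111 web UI (the Deforum and ControlNet tabs), but the same per-frame idea can also be scripted against the web UI's API. The sketch below is only an illustration: it assumes a local Automatic1111 started with the --api flag and the ControlNet extension installed; the frame path, prompt, model name and the exact keys inside the ControlNet block are placeholders that may differ between versions.

```python
import base64
import requests

API = "http://127.0.0.1:7860"  # default local Automatic1111 address (started with --api)

# One frame rendered out of Blender (placeholder path)
with open("frames/frame_0001.png", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "uber-realistic portrait, photorealistic, detailed skin",
    "negative_prompt": "cartoon, blurry, deformed",
    "init_images": [frame_b64],
    "denoising_strength": 0.45,   # lower = closer to the Blender render
    "steps": 25,
    # The ControlNet extension hooks in via "alwayson_scripts"; the argument
    # names below are assumptions and can differ between extension versions.
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": frame_b64,
                "module": "openpose",                   # preprocessor
                "model": "control_v11p_sd15_openpose",  # placeholder model name
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post(f"{API}/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("out/frame_0001.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```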

To watch some more great music videos created with Stable Diffusion, Unreal Engine and Blender, visit our YouTube music channel:

----------------------------

Chapters:
00:00 Intro
00:20 Creating an avatar with ReadyPlayerMe
01:45 Preparing the avatar for upload to Mixamo
03:00 Animating the avatar in Mixamo and importing it into Blender
05:41 Importing a 3d background scene from Sketchfab into Blender and rendering the scene (a small render script is sketched below)
10:19 Preparing Automatic1111 for a hyper-realistic render
12:13 Using the Deforum and ControlNet extensions to render the animation
14:47 Creating the final video
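
For the Blender step at 05:41, an image sequence rendered from Blender is a convenient hand-off format for the Stable Diffusion step. A minimal sketch of that render step, run from Blender's Python console, assuming a PNG sequence and example path and frame range:

```python
import bpy

scene = bpy.context.scene

# Render the animation as a numbered PNG sequence (example path and range)
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//frames/frame_"   # relative to the .blend file
scene.frame_start = 1
scene.frame_end = 250

bpy.ops.render.render(animation=True)
```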

Link to the ReadyPlayerMe website for creating free 3d avatars:

Link to the Mixamo website:

Link to Sketchfab for downloading free 3d models:

Downloading custom models for Stable Diffusion:

Deforum extension - download and info:

ControlNet extension - download and info:

Downloading models for the ControlNet extension:

Download Blender:

-----------------------------------------

Local installation guide for Automatic1111 on a Windows PC:
and on a Mac with Apple Silicon:

Some great YouTube channels covering Stable Diffusion:

#stablediffusion #automatic1111 #deforum #controlnet #blender #mixamo #tutorial #readyplayerme
Comments

I think that showing the outcome at the beginning of the video and only then starting to explain how it's done can be a kind of "hook" for viewers. Keep it up!

STVinMotion

What in the Jennifer Lawrence is going on here?

darkgenix

Looks like the best way to prepare this is to render the character against a green background and render the background separately. Then just key it out as usual.

FrankJonen
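
The green-screen compositing suggested above could be done after the Stable Diffusion pass, for example by keying with FFmpeg (called from Python here; the filenames and key tolerances are placeholders):

```python
import subprocess

# Key the green-screen character render over the separately rendered background.
# chromakey = color : similarity : blend - the 0.10 / 0.08 values are rough guesses.
subprocess.run([
    "ffmpeg",
    "-i", "background.mp4",          # background render (placeholder name)
    "-i", "avatar_greenscreen.mp4",  # character render on green (placeholder name)
    "-filter_complex",
    "[1:v]chromakey=0x00FF00:0.10:0.08[fg];[0:v][fg]overlay=shortest=1[out]",
    "-map", "[out]", "-c:v", "libx264", "-crf", "18",
    "composited.mp4",
], check=True)
```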

Thank you so much for the tutorial and for giving sources as well.

ireincarnatedasacontentcreator

Extremely informative and very well done. Thank you.

richardglady

Super video and excellent teaching. It even looks easy to do. I am going to try.🤩

EdnaKerr

I've been to and sat in that cafe in France (Saint-Guilhem-le-Désert), one of the most beautiful places in the world!

jadonx

The true future will come once Stable Diffusion is able to output consistent results for both the background and the characters.

AZSDXC

Thanks a lot for this awesome tutorial and all your work.
Wouldn't the EbSynth program be a good step in the process to make the video look more consistent?

aribjarnason

This is incredible, but also insanely long and complicated. I hope someday they can make an AI that can generate a video like this all in one app, something like a ModelScope text-to-video synthesis 2, Stable WarpFusion, or Runway Gen-1.

Also, you made this very long and complex. I think you could just use a video game like Blade & Soul to create the avatar, the dance and the background scene, and just record a video of the game. Then input that video into Stable Diffusion. Using a video game could have saved you a hundred of those steps.

PRepublicOfChina

Can you make another version where you show how to create a custom person model, for example from a Midjourney image?

alexi_space

BIG THANKS for this great, helpful, complex, motivating and inspiring video!

coloryvr

You could export the depth map in Blender to get an even better output.

pieterkirkham
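
For the depth-map idea above, Blender can write a depth image per frame through the compositor, which ControlNet's depth model can then use directly instead of a preprocessed estimate. A rough sketch, assuming a recent Blender version (node and socket names can differ between releases) and a placeholder output folder:

```python
import bpy

scene = bpy.context.scene
bpy.context.view_layer.use_pass_z = True     # enable the Z/depth pass

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')      # provides the "Depth" output
norm = tree.nodes.new('CompositorNodeNormalize')  # squash raw depth into 0..1
out = tree.nodes.new('CompositorNodeOutputFile')
out.base_path = "//depth/"                        # placeholder output folder
out.file_slots[0].path = "depth_"

tree.links.new(rl.outputs['Depth'], norm.inputs[0])
tree.links.new(norm.outputs[0], out.inputs[0])

bpy.ops.render.render(animation=True)             # one depth image per frame
```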

Great intro to that topic. But did you also manage to get a good video result, where the costume stays the same, as well as the background? If so, is there any chance for a follow-up video?

RobertWildling

I think it won't be too much longer before text-to-video.

BRAVENm

Awesome 👍 Please tell me how to create facial expressions in my model.

santhoshmg

Dude, that was so helpful, but I have a question: I'm starting to learn Blender and I want to see whether my PC is suitable for it or not. The CPU is a Core i5-10400F and the GPU is a 6900 XT with 16 GB. Is that good for making animations, or should I upgrade?

tobygilbert-slew

Nice, but I figure a 3d scan could work better. And how does face dance make their stuff?

omnigeddon

Results are a little bit meh, but I like the idea. Thanks.

thetest

Is it possible to remove the defective frames?

maertscisum