Create your own AI animated Avatar for Free!

Ever wished you could animate a static image? Well, now you can, thanks to the Thin Plate Spline Motion Model! Bring your pictures to life with pre-trained models, or train your own custom models to create even more.

Contents:
0:00 Overview
1:45 My Environment
2:15 Anaconda creation
3:20 Run
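As a quick reference for the Run step, here is a sketch of the demo command (the config, checkpoint, and asset paths are taken from the command a commenter quotes below; swap in your own source image and driving video):

```shell
# Thin Plate Spline Motion Model demo run (Linux, GPU 0)
CUDA_VISIBLE_DEVICES=0 python demo.py \
  --config config/vox-256.yaml \
  --checkpoint checkpoints/vox.pth.tar \
  --source_image assets/source.png \
  --driving_video assets/driving.mp4
```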

Links:
Comments

Love the variety of programs you demonstrate. I've really wanted something like this for a while; I tried Lip2Wav and similar tools, but this looks great!

prophetofthesingularity

Hearing your voice from that face broke my brain 😂

AngieDell

Very easy to create, and it was a lot of fun. It can actually run on a CPU if you have patience. My computer is so old it has sand from the Pyramids on it, and it still worked.

blogblocks

Amazing tool and video, thanks dude! 😃

gemini

Thank you for this!!! I appreciate your tutorials so much! There are so many possibilities in this video alone.

daisymaize

I'm looking to do some body animation. From what I understand, I'd need to use the "ted" model to achieve the best results. But does that model essentially reproduce the sample video, or can I swap out a custom reference video? Also, any idea whether I could run it in a virtual environment through a service like Runpod (I'm on a Mac)? Many thanks for your great content!

GraemeHindmarsh

How can you output more than 512x512? If I go higher than that, the video breaks. Any idea?

janholecek

Not sure if you can help with this, but I installed everything and enabled the env in Anaconda. When I run "CUDA_VISIBLE_DEVICES=0 python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/source.png --driving_video assets/driving.mp4" I get: 'CUDA_VISIBLE_DEVICES' is not recognized as an internal or external command, operable program or batch file. Any ideas?

SyntheticVoices
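A note on the error above: `CUDA_VISIBLE_DEVICES=0 python …` is POSIX shell syntax for setting an environment variable inline, which Windows `cmd.exe` does not understand. A sketch of the Windows equivalent (same arguments as in the comment):

```shell
:: Windows cmd: set the variable on its own line, then run the script
set CUDA_VISIBLE_DEVICES=0
python demo.py --config config/vox-256.yaml --checkpoint checkpoints/vox.pth.tar --source_image assets/source.png --driving_video assets/driving.mp4
```

In PowerShell the equivalent first line would be `$env:CUDA_VISIBLE_DEVICES = "0"`.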

Love these videos
Now if you could just show us how to build a time machine next time, I might find the time to try them out :)

robertx

So I wanted to set this up so I could generate a face with Stable Diffusion and plug it into this. Nobody told me how insanely difficult it would be to get two Conda environments to stop interfering with each other 😂 It took me two hours and multiple re-install attempts to get this and SD working together without messing up each other's packages. But I finally did it! What I learned is that it's important to have high-quality, stable footage with good lighting, or the results come out pretty mediocre. And I don't have any good lighting in my house, so 😂

IceMetalPunk
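On the environment-collision point above: the usual remedy is one Conda environment per project, so each keeps its own package set and neither can clobber the other. A minimal sketch (the environment names and Python versions here are illustrative, not from the video):

```shell
# one env per project; packages installed in one never touch the other
conda create -n tpsmm python=3.9 -y
conda create -n sd python=3.10 -y

conda activate tpsmm    # install/run Thin Plate Spline Motion Model here
conda deactivate
conda activate sd       # install/run Stable Diffusion here
```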

Hello, I get an error message stating "Could not find a backend to open `./assets/driving.mp4` with iomode `r?`." Any idea?

omarsharod
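On the backend error above: it usually means imageio has no video plugin installed, and the common fix is `pip install imageio[ffmpeg]` (hedged: this assumes the stock imageio setup the repo's requirements use). A quick way to check:

```python
import importlib.util

# imageio reads .mp4 files through the imageio-ffmpeg plugin;
# without it, opening a video fails with "Could not find a backend"
have_ffmpeg = importlib.util.find_spec("imageio_ffmpeg") is not None
if have_ffmpeg:
    print("imageio-ffmpeg is available")
else:
    print("missing video backend -> pip install imageio[ffmpeg]")
```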

How can you change the resolution and make the output video higher quality?

amirT

Any recommendations on formats and ratio for both source and driver? All I'm seeing is a messy result. Quite horrific.

marcthenarc

If nerdy starts a chorus.
It's gonna be terrifying.
v-roids.

YandereShiki

Thanks for sharing this, nerdy. Would this also work with illustrations? Say I wanted to animate a character?

ashorii

As a fellow Ubuntu 22.04 user, may I ask you what RTX card along with what driver you're using and how it gets along with your CUDA toolkit? Did you experience any problems setting up your system? In my case, first Ubuntu didn't recognize my RTX 3060 at all. I installed the latest drivers and everything got even worse. Then I found out that I shouldn't install the latest drivers (even though they are recommended and "tested"), but an older version instead. Then I tried to install the CUDA toolkit and that messed up everything. I need to have the CUDA toolkit installed to use DreamBooth etc., but I'm afraid to change anything in my system/drivers setup :(

julianmahler

Hello, how do you manage to upscale the video with GFPGAN? It seems made for pictures only. Thanks for the help!

MightyAtom
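On the GFPGAN question above: it is indeed an image restorer, so the usual workaround is frame-by-frame processing with ffmpeg around it. A sketch of that workflow (the inference script path, output folder layout, and 25 fps are assumptions; check your clip's actual frame rate and GFPGAN's README for its exact flags):

```shell
mkdir -p frames restored
# 1) explode the clip into numbered frames
ffmpeg -i result.mp4 frames/%05d.png
# 2) restore every frame with GFPGAN's folder-based inference script
python inference_gfpgan.py -i frames -o restored
# 3) stitch the restored frames back into a video (25 fps assumed)
ffmpeg -framerate 25 -i restored/restored_imgs/%05d.png \
  -c:v libx264 -pix_fmt yuv420p upscaled.mp4
```

Note the original audio track is lost this way; it can be copied back from the source clip with another ffmpeg pass.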

For context, is the driving video of you speaking, or does this software do lip sync like Wav2Lip?

SyntheticVoices

Amazing. Is there any way to customise the size of the video, or will it always be a square format?

wilwester

Ok? Ok. Yes? Yes. Yeah? Yeah! :D <3

srosh