EASY for 2025! Train your own HunYuan Video models with Diffusion-Pipe. No GPU! No Setup!

This is a GAME CHANGER!

NOTE: In the video, I recommend using the ULTRA GPU on MimicPC, and I used the "Instant" choice. However, choosing "Bargain" cuts the price in half. I tried it this morning and had my Lora in 2 hours, and it only cost $2, as opposed to the $9 in my original test that took 6 hours!

HunYuan Video is among the best of the open-source video models. It's high quality, uncensored, and can run on a variety of consumer systems.

Recently we shared the ability to use custom Lora files for even more control over the videos you create. That includes adding people and motions that can be applied to generations.

Until NOW, training these models was a complex process that involved a lot of technical setup, IF you even had a GPU that could support it.

However, MimicPC just added Diffusion Pipe - the software used to train these models - to their library of apps. This means you’re truly just a few clicks away from creating your own custom HunYuan video Loras!

You won’t believe how easy it can be…

I posted a scaled-down and cleaned-up version of the workflow I used locally in the video. I'm sorry I can't help you with the MMAudio installation; I've suffered through it several times, but I'll leave the link to the GitHub as well. Again, this is for you to run LOCALLY, assuming you have the setup and can get through the MMAudio installation.

Workflow: Text to Video with 3 Lora Slots and MMaudio

00:00 Introduction to Open Source AI Video
00:25 Revisiting the Hunyuan Model
00:58 Challenges in Training Hunyuan Loras
01:33 Introducing Mimic PC
02:12 Training Custom Hunyuan Lora Models
02:41 Examples and Results
05:03 Step-by-Step Training Guide
08:31 Testing and Fine-Tuning
12:20 Conclusion and Final Thoughts

👍 LIKE If you found this video valuable. 🙂
🥰 SHARE If you know someone who might enjoy this video.
⏬ DOWNLOAD or ADD This video to your PLAYLIST for easy access later.
💬 COMMENT Your thoughts and questions are welcome!

That's what keeps me going!

🔔 And make sure to hit the NOTIFICATION BELL to stay updated! 🔔

🌎EXPLORE

🗓️ MY HISTORY
🎙️ Voice Over Artist 30+ years
📻 Broadcaster / Actor / Creative

❓ASK ME

📌 FOLLOW ME

🪙SUPPORT
If you want to support me, the best thing to do is to share the content… sharing is caring!
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):

Comments

I was very surprised to see my interface being used. I've never used Mimic, but it's cool that someone added it. I made a Docker image that is compatible with RunPod and Vast.ai, and I believe that helped the integration into Mimic. Anyway, very cool.

AIMasterAA

Nice! RunPod has one as well. Same GUI. I used it to train a LoRA. Worked great.

john_blues

This is amazing. Just wish they'd come out with image-to-video soon.

peacetoall

After this video I made a secondary subscription to the channel. Thanks, Bob, for sharing this great info!

taavetmalkov

"So I just sat back and I waited for technology to change.." Our time has come... and gone. Darn it. What a time to be alive. Work is dead. Long live work.

timothywcrane

Great and informative as usual. Thanks, Bob. Can you make your workflow available outside of Mimic? I use Hunyuan locally and would like to see your upscale process.

ian

Great, yet another thing to try out... 😊

ArcticMindfulnessRetreat-sxnl

CAN YOU DO A VIDEO ON THE SONAUTO UPDATE....

lonetempo

Hi! That's indeed a cool new tool! But is there a way to create a video with the first frame manually specified? I.e. an image to video workflow. In fact, it would be great to have the ability to specify the end frame too. Is that possible?

qwetry-ju

Is that any different from normal Flux Lora training? We can do Flux Lora training on things like Replicate.

os

Do the models work well with FLUX? Can I also use it in SDXL?

LucasDominguezzz

Excuse me, do you think we will ever be able to train Loras for Hunyuan on Windows without needing Linux or any rented GPUs?

ezbaisalgado

OMG you fixed the white balance. Thank you :P ;)

MeMyselfandAlice-pq

Wow, surprised that you didn't need any sort of captioning with the images!

jjameson

I'd like to download the model and run it in my local FLUX, is that possible? Thanks!

LucasDominguezzz

ComfyUI needs a node to speed up the animations to something like 1.25x ASAP; these slow-motion-ish videos make it very obvious they were made by AI...

kleber

I just wish this was open source. I don't pay for stuff to play with unless it's something useful.

removeme

So what about Mac?
PC only, I suppose?

timesm

I think it is better to train on the bf16 model instead of fp8.

PyruxNetworks

Did Tracy beat you up for putting a beard on her?

BizarreReality