Genmo AI Mochi 1 - The Best Open Source DiT Video Model By Far

In this video, we check out the groundbreaking Genmo AI Mochi 1, the latest open-source video generation model revolutionizing the industry. Built on a 10-billion-parameter diffusion transformer (DiT) with a focus on high-fidelity motion and exceptional prompt adherence, Mochi 1 sets a new standard for open video generation. From fluid motion dynamics to impressive prompt fidelity, we explore the capabilities of Mochi 1 and how it is reshaping visual realism in AI-generated video.

Experience the future of video generation with Mochi 1 as we showcase the motion quality, prompt adherence, and visual realism the model offers. Discover how Genmo AI's commitment to open-source development drives innovation and accessibility in the AI community, empowering users to experiment and create with this powerful tool. From hardware requirements to creative applications, we provide a comprehensive overview of Mochi 1 and its potential impact on AI-generated content. Don't miss the chance to explore what Mochi 1 makes possible and unleash your creativity with AI technology.

Genmo AI - Mochi 1

If you like tutorials like this, you can support our work on Patreon:

#aivideogenerator #GenmoAI #Mochi1 #DiT
Comments

Mind-blowing. The future is so bright with these open-source models. Awesome vid.

insurancecasino

Pro tip: If you have a Microsoft Azure account, you can provision a Windows VM with four of these GPUs and only pay for the time they’re used.

JoeBurnett
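
The tip above, sketched as a small Python wrapper around the Azure CLI. Everything here is an assumption rather than anything shown in the video: the resource names, region, image, and especially the VM size are placeholders (check `az vm list-sizes` for the H100 SKUs and quota actually available to you).

```python
# Minimal sketch: create a GPU VM, run the job, then deallocate so you
# stop paying for compute. Assumes the Azure CLI is installed and you
# are already logged in via `az login`.
import subprocess

RG, NAME = "mochi-rg", "mochi-vm"  # hypothetical names

def az(*args: str) -> None:
    subprocess.run(["az", *args], check=True)

az("group", "create", "--name", RG, "--location", "eastus")
az("vm", "create",
   "--resource-group", RG,
   "--name", NAME,
   "--image", "Win2022Datacenter",
   "--size", "Standard_ND96isr_H100_v5",  # placeholder SKU; verify availability
   "--admin-username", "azureuser",
   "--admin-password", "<your-strong-password>")

# ... run the workload on the VM, then release the hardware:
az("vm", "deallocate", "--resource-group", RG, "--name", NAME)
```

Deallocating (rather than just stopping) is what releases the underlying hardware, so you are only billed for storage until the next start.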

It's a shame I can't run this, but I'm happy to see a large open model like this that I could use if I were willing to shell out $120k on GPUs. Hopefully we'll get a quantized version that runs on a 4090.

dishcleaner
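
To put numbers on why quantization is the thing to hope for here, a quick back-of-the-envelope on weight memory alone (my arithmetic, not from the video):

```python
# Approximate weight memory for a 10B-parameter model at different
# precisions. Activations, the VAE, and the text encoder all add more
# on top, so treat these as lower bounds.
PARAMS = 10e9
BYTES_PER_PARAM = {"fp32": 4, "fp16/bf16": 2, "int8": 1, "int4": 0.5}

for dtype, nbytes in BYTES_PER_PARAM.items():
    print(f"{dtype:>9}: ~{PARAMS * nbytes / 2**30:5.1f} GiB of weights")

# fp32: ~37.3, fp16/bf16: ~18.6, int8: ~9.3, int4: ~4.7 GiB
```

At 8-bit the weights alone drop to roughly 9 GiB, which is why a quantized release is plausibly within reach of a 24 GB RTX 4090.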

High frame rates and high resolution are the wrong optimization target, imo. You can achieve both in post-production with upscalers. To be useful on affordable GPU clusters like tinybox, it's better to optimize for low resolution but coherent, realistic motion.

mircorichter

If they can manage to quantize it so it runs on a single 4090 at a resolution around 720x480, or even better 1280x720, the output can always be upscaled in a program like Topaz Video AI or with another node within ComfyUI.

This looks like it could be one promising local video model, unless Black Forest Labs (maker of Flux) releases a SOTA model of their own and/or CogVideoX gets a much-needed quality improvement.

jihwoanahn
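
The "generate low-res, upscale afterwards" workflow the comment describes, as a minimal sketch. Plain Lanczos resampling with Pillow stands in for a real AI upscaler (Topaz Video AI, or an upscale node in ComfyUI), and the `frames/` directory of PNGs is an assumption:

```python
# Upscale each generated frame 2x, e.g. 720x480 -> 1440x960.
from pathlib import Path
from PIL import Image

SRC, DST = Path("frames"), Path("frames_upscaled")
DST.mkdir(exist_ok=True)

for frame in sorted(SRC.glob("*.png")):
    img = Image.open(frame)
    w, h = img.size
    img.resize((w * 2, h * 2), Image.LANCZOS).save(DST / frame.name)
```

An AI upscaler would hallucinate plausible detail rather than just interpolate pixels, but the resolution math is the same.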

When I look at Genmo's website I see a lot of quite good user videos, but often the prompt has literally nothing to do with the video. There will be a prompt like "a butterfly on a leaf in a lush green environment" while the video shows a car race in a city. Not convincing.

Michael_Moon

What GPU would the quantized models require, if you had to take a guess?

Guus

Can it do any accurate DNA- or protein-related video...?

JasonCummer

Can you do a video on running this model on a hosted platform like RunPod that meets the GPU requirements? I think that would be useful for people who want to try it out.

The_Python_Turtle

Is it even worth talking about if it takes 4x H100s to run? Open source or not, only companies can use this.

xbon

A company extracting money from an AI service? Good luck 😂

ApexArtistX