OmniGen New Diffusion Model - You Might Not Need Photoshop Anymore

OmniGen - A Diffusion Model That Edits Images Like Photoshop, but Needs Only a Text Prompt

Today, I'm thrilled to bring you an exclusive look at OmniGen, the revolutionary new diffusion model that's changing the game in AI image generation! We're diving deep into this groundbreaking technology, which combines Microsoft's Phi-3 large language model with the SDXL VAE to create something truly extraordinary. Unlike traditional diffusion setups that rely on extra components like ControlNet or IPAdapter, OmniGen offers an all-in-one solution that's blowing minds in the AI community!

In this comprehensive tutorial, I'm showing you everything from installation to practical applications. We'll explore how OmniGen can handle up to three reference images simultaneously, perform selective character editing, and execute complex image transformations with just simple text prompts. Whether you're interested in virtual try-ons, character consistency for AI movies, or advanced image editing, this video covers it all! Plus, I'm sharing my fixed version of the ComfyUI custom nodes that solves the common issues many users are facing.
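
If you would rather script OmniGen than run it through ComfyUI, the sketch below shows roughly what the standalone Python pipeline looks like. It is based on my reading of the official OmniGen repository, not on this video: the model ID "Shitao/OmniGen-v1", the parameter names, and the reference image path are assumptions to double-check against the current README.

from OmniGen import OmniGenPipeline

# Download and load the published OmniGen weights (roughly 16 GB on disk).
pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")

# Plain text-to-image, no reference image.
images = pipe(
    prompt="A curly-haired man in a red shirt is drinking tea.",
    height=1024,
    width=1024,
    guidance_scale=2.5,
    seed=0,
)
images[0].save("text_to_image.png")

# Editing with a reference image: the prompt points at the input through
# the <img><|image_1|></img> placeholder, one placeholder per entry in
# input_images. "./reference.jpg" is a hypothetical path.
images = pipe(
    prompt="The woman in <img><|image_1|></img> is now wearing a red coat on a snowy street.",
    input_images=["./reference.jpg"],
    height=1024,
    width=1024,
    guidance_scale=2.5,
    img_guidance_scale=1.6,
    seed=0,
)
images[0].save("edited.png")

The same placeholder convention extends to <|image_2|> and <|image_3|> for additional references, which appears to be the mechanism behind the multi-character and virtual try-on examples shown in the video.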

Watch as we test various scenarios - from changing outfits and backgrounds to combining multiple characters in new scenes. I'll demonstrate how this tool outperforms traditional methods in virtual try-on applications and show you real examples of successful transformations. By the end of this video, you'll have a complete understanding of OmniGen's capabilities and how to implement them in your own projects. Don't miss out on this game-changing technology that's pushing the boundaries of AI image generation!

OmniGen

OmniGen-ComfyUI - text prompt fix version, forked by me

If you like tutorials like this, you can support our work on Patreon:

Comments

I like how it can remove an object without needing segmentation and masking. ❤ Great vid.

kalakala

Which is better, the WebUI version or the ComfyUI version? Specifically for generation time on lower-VRAM computers.

kfxrich

According to my tests, this model is crap. The idea is great, but the model itself did not generate even one usable image. I already spent several hours getting the thing going and generating tens of images. Smudges, inconsistent physics, crappy hands... I even got some gibberish text on top of one of the images... I tested with and without input images, and it's the same... There was one image I was about to say I was happy with, and then I noticed that it put too many buttons on the woman's coat, even in places where there shouldn't be buttons.

Deadlious

I can't get OmniGen to work. It keeps getting stuck on processing. I have a very fast computer. I'm just doing text-to-image without a reference and it still won't work. Any help? I also randomly just get "error" on the processing screen after a while.

inside_fighting

I tried the standalone version and it sucks arse. Tried different prompts and it could not do a basic face swap.

BoomBillion

Hi, where do I put the model, and why doesn't it show up in the nodes? Please tell me.

yklandares

My OmniGen node has a latent, what should I do?

Meowjesticc

I noticed more frame-by-frame stuff on Civitai. Sadly it's mostly adult stuff, but I hope the process catches on, because all low-end PC or laptop owners will be able to make high-quality vids as long as they can generate images based on video frames. More time and work, but that's the trade-off. LOL. Awesome vid by the way.

insurancecasino

I'm following your video exactly and still can't use it, because the node is still red with Import Failed. I've asked Claude and others to try and figure this out, but it's still not working.

CharlieLee-lw

01:56 Unbelievable how consistent a style it can create.

kalakala

Can this be installed on Stable Diffusion WebUI?

hicks

Hi, it seems very interesting. I would like to try it with ComfyUI, but I don't understand how to load the model and VAE. Can you update the readme, please?

vincentgautier

I'm not sure why, but today is the second day I've used the Hugging Face space for OmniGen and got "error" as the output.
I still haven't been able to make it work locally, but I'm still figuring it out. Hopefully it's a me problem, and the error in their Hugging Face space is not related.
Glad to see you talking about it! New sub

milycortes

Got it working on AMD (Linux). This is such a powerful yet simple node.

kkryptokayden

I don't know why, but my ComfyUI gets stuck and it won't run.

Elaina-nnjd

How do I run the .txt in Colab, or how do I open a command line in Colab?

Aibyda

I wish it could. I've tried it a lot now, and no result is as good as what I did with the same picture in Photoshop. Even simple stuff, like removing the sky, leads to a strange new image. Only the examples they have in the list work well.

berhunt

OmniGen offers a convenient way to access a series of features that are typically available separately or through different ComfyUI workflows. It combines tools that function like Insightface, ControlNet, inpainting, etc., into a single package, which is helpful for users who prefer not to wrestle with the complexities of ComfyUI noodles 😄

However, this convenience comes at a cost. The model's ~16 GB of weights (partially loaded into VRAM) make it either impractical or extremely slow on common GPUs with 8 GB of VRAM. Additionally, to be completely honest, the functionality provided by OmniGen can be replicated using separate models outside of the package. There's almost nothing new here, as far as I can see.

Nonetheless, it is a commendable effort to merge everything a regular user needs in one place.

Dr.UldenWascht
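
On the VRAM concerns raised in this comment and elsewhere in the thread: the OmniGen repository documents a few inference options aimed at smaller GPUs. The sketch below is based on my recollection of those docs, so the flag names (separate_cfg_infer, offload_model, use_kv_cache, offload_kv_cache, max_input_image_size) and their exact behaviour should be verified against the current README; the reference image path is hypothetical.

from OmniGen import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")

# Options the repo suggests for ~8 GB cards: trade generation speed for a
# smaller VRAM footprint by splitting the CFG passes, keeping weights and
# the KV cache in system RAM, and capping the reference image resolution.
images = pipe(
    prompt="The woman in <img><|image_1|></img> is standing on a beach.",
    input_images=["./reference.jpg"],  # hypothetical path
    height=768,
    width=768,
    guidance_scale=2.5,
    img_guidance_scale=1.6,
    separate_cfg_infer=True,    # run conditional and unconditional passes separately
    offload_model=True,         # keep model weights on the CPU until needed
    use_kv_cache=True,
    offload_kv_cache=True,      # spill the KV cache to system RAM
    max_input_image_size=768,   # downscale large reference images
    seed=0,
)
images[0].save("low_vram_edit.png")

Expect these options to make generation noticeably slower: they reduce peak VRAM usage, not the total amount of compute.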

I tried it without an image (10 minutes), and with 1 image... 1 hour 😢

travelwithfaycal

Good video, but it's too slow. How do you speed it up on a low-VRAM machine?

snipervicar