Use Llama 3.2 to 'Chat' with Flux.1 in ComfyUI

With a super-simple interface, using this workflow for Flux.1 and Llama 3.2 couldn't be easier! No need to scroll around or look at any complex nodes - just have a conversation and make some images :)

Try these prompt examples!
Add more flowers, please
Give everyone a silly hat!
What is outside the window?
What is the main character thinking about?

Even more workflows on Patreon!

== Beginners' Guides! ==

== More Flux.1 ==
Comments

This is the video I've been waiting for! Downloaded Llama 3.2 like a week ago and sat on it. THE TIME HAS COME!

grahamulax

I quite like that complicated, messy version of ComfyUI... it makes me look clever, knowing how to use it, if anyone sees me working on some images. :) I'll certainly give this a try once I fix my computer.

amkire

Waaaiiiit a second, you're telling me that ComfyUI is now actually comfortable to use?... Impressive.

ajedi

Hello! Been following you from the start, but this is straight up amazing.

urbanthem

Thanks! I thought that rat was some Gordon Freeman wannabe :D

devnull_

It almost looks like Auto1111; well done.

quercus

Oh, Nerdy Rodent, 🐭🎵
he really makes my day, ☀😊
showing us AI, 💻🤖
in a really British way. ☕🎶

juanjesusligero

Hmm, with my meager 4070 Ti's 12 GB of VRAM, wouldn't it be better to run a GGUF Llama in system RAM so the image gen doesn't compete with the LLM? Or does it load into RAM every time you queue? I'm guessing a GGUF might not be out yet for this model, though.

Larimuss

Can't get the API LLM general link to work, while the basic workflow starting with Ollama from LLM-Party is working, but there's so little explanation of how it works that it's a pity.
I had an error first loading the Rodent workflow, but everything fell into place after installing the missing nodes.

lucvaligny

I found the video "Ollama does Windows?!?" by Matt Williams, which helped me get Ollama working, and I was able to use the workflow. I learned a lot getting it going.

rifz

Open Source is becoming amazing!
NR works in R&D at "The Mouse"?

MilesBellas

Great idea for a workflow, but like many others have mentioned, it will not even open. I did a clean install of ComfyUI and installed the packages mentioned on your page, but unfortunately nothing happens. Any chance of a check-up?

PugAshen

I had some problems getting it to work; I did an update and refresh, but no go. In the end I gave ChatGPT the output and asked it how to fix the errors. Now I've got it going, so maybe give that a try if you're having problems.

freestylekyle

Is there a setting to see the sampling progress as it's happening, so that you can cancel it if it's not what you want? Not sure if it's the SamplerCustomAdvanced node that doesn't show you the progress.
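(Editor's note: ComfyUI does have a launch option for live sampling previews; a minimal sketch, assuming a standard ComfyUI install launched from main.py:)

```shell
# Choose a live-preview method so a bad sample can be cancelled early.
# Valid values: none, auto, latent2rgb, taesd
# (taesd gives nicer previews but needs the TAESD decoder weights).
PREVIEW_METHOD="auto"
# Then launch ComfyUI with it, e.g.:
#   python main.py --preview-method "$PREVIEW_METHOD"
echo "$PREVIEW_METHOD"
```

With previews on, each sampling step renders into the node's preview area, and Cancel stops the queue item mid-run.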

dkamhaji

If the LLM can't see the image, it's no use for me.
Nice workflow, though, but very unoptimized; the Stable Diffusion part is very basic.

VASTimages

Can it be used with Textgen WebUI? Ollama is awful, lol: there's no way to use it across a network, it won't load your already-downloaded LLMs without converting and duplicating them, and it's a pain to point it at a new folder.

I love your videos and find them informative, though it does seem you're trying to turn ComfyUI into Auto1111, lol. Complexity is not as much of an enemy as tools that over-simplify can be... though perhaps that's just a personal standpoint.
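(Editor's note: for what it's worth, Ollama can in fact be reached across a network and pointed at a custom model folder via environment variables; a minimal sketch, where the host address and folder path are placeholders to adjust for your own setup:)

```shell
# Server machine: make Ollama listen on all interfaces rather than
# localhost only, and store models in a custom folder.
export OLLAMA_HOST=0.0.0.0:11434
export OLLAMA_MODELS=/data/ollama-models
# Then start the server:
#   ollama serve

# Client machine: point the ollama CLI (or an API-based ComfyUI node)
# at the server's address (placeholder IP).
export OLLAMA_HOST=http://192.168.1.50:11434
#   ollama list
```

Note that OLLAMA_MODELS changes where Ollama stores its own blob format; it does not make Ollama load arbitrary GGUF files in place, so the duplication complaint still stands for pre-downloaded models.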

DaveTheAIMad

This is sooo nerdy and weird. The workflow you show is nowhere to be found in my Comfy install when I browse the 4 templates that are offered. What miracle do you perform to load this new layout into the program?

fullflowstudios

Can I switch out Llama 3.2 and use another variant of the 3.2 models?

DezorianGuy

Hi, you've got a subscriber here; congratulations on the amazing work. I have an issue: after upscaling it doesn't look perfect, the edges are somewhat blurry. Any idea how to solve it? I enabled High res... but it didn't fix the issue.

NeptuneGadgetBR

So it works well, but it is loading this huge dev model every time... slowly, even on a 3090. Is there some hidden setting to keep it loaded?
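(Editor's note: two knobs may help here, offered as assumptions about this particular setup rather than a confirmed fix: ComfyUI's VRAM flags, and Ollama's keep-alive setting, since the LLM and the Flux model can otherwise evict each other between queue runs.)

```shell
# ComfyUI side: keep model weights resident between queue runs
# (only sensible if the card has VRAM to spare for both models):
#   python main.py --highvram
# Ollama side: control how long the LLM stays loaded after a request
# (-1 = keep loaded indefinitely, 0 = unload immediately;
#  the default is a few minutes).
export OLLAMA_KEEP_ALIVE=-1
```

If VRAM is the bottleneck, keeping both models resident can backfire; in that case offloading the LLM to CPU/RAM may be the better trade.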

purposefully.verbose