Style Transfer Using ComfyUI - No Training Required!

Visual style prompting aims to produce a diverse range of images while maintaining specific style elements and nuances. During the denoising process, the method keeps the query from the original features while swapping the key and value with those from the reference features in the late self-attention layers.

Their approach enables visual style prompting without any fine-tuning, ensuring that generated images maintain a faithful style.
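The swap described above can be sketched in a few lines. This is a minimal, hypothetical illustration (invented function names and toy NumPy shapes, not the actual ComfyUI node code): the query comes from the content features, while the key and value are taken from the style reference features.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def style_swapped_attention(content_feats, reference_feats, d_k):
    """Scaled dot-product attention with the KV swap:
    content_feats, reference_feats: (tokens, d_k) projected features."""
    q = content_feats          # query kept from the original (content) features
    k = v = reference_feats    # key/value swapped in from the style reference
    scores = q @ k.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
content = rng.standard_normal((4, 8))
style = rng.standard_normal((4, 8))
out = style_swapped_attention(content, style, d_k=8)
print(out.shape)  # (4, 8)
```

Because the output is a weighted mix of the reference's value vectors, the style comes from the reference while the content layout (via the query) stays with the original image.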

My personal favourite so far - and yes, it works in ComfyUI too ;)

Want to help support the channel? Get workflows and more!

Links:

== More Stable Diffusion Stuff! ==
Comments

Is there a version for Automatic1111?

andyone

5:30 Have you seen Marigold depth yet? It's super crisp and clean for most of the images I threw at it. The only downside is that, whatever the base image is, it works best at 768x768, but you can rescale it back up to the base image size after Marigold does its magic.
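The downscale-then-restore trick the comment describes can be sketched as below. This is an illustrative assumption, not Marigold's actual API: `depth_model` is a stand-in callable, and nearest-neighbour resizing in NumPy replaces the image-scale nodes a real ComfyUI workflow would use.

```python
import numpy as np

def nn_resize(img, out_h, out_w):
    """Nearest-neighbour resize for a (H, W) or (H, W, C) array."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

def depth_at_preferred_size(image, depth_model, size=768):
    h, w = image.shape[:2]
    small = nn_resize(image, size, size)   # downscale to the model's sweet spot
    depth = depth_model(small)             # depth map at size x size
    return nn_resize(depth, h, w)          # rescale back to the original size

fake_model = lambda x: x.mean(axis=-1)     # stand-in for a real depth model
img = np.random.rand(1024, 512, 3)
out = depth_at_preferred_size(img, fake_model)
print(out.shape)  # (1024, 512)
```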

Main

Hi! Please upload the ControlNet Depth example. The ExponentialML GitHub has taken it down :(

craizyai

Earlier, I installed the nodes but didn't get around to trying them out. Now, you're making me regret not giving them a go! 😂😂

ultimategolfarchives

My Nerdy friend 🤘🥰 seed starting this week for my salad garden 😁

kariannecrysler

Hey there, I'm very new to ComfyUI and just learning about it. Is there a way to get the workflow from you to tinker around with?

Thanks either way, nice overview and tutorial <3

CrazyFist

Looks better than IPAdapter, cool. Sometimes you don't have a dozen photos of something made from clouds to train a style with.

attashemk

I think something got broken with the ComfyUI extension 2 days ago, because this just isn't working.

dogvandog

Would it work with batch sequencing for video? How about consistency?

twilightfilms

I tried the ComfyUI workflow from the GitHub page and it didn't seem to do much at all, until I realized it seems very reliant on piggybacking off the prompts and gets confused by anything beyond the basics. If your reference image is vector art and you put in a person's name, it won't take the style at all and just gives a photo of the person.

bladechild

After installation I got "module for custom nodes due to the lack of NODE CLASS MAPPINGS." Can somebody help with that?

alex.shapemotion

Can you build a workflow that has this style reference in it, for video-to-animation with unsampling?

Rachelcenter

Hey there, so I was also confused. For me it didn't work at all when I installed it.
So I dug into the code and fixed it. I also added some new settings. The code was merged a while ago, so definitely give it another shot!
If you do, note that there are 3 blocks. Each block can use the attention swapping, and each block can be configured to skip the swapping for the first n layers inside it (analogous to the paper). This is cool because it lets you control a bit better whether there should be a little content leakage, and whether the style should be a bit stronger or weaker.
Let me know if you have any issues or suggestions for change!

plexatic

Can't seem to get this to work with SDXL. Can anyone confirm that it is still working after the updates?

DemShion

The import keeps failing, and when I try to install the requirements, Triton or whatever fails.

Omfghellokitty

I could not make this workflow from the video. Please make it available for free if possible.

mr.entezaee

A couple of years ago there was a website that let you upload an image and apply its style to another image, so you could upload a plate of spaghetti and then an image of your mate, and you had a mate made of spaghetti... this reminds me of that. Gonna have to add it to ComfyUI (and fully watch this video) on my day off :)

GamingDaveUK

Which extension have you used for the BLIP nodes, please? I have installed both comfy_clip_blip_node and ComfyUI_Pic2Story, but neither shows up like yours :/

pmtrek

@Nerdy Rodent Great stuff. Request: on Patreon, can you release a version with a Canny ControlNet added to the depth ControlNet? I'm not yet at the stage of being able to do this myself...

contrarian

Thanks. I just tried it and I'm not getting the same results as you. Not even close. Images look mutilated... I've double- and triple-checked my work and reviewed the GitHub. Seems to me like this only works in extremely specific scenarios?

AnnisNaeemOfficial