ComfyUI With Meta Segment Anything Model 2 For Image And AI Animation Editing

In this exciting video, we delve into the cutting-edge realm of artificial intelligence and computer vision with Meta's Segment Anything Model 2, also known as SAM 2. This next-generation AI model revolutionizes object segmentation, offering real-time capabilities for both images and videos. SAM 2's advanced architecture, featuring a unique memory mechanism, enables precise segmentation even in challenging scenarios like occlusions and reappearances. With its state-of-the-art performance, SAM 2 is a game-changer in the field of AI-driven object segmentation.
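SAM 2's memory mechanism can be pictured as a small bank of recent per-object masks that conditions each new frame's prediction, which is what lets tracking survive occlusions and reappearances. The sketch below is a purely illustrative toy, not Meta's implementation: it keeps the last few masks and falls back to the most recent remembered one when the object is occluded in the current frame.

```python
from collections import deque

class ToyMaskMemory:
    """Illustrative stand-in for SAM 2's memory bank: remembers the
    last few masks per object so tracking can survive occlusions."""

    def __init__(self, size=4):
        self.bank = deque(maxlen=size)  # most recent masks, oldest dropped

    def predict(self, detection):
        # If the object is visible this frame, trust the detection and
        # remember it; otherwise reuse the newest remembered mask.
        if detection is not None:
            self.bank.append(detection)
            return detection
        return self.bank[-1] if self.bank else None

memory = ToyMaskMemory()
frames = [{"id": 1}, {"id": 2}, None, {"id": 3}]  # None = occluded frame
tracked = [memory.predict(f) for f in frames]
print(tracked)  # the occluded frame reuses the previous mask
```

The real model stores learned memory features rather than raw masks, but the control flow is the same idea: detect when possible, recall when not.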

Previous Segment Anything 1 Multi-Object Editing Example:

Discover the power of SAM 2 as we guide you through implementing this groundbreaking technology in ComfyUI. From adding stunning effects to videos to tracking objects with ease, SAM 2 proves to be a versatile tool for content creators, researchers, and engineers alike. With SAM 2's open-source nature and accessibility under the Apache 2.0 license, it paves the way for innovation and experimentation in the AI community, from hobbyists to seasoned researchers. Join us as we explore the limitless possibilities of SAM 2 in enhancing image editing, video object tracking, and AI animations.

Unleash the potential of SAM 2 in your projects by following our step-by-step tutorial on integrating this powerful AI model into ComfyUI. Witness how SAM 2 collaborates seamlessly with other custom nodes and large language models to accurately segment objects in videos and images. Whether you're a novice looking to explore the world of AI or a seasoned professional seeking advanced segmentation tools, SAM 2 offers a user-friendly and efficient solution. Join us on this journey of exploration and innovation with SAM 2, the future of object segmentation in artificial intelligence.

If you like tutorials like this, you can support our work on Patreon:

Comments

segment-anything-2
Save to ComfyUI/models/sam2
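If the custom nodes don't fetch the weights automatically, the checkpoints just need to live in the folder above. A minimal shell sketch, with a placeholder download URL (substitute the actual link from the segment-anything-2 model release page):

```shell
# Create the folder ComfyUI expects for SAM 2 checkpoints
mkdir -p ComfyUI/models/sam2

# Download a checkpoint into it (placeholder URL -- replace with the
# real link from the model release page before uncommenting):
# wget -P ComfyUI/models/sam2 https://example.com/sam2_checkpoint.safetensors

ls ComfyUI/models/sam2
```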

TheFutureThinker

It's fun and interesting seeing the progress Kijai made implementing this model in Comfy. Great explanation @benji!

goodieshoes

Good video, buddy, you have me opening Comfy and updating workflows... seems like a real upgrade over Impact's SAM 1.

Edit: I needed to change my Manager's security level to weak to install this.

aivideos

Thanks. Since you mentioned Segment Anything last time, I've liked using it more than other segmentation methods.

crazyleafdesignweb

I was thinking of this exact flow when SAM 2 was released.

The combination of both is dynamite. This could also be used with PaliGemma or a fine-tuned version of Florence-2.

Awesome job. 🎉
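The glue between a grounding model like Florence-2 or PaliGemma and SAM 2 is small: the detector emits bounding boxes for a text query, and those become prompts for the segmenter. A minimal sketch of one common conversion, boxes to positive click points (the box values and the helper name are illustrative, not from any specific node):

```python
def boxes_to_point_prompts(boxes):
    """Convert detector bounding boxes [x1, y1, x2, y2] into
    point prompts (box centers) with positive labels (1), the
    shape SAM-style models accept as foreground clicks."""
    points, labels = [], []
    for x1, y1, x2, y2 in boxes:
        points.append([(x1 + x2) / 2, (y1 + y2) / 2])  # box center
        labels.append(1)                               # 1 = foreground
    return points, labels

# Two hypothetical Florence-2 detections, e.g. for "dancer" and "orange"
boxes = [[10, 20, 110, 220], [300, 40, 380, 120]]
points, labels = boxes_to_point_prompts(boxes)
print(points, labels)  # [[60.0, 120.0], [340.0, 80.0]] [1, 1]
```

Many workflows pass the boxes straight through as box prompts instead; the point version is handy when a node only accepts click coordinates.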

santicomp

Thanks for the tutorial. One question though: which is better, AnimateDiff or MimicMotion?

thibaudherbert

Thanks, I will update my workflow to try SAM 2.

kalakala

How can we have IPAdapter ignore the background and only change the style of the subjects?

thegtlab

Can you post a video on inpainting with SAM 2 and an SD 1.5 model, please?

DP-zwsb

At 4:00 you say "then you can load up Segment Anything 2," but the video doesn't show how you loaded the nodes. Could you please explain how you went from the blank screen to the full node setup? I'm stumped on this step. Thank you!

adrivlogsgt

Is it possible to use the model to segment deformed Stable Diffusion / Midjourney pictures (multiple fingers, blurry faces, etc.)?

antoniojoaocastrocostajuni

That's a first :/

"ComfyUI SAM2(Segment Anything 2) install failed: With the current security level configuration, only custom nodes from the "default channel" can be installed."
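That error comes from ComfyUI-Manager's security setting: at the default level, only custom nodes from the default channel can be installed. Lowering the level to `weak` in the Manager's `config.ini` allows the install; the exact file location varies by install (often under the ComfyUI-Manager folder), and it's sensible to restore the original level afterwards. A sketch of the relevant fragment:

```ini
; ComfyUI-Manager config.ini (location varies by install)
[default]
security_level = weak
```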

kallamamran

Hi, is there a way to output/save only the orange, instead of the mask of the orange?
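To save the cut-out object itself rather than its mask, multiply the image by the mask and use the mask as an alpha channel. In ComfyUI this is normally done with image/mask composite nodes; the NumPy sketch below just illustrates the operation, with a tiny made-up image:

```python
import numpy as np

def apply_mask(image, mask):
    """Keep only masked pixels: returns an RGBA image where the mask
    becomes the alpha channel, so the background is transparent."""
    rgb = image * mask[..., None]            # zero out background pixels
    alpha = (mask * 255).astype(image.dtype) # mask -> alpha channel
    return np.dstack([rgb, alpha])           # H x W x 4 (RGBA)

# Tiny 2x2 example: white image, only the top-left pixel is masked
image = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]], dtype=np.uint8)
cutout = apply_mask(image, mask)
print(cutout.shape)  # (2, 2, 4)
```

Saved as PNG, the result shows only the masked object on a transparent background.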

lionhearto

I tried this with several videos. On some it worked great: Florence tracked the dancer fine and SAM 2 masked it well. On others Florence once again tracked well, but SAM 2 only masked part of the dancer, like their shorts. I'm not sure what causes this.

weirdscix

It doesn't always work well. Sometimes it doesn't segment well when there are several objects.

rkwybzh

Awesome videos on Florence, thanks for your time creating these.
A quick question: when I use Florence for captions in an AnimateDiff and IPAdapter workflow, I get two results:
1. the final animation
2. the animation with the Florence captions

For some reason the Florence-captioned animation plays much faster, even though its Video Combine is set to the same frame rate (24 fps) as the plain animation (without the captions).

Any idea why this is happening or how to fix it?
Thanks in advance 🙏

suzanazzz