Fantastic New ControlNet OpenPose Editor Extension & Image Mixing - Stable Diffusion Web UI Tutorial

Playlist of #StableDiffusion Tutorials, Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img:

Easiest Way to Install & Run Stable Diffusion Web UI on PC by Using Open Source Automatic Installer:

How to use Stable Diffusion V2.1 and Different Models in the Web UI - SD 1.5 vs 2.1 vs Anything V3:

Transform Your Sketches into Masterpieces with Stable Diffusion #ControlNet AI - How To Use Tutorial:

Sketches into Epic Art with 1 Click: A Guide to Stable Diffusion ControlNet in Automatic1111 Web UI:

Guts - Berserk - Character LoRA:

Lollypop Upscaler:

8 GB LoRA Training - Fix CUDA Version For DreamBooth and Textual Inversion Training By Automatic1111:

PoseMaker App on Hugging Face:

Used PNG file and pose:

0:00 Introduction to #OpenPose Editor, ControlNet, LoRA, Stable Diffusion, Automatic1111 Web UI
1:26 Introduction to image mixing by using ControlNet
1:50 Web UI and extension versions used in this video
2:07 How to install OpenPose Editor extension of Automatic1111 Web UI
3:12 How to use OpenPose Editor
3:24 What each dot and stick means in the OpenPose editor, and which body part each represents
3:52 How to generate image by using OpenPose Editor and ControlNet
5:28 How to consistently generate 2 or more characters with Stable Diffusion and OpenPose Editor
6:36 How to generate pose map from existing images by using OpenPose Editor
7:21 How to add more than 2 characters into the OpenPose Editor
7:35 What the "add background" feature of OpenPose Editor does
7:59 More advanced Pose Maker application hosted on Hugging Face
9:05 How to generate Guts / Berserk character in Salt Bae pose
9:24 How to use downloaded CivitAI LoRA models in Automatic1111 Web UI
9:35 How to install Kohya-ss Additional Networks for LoRA models
9:50 Where to put the downloaded LoRA model
10:20 How to activate and use downloaded custom CivitAI LoRA models
10:40 How to use PNG info feature to load parameters
11:02 How to generate the Guts Salt Bae image used in thumbnail
12:37 How to install and use different upscalers in Automatic1111 Web UI
13:35 How to mix 2 images by using ControlNet extension of Automatic1111 Web UI
14:30 How to obtain good mixed images by using HED
14:45 Important part / trick of image mixing in ControlNet
17:51 Ending speech (very important)

Stable Diffusion Automatic1111: Stable Diffusion is a state-of-the-art text-to-image model based on a latent diffusion process: generation starts from pure random noise, which is progressively denoised step by step, guided by a text prompt, until a detailed image emerges. Automatic1111's Stable Diffusion Web UI is the most widely used open-source interface for running the model locally.
The Web UI has several advantages over other front ends. It supports txt2img, img2img, and inpainting; it can load custom checkpoints, LoRA models, and textual inversion embeddings; and it has a rich extension ecosystem, including the ControlNet and OpenPose Editor extensions used in this video.
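The iterative denoising at the heart of diffusion models can be sketched as a single toy reverse step. This is a generic DDPM-style update in plain NumPy with illustrative parameter values, not Automatic1111's actual sampler code:

```python
import numpy as np

def ddpm_denoise_step(x_t, predicted_noise, alpha_t, alpha_bar_t):
    """One reverse-diffusion step: subtract a scaled portion of the
    predicted noise from the current noisy image x_t (deterministic
    mean update only, no added sampling noise)."""
    coeff = (1 - alpha_t) / np.sqrt(1 - alpha_bar_t)
    return (x_t - coeff * predicted_noise) / np.sqrt(alpha_t)

# Toy example: a 4x4 "image" corrupted with known noise.
rng = np.random.default_rng(0)
clean = np.ones((4, 4))
noise = rng.standard_normal((4, 4))
alpha_t, alpha_bar_t = 0.99, 0.5  # illustrative schedule values
x_t = np.sqrt(alpha_bar_t) * clean + np.sqrt(1 - alpha_bar_t) * noise

x_prev = ddpm_denoise_step(x_t, noise, alpha_t, alpha_bar_t)
print(x_prev.shape)  # (4, 4)
```

In a real sampler the noise is predicted by a trained U-Net conditioned on the text prompt, and this step is repeated for tens of timesteps.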

OpenPose: This is an open-source library for real-time multi-person keypoint detection, written in C++ with Python and MATLAB bindings. It can detect and track human body and hand keypoints in real time and works with both single images and video streams.
OpenPose uses deep learning models to detect human body and hand keypoints from 2D images or videos, and it provides an easy-to-use interface for developers to integrate this functionality into their own applications. The library has been used in a wide range of applications, including human-computer interaction, sports analytics, and robotics.
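The dots and sticks you drag around in the OpenPose Editor correspond to a standard body skeleton. Below is a sketch of the widely used 18-keypoint COCO-style layout; the exact ordering and labels may differ slightly between tools, so treat this as illustrative:

```python
# The 18 body keypoints of the common OpenPose/COCO-18 skeleton.
# Ordering follows the usual COCO-18 convention (assumption: a given
# editor may order or label these slightly differently).
KEYPOINTS = [
    "nose", "neck",
    "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist",
    "right_hip", "right_knee", "right_ankle",
    "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]

# The "sticks" are limbs connecting pairs of keypoint indices.
LIMBS = [
    (1, 0),                        # neck - nose
    (1, 2), (2, 3), (3, 4),        # right arm
    (1, 5), (5, 6), (6, 7),        # left arm
    (1, 8), (8, 9), (9, 10),       # right leg
    (1, 11), (11, 12), (12, 13),   # left leg
    (0, 14), (14, 16),             # right eye - ear
    (0, 15), (15, 17),             # left eye - ear
]

print(len(KEYPOINTS), len(LIMBS))  # 18 17
```

ControlNet's openpose model reads a rendering of exactly this kind of skeleton as its conditioning image.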

Overall, both Stable Diffusion Automatic1111 and OpenPose are powerful tools for image processing and computer vision, and they have many potential applications in various fields.
Comments

Please join the Discord, mention me, and ask me any questions. Thank you for the likes, subscribes, shares, and Patreon support. I am open to private consulting with a Patreon subscription.

SECourses

Finally a helpful guide. For some reason, other tutorials only showed the TAB version of this editor and provided wrong links.

matsu

I don't speak your language, but I still understood the whole video, very well explained, you explained every detail, not like most who think we already know how to do everything

JulioCesar-ltwp

👏👏👏 Another very, very useful video. The ControlNet OpenPose editor explained ultra-well! And I finally figured out how to use the "additional network" extension. Now it works. With all the other videos that touched on the same topic, I hadn't succeeded.

SandroGiambra

Actually, you no longer need the additional-networks extension. You just need to add the LoRA files into the \models\Lora folder, then click the "show extra networks" button (the red square under the main Generate button), then click the Lora tab and it will show all your LoRA files. Once you click one, the LoRA tag is added to the prompt; its strength can be adjusted by changing the :1 at the end to a smaller or bigger number like 0.5 or 1.5. You can also create subfolders and add thumbnails to sort all your LoRA, hypernetwork, checkpoint, and textual inversion files.
Thank you so much for the videos!
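The `<lora:name:weight>` prompt syntax described in the comment above can also be adjusted programmatically. Here is a small illustrative helper (a regex sketch, not part of Automatic1111 itself; the LoRA name "guts_berserk" is a made-up example) that rescales the strength of a LoRA tag in a prompt:

```python
import re

def set_lora_weight(prompt: str, lora_name: str, weight: float) -> str:
    """Replace the strength in a <lora:name:weight> tag, e.g. turn
    <lora:guts_berserk:1> into <lora:guts_berserk:0.5>."""
    pattern = rf"<lora:{re.escape(lora_name)}:[0-9.]+>"
    return re.sub(pattern, f"<lora:{lora_name}:{weight}>", prompt)

prompt = "guts from berserk, masterpiece <lora:guts_berserk:1>"
print(set_lora_weight(prompt, "guts_berserk", 0.5))
# guts from berserk, masterpiece <lora:guts_berserk:0.5>
```

Values around 0.5-1.0 are a common starting range; higher weights push the generation harder toward the LoRA's training style.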

josevillalobos

Thanks for this amazing tutorial video. I think the results generated in this video are the best among all YouTube tutorials on Stable Diffusion.

김준호-nwd

Thank you so much for this fantastic tutorial! I appreciate that you carefully went over all the settings, which not a lot of other tutorial guys do.

fluffsquirrel

Excellent tutorial, man. I had a hard time tracking down those models for ControlNet.

mattgue

Wow, I had no idea what ControlNet was before this; I assumed it was some plugin for a 3D modeling program. This thing is powerful! It would probably take like 500 prompts to get what a simple set of bones can do in 20 seconds of dragging around! Now that I know this, I guess it is time for me to try it a bit!

But I need to run a test first, since I am limited on space on Linux but have a mostly unused 2 TB HDD (it's an SMR drive, so I can't game with it or use it as a main OS drive) that I can use to see if I can start the webui from different folders/drives. On Windows it is as simple as install and run, but on Linux you usually don't get the choice of where you install and run from. (I should have given myself more than 60 GB for my trial of Linux; never thought I would "try it" for 2 years!)

I say all this since I am sure there will be version mismatches, and copying 20 GB folders where everything is set up is kind of a pain! (Just starting it up takes long enough on its own!) Also, in my old post where I talked about running LoRA training, I used these arguments: "--xformers --medvram --no-half-vae". I have 32 GB of RAM and it is about full, and sometimes it starts writing to swap (the Linux page file) just starting the webui and doing a few pictures, but it keeps the VRAM low, so I can't complain!

theepicslayersss

It seems that my new, fresh installation does not work for OpenPose. I put the ControlNet model in place and followed your tutorial to the letter (the only thing I did differently, because it was not specified in the tutorial, was to copy ffmpeg.exe into C:\Windows, otherwise I got error messages), but I get no image with the right pose, and the second image, the one from the pose canvas, is always a black square in the output. Any ideas to solve the problem, please?

divmerte

I don't have any models by default? It just says "None"; however, models show up in the preprocessor?? 4:17

nonetrix

Hey, is this extension still being maintained? Will it run correctly? It seems to have stalled, though it's such a useful extension for LoRAs.

___x__x_r___xa__x_____f______

I could tell you are Turkish from the way you speak, thanks brother :D

ragnarr

It would be so awesome, so cool, if your voice volume were a lot louder.

oldbonniegamer

When I download the OpenPose editor, I do not see any models in the Model section of the txt2img tab. I already found the models online, but I am not sure where to put them (folder-wise) to make them appear. When I click on Model, it is empty.

RaidenFafnir

Hi, do you know why I don't have a "send to ControlNet" button? I get "send to 0 or 1" because I have multiple ControlNets, but it doesn't work. Thanks

momippeti

When I generate with ControlNet, I get the error "RuntimeError: mat1 and mat2 shapes cannot be multiplied". What should I do?

ivoryphoto-video-aerial

Idk, I'm not getting the option to send to ControlNet. I just have "send to txt2img" and it doesn't do anything.

Prettymouth.

After yesterday's update of ControlNet, which allows using multiple ControlNets, the "send to ControlNet" button does not work. For the time being, until this gets fixed, you have to save the pose in the OpenPose Editor as a PNG and then open it from the ControlNet section.

chillm

Could we re-use the face, clothes, and background of the input image and only modify the pose based on the modified skeleton?

eskim