FULL Perspective Control - ControlNET in Automatic 1111 - Stable Diffusion

Use ControlNet in A1111 to have full control over perspective. You can use this with 3D models from the internet, or create your own 3D models in Blender or other software. This method allows you to create different views of a similar-looking location. You can also use Multi-ControlNet to place a character into the scene.
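The workflow above (feed a depth render of a 3D scene into ControlNet to lock the perspective) can also be driven through the A1111 API. Below is a minimal sketch of the request payload; the `alwayson_scripts`/`controlnet` structure follows the sd-webui-controlnet extension's API, but the model name and endpoint are assumptions that depend on your installed version.

```python
import base64

def build_depth_payload(depth_png_path: str, prompt: str) -> dict:
    """Build a txt2img payload that conditions generation on a depth image
    rendered from a 3D scene (e.g. a Blender viewport screenshot)."""
    with open(depth_png_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": image_b64,
                    # Preprocessor; use "none" if you already supply a depth map.
                    "module": "depth_midas",
                    # Assumed model filename -- match whatever depth model you downloaded.
                    "model": "control_v11f1p_sd15_depth",
                    # The "weight" slider shown in the video.
                    "weight": 1.0,
                }]
            }
        },
    }

# POST the returned dict as JSON to your local A1111 instance, typically
# http://127.0.0.1:7860/sdapi/v1/txt2img (requires launching with --api).
```

Raising or lowering `weight` trades off how strictly the output follows the depth map versus the prompt, which is exactly what the sliders in the UI control.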

#### Links from the Video ####

Support my Channel:

Comments

Wonderful tip. I've been trying to do this with sketches, but I'm excited to free myself from the tunnel background with buildings on either side stretching infinitely into the distance.

jaredbeiswenger

Followed a hard guide on the internet to install Stable Diffusion and they didn't even go over xformers. Learned about it a week later and this little line of code literally sped up my renders by like 45%, and I'm not joking. Some renders were taking 3 minutes (I use a lot of LoRAs) and this cut it down to like 1:30, some even faster. Thank you!!!!

masterjx
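For anyone wondering what "little line of code" the comment above means: `--xformers` is a launch argument of the A1111 web UI that enables the xformers memory-efficient attention optimization. A minimal sketch of where it goes (filenames match a standard install; the exact speedup varies by GPU):

```shell
# webui-user.bat (Windows install of the A1111 web UI):
#   set COMMANDLINE_ARGS=--xformers
#
# webui-user.sh (Linux/macOS install):
export COMMANDLINE_ARGS="--xformers"
```

Restart the web UI after editing the file; the console banner should then mention xformers being used.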

I knew those sliders in Controlnet could do wonders :-) Gr8 video mate 👌

dreamzdziner

Thanks for sharing the tip.
You're awesome ;)

nonameishere

What you really need is a nested-type default slider. Nesting sliders, or something similar, will be absolutely necessary to handle future complexities before they are simplified.

nathanielblairofkew

Woo! I asked if you could post something like this before and you did it, awesome! Thank you so much!

zoybean

FIRST, and I #CantWait to get into it! 💪🧠 —Thanks, Olivio!! 🎉

MarkDemarest

I can't believe I never thought of using Depth to create scenery in a certain perspective!! Wow. That is actually a really good use of that feature. I've spent a long time trying to get the camera in the right spot, and I could have just done it this way. Thanks for the clever tip!

santosic

Heck yah, great video! I have been using game screenshots for this sort of thing too, works great!

audiogus

It's incredible. It didn't even occur to me that you can just take a screenshot of any scene from a movie or a video game, take a recognizable map, and create your own version of it.
I bet we've all already taken a picture of our own apartment and played with the interior design, for example classicism or Victorian style.

aggressiveaegyo

Such an interesting video. The way we can control the weight, and how that affects the final results, is something I learned today 👍

bryanpa

Any way we can turn the image to different angles (like fill-in-the-blank) and get consistency, so it can be used for 3D scenes? For example: you keep the image as a texture on the model, then move the angle a bit so that the depth map reads it a different way, while still having an img2img step involved that can be tuned to stay texturally consistent. Of course the UV maps will update, but you would then have textures that can be used for 3D animation. Please reply to let me know lol.

animestories

Not all that different from a project I did before -- taking a picture of a person from the internet and turning it into an anime-style fanart. The only things that the original picture and the final product have in common are the pose of the character and the camera angle.

ControlNet Depth and Canny were very important, along with ControlNet Clip Image (style).

ryry

I have used kind of the same method, but for interior architecture scenes with images I generated. I will send it.

tarekramadan

Hi Olivio. I don't remember the name, but Stable Diffusion has an extension that bends the image and creates a corridor; you can create streets and whatever you want. But I have another question: do you know how to achieve stability when you create animation with the ControlNet script (img2img), or when you process a batch at once? The face can already be stabilized, but the clothes are always different — kind of similar, but not. If you know how and what to do, please show us.

michail_

Hi Olivio,
is there still a way to use Stable Diffusion, with all the changes and additions, in Google Colab?
Or even an alternative to Google Colab other than a local installation would be great!
Ty

RiyadJaamour

Hello! As always, everything is top notch, for which you have a lot of respect. In the video you mentioned that the resulting images can be converted into 3D. Could you show us how all these pictures could be converted back into a 3D model? )))

FikaBakilli

Olivio, does ControlNet generate a depth map? I was under the impression that you need to supply it with one.

cekuhnen

Using Guess Mode gives very good results.

entrypoint

I always wondered what those sliders would do…

matbeedotcom