Create Realistic Render from Sketch Using AI (you should know this...)

Okay, let me show you how you can turn these quick drawings into realistic renders like these using AI in just seconds.

I totally agree that it is not good to rely on these new AI tools to do specific tasks for us, but why not use them to make our lives easier?

In the early stages of a project, it can speed up the workflow in the conceptual phase, especially when you have to present your ideas to others but have nothing more than a handful of quick conceptual sketches.

You can use AI to help you improve the overall quality of your presentation. At the same time, I think it is also possible to draw inspiration from some of the results: different forms, materials, and so on.
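
For anyone who wants to try this outside the web UI, here is a minimal sketch of the same sketch-to-render idea using the diffusers library with a scribble ControlNet. The model IDs, prompt, file names, and settings are illustrative assumptions, not the exact setup from the video.

```python
# A hedged, minimal sketch-to-render example: diffusers + ControlNet (scribble).
# Model IDs, prompt, file names, and settings are assumptions for illustration.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The quick hand drawing is the control image: it constrains the composition
# while the prompt supplies materials, lighting, and style.
sketch = load_image("concept_sketch.png")  # hypothetical file name

render = pipe(
    prompt="realistic architectural interior, natural light, wood and concrete",
    negative_prompt="blurry, distorted, low quality",
    image=sketch,
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
render.save("render.png")
```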

Let me know what you think about the new developments in the AI world related to Architecture!

Timeline
0:00 - Intro
0:32 - Start Stable Diffusion
1:00 - First View - Interior
4:10 - Interior Render Results
4:22 - Second View - Exterior
4:57 - Exterior Render Results
5:19 - Recap

Tools I Use
Comments

My younger clients will be able to do all my previous work in arch visualisation within a year. GAME OVER!!!

perrymaizon

This channel will grow so fast if you can show, using either Stable Diffusion or Midjourney 5.1, how to turn a SketchUp or 3ds Max export (JPEG) of a building exterior into the render we want, without a lot of distortion, using prompts.
There is no such video online, and I am positive that if people are not searching for it now, they will be very soon!

petera

Thank you for taking the time to show us this fantastic tool, and for the very inspiring ideas. I believe that AI resources are here to stay; all we have to do is figure out the best way to work with them. We are just starting to work with this, and we still have a lot to learn, including improving our writing skills to make better prompts.

Fabi_terra

PERFECT!!!! That's all I can say about it. Nice work bro 👍

adetibakayode

I wish some of the prompting could be replaced by inputting additional images and tagging or labeling through sketching, perhaps like in DALL·E. For example, instead of describing what kind of modern-style green sofa with geometric patterns I want, I should be able to drop a reference photo of such a sofa, or any other object, into my project. I am sure these kinds of features will come sooner rather than later, but what makes Stable Diffusion amazing is that it is also free and open source.

Constantinesis
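
A capability along these lines already exists in the diffusers library as IP-Adapter, which conditions generation on a reference photo instead of a text description. A minimal hedged sketch; the model IDs, weight name, and file names are assumptions:

```python
# Hedged sketch of reference-image conditioning with IP-Adapter in diffusers.
# Model IDs, weight name, and file names are assumptions.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)  # how strongly the reference steers the result

# Drop in a photo of the sofa instead of describing it in words.
sofa_reference = load_image("green_sofa_reference.jpg")  # hypothetical file

image = pipe(
    prompt="modern living room interior",
    ip_adapter_image=sofa_reference,
    num_inference_steps=25,
).images[0]
image.save("interior_with_reference.png")
```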

So this needs to be developed using an interactive user interface.

The word prompts need to become labels. Architects want to be able to draw lines from objects and label them feeding specific information into the AI generation.

The Architect does not care about multiple options as much as he cares about creating the specific option he desires.

He must be enabled through the interface to engage in an interactive back and forth: erasing parts and redrawing them, developing parts of the drawing, adding more specifics, all in an endeavour to produce a vision as close as possible to what he sees in his mind's eye.

This is of utmost importance.

All said and done, on a positive note, this is the only AI-related application I have seen thus far that architects may actually use and be willing to pay for.

It would be idiotic not to take it forward to fruition.

aceheart

Hey, I am an architect from Switzerland, and it really amazes me how far we have come. I already gave a presentation in my architectural office, and I am about to implement this in our design workflow... After using Midjourney a lot, I came across the problem of not having the control to change just one specific thing... I am now trying a combination of Stable Diffusion and MJ. Thank you for your informative video!

Rambli

Nice work. This is clearly the direction concept generation is heading. Probably in another four weeks this capability will be available in numerous web apps for free.

tomcarroll

I didn't know such a thing was possible, going from napkin sketch to render. Thanks

panzerswineflu

Thank you so much for sharing this. I am trying to figure out how to do something similar with portraits: keeping the original face while changing the clothes, background, focal length, etc. This is a great starting point.

ThoughtFission
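
One plausible route for the portrait case described above is masked inpainting, where the face stays untouched and only the masked clothes and background are regenerated. A hedged sketch with diffusers; the model ID, prompt, and file names are assumptions, and this is not from the video:

```python
# Hedged inpainting sketch: keep the unmasked face, regenerate the masked
# clothes/background. Model ID, prompt, and file names are assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

portrait = load_image("portrait.png")  # hypothetical file
mask = load_image("mask.png")          # white = regenerate, black = keep (the face)

edited = pipe(
    prompt="person in a tailored navy suit, studio background",
    image=portrait,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
edited.save("portrait_edited.png")
```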

Hey, great video! Do you know whether it is possible to generate multiple angles of the same room so that the details remain consistent?

taavetmalkov

WOW, it worked!!! THANKS A LOT!!! I had to download some important files, like the .pth models, and drag them to the right place, only to find them afterwards under ControlNet / Model, just like in your example. YOU ARE AMAZING WITH THESE TUTORIALS!!! THANKS

chantalzwingli
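
For anyone stuck on this step, here is a hedged sketch of downloading a scribble ControlNet .pth from Hugging Face and copying it into the folder the AUTOMATIC1111 ControlNet extension scans. The repo ID and local path are assumptions based on a default install:

```python
# Hedged sketch: fetch a ControlNet .pth and place it where the AUTOMATIC1111
# ControlNet extension looks for models. Repo ID and paths are assumptions.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

cached = hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11p_sd15_scribble.pth",
)
models_dir = Path("stable-diffusion-webui/extensions/sd-webui-controlnet/models")
models_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(cached, models_dir / "control_v11p_sd15_scribble.pth")
# After a UI refresh, the model should appear in the ControlNet / Model dropdown.
```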

Exactly what I was looking for, thank you!

IDArch

It is almost exactly what I was searching for, thank you for your help.

m.a.a.

Brilliant tutorial. Many thanks for this.

socrates

Hello, my problem with this is that I can't find Scribble when I open the preprocessor menu, and my generated images are very different from the sketch I upload. Can you help me with that, please? I appreciate your work <3

ovidiupatraus-ubuq

Not sure, but I believe you don't need to choose anything from the preprocessor menu. Just leave it at None, because otherwise you let SD create a sketch from your sketch as the input.

kasali
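
In code, the point above looks roughly like this (a hedged sketch using the controlnet_aux package; file names are assumptions). A preprocessor is only needed to turn a photo into a scribble; a drawing already is one:

```python
# Hedged sketch of the preprocessor logic: an actual drawing goes straight to
# ControlNet (the webui equivalent of preprocessor = None); a photo is first
# converted to a scribble. File names are assumptions.
from controlnet_aux import HEDdetector
from diffusers.utils import load_image

drawing = load_image("concept_sketch.png")
control_image = drawing  # already a sketch: no preprocessing needed

hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
photo = load_image("site_photo.png")
control_from_photo = hed(photo, scribble=True)  # photo -> scribble first
```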

This is amazing. I have a question: what does the <lora:epicNoiseoffset_v2:1> mean in your keywords? And what does dslr mean? Much appreciated!

ckngsane
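
For context on the question above: <lora:epicNoiseoffset_v2:1> is AUTOMATIC1111's inline syntax for applying a LoRA named epicNoiseoffset_v2 at weight 1, and dslr is simply a prompt keyword that nudges the output toward a photographic look. A hedged diffusers-side equivalent; the base model and LoRA path are assumptions:

```python
# Hedged diffusers equivalent of <lora:epicNoiseoffset_v2:1>: load the LoRA
# weights and set their strength. Base model and LoRA path are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras", weight_name="epicNoiseoffset_v2.safetensors")

image = pipe(
    prompt="architectural interior render, dslr, sharp focus",
    cross_attention_kwargs={"scale": 1.0},  # LoRA strength, like the :1 suffix
    num_inference_steps=25,
).images[0]
image.save("with_lora.png")
```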

Could you please tell me about your computer's specs? What graphics card are you using, and does it take a long time to generate each image?

SpinnerPen

I can't figure out how to install it. When I open the webui-user batch file, the console tells me to press any key to continue, and when I do, it just closes the window. I have restarted the PC, and it is still not working properly.

moizzasaeed