Get Better Results With AI by Using Stable Diffusion for Your Arch Viz Projects!

Big thanks to NVIDIA Studio Poland for sending us the NVIDIA GeForce RTX 4090 GPU.
#NVIDIAStudio #NVIDIACreators

🚀 Start Your AI journey with GeForce RTX 40 series:

🔴 Useful Videos:

⏱️ Timestamps
0:00 Intro
0:29 System Requirements
1:10 How to Install Stable Diffusion
3:56 Stable Diffusion Models
5:29 Checkpoint Merger
6:34 NVIDIA Sponsored Segment
7:29 Stable Diffusion Interface
9:20 Resolution Limitations
9:48 How to Generate Large Images
10:39 Batch Count & Size
11:17 CFG Scale
11:37 Img2Img
13:21 Improve Greenery with Inpaint


🔥 My Courses that will help you improve your skills:

👩💻 Software & Tools I use & recommend:

✅ Let's connect:

#archvizartist #rendering #architecturalvizualisation
PS: Some of the links in this description are affiliate links that I get a kickback from 😜
Comments

Best Stable Diffusion video for Archviz! Only the essentials, explained precisely but calmly without overwhelming the viewer.

MatteoFontana

Hey, fantastic video! I highly recommend downloading Python from the Microsoft Store. I tried downloading it directly from the website and ran into some issues, but had none when I installed it from the Microsoft Store.
Also, I think you may have forgotten to add the information about the upscaler you advised to the blog post, as you said you would in the video :).

JANOOB
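
For anyone following the installation step (1:10 in the timestamps): before launching the webui, it can help to confirm which Python the system will actually use. A minimal sketch, assuming the AUTOMATIC1111 webui, which is commonly run on Python 3.10.x; the function name is just illustrative.

```python
# Hypothetical pre-flight check before launching stable-diffusion-webui.
# Assumption: the webui build used in the video targets Python 3.10.x.
import sys

def check_python_for_webui() -> bool:
    major, minor = sys.version_info[:2]
    print(f"Active interpreter: {sys.executable} (Python {major}.{minor})")
    if (major, minor) != (3, 10):
        print("Warning: the webui is commonly run on Python 3.10.x; "
              "other versions may fail to install its pinned dependencies.")
        return False
    return True

if __name__ == "__main__":
    check_python_for_webui()
```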

Thanks for this - I've made it through to the end and can get going with AI. Just a quick update for the latest version: at 14:30, where you drag the updated image back into Inpaint, this is no longer a drag-and-drop operation; it's done with a "send to" button under the generated image. Thanks again.

maxable

It doesn't seem to work; I keep getting a multitude of errors almost immediately.
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.

aang_arang
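
The hash-mismatch error quoted above usually points at a stale or partially downloaded pip cache rather than actual tampering. A minimal recovery sketch, assuming it is run from the stable-diffusion-webui folder and that requirements_versions.txt is the file being installed (both assumptions, not details from the video):

```python
# Sketch of the usual recovery for pip's "hashes do not match" error:
# purge the download cache, then reinstall with the cache disabled so every
# package is fetched fresh. Paths and file names are assumptions.
import subprocess
import sys

def reinstall_without_cache(requirements: str = "requirements_versions.txt") -> None:
    # Drop any stale or partially downloaded wheels that can trip the hash check.
    subprocess.run([sys.executable, "-m", "pip", "cache", "purge"], check=True)
    # Reinstall the pinned requirements, bypassing the cache entirely.
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", requirements, "--no-cache-dir"],
        check=True,
    )

if __name__ == "__main__":
    reinstall_without_cache()
```

If the error persists, deleting the webui's venv folder and letting the launcher rebuild it is the other common fix.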

Great to see the channel keeping up with the latest news 👌 Bravo!

TheGalacticIndian

If you want to showcase your graphics card's capabilities or promote it, Stable Diffusion may be the best tool for the job.

Most people don't have a powerful CPU or GPU and prefer to use GPU cloud services that also provide AI image-generation tools, like Leonardo AI.

Leonardo AI is also based on Stable Diffusion, but its developers designed it so that artists don't need to worry about technical details such as the multiple installation steps or potential errors along the way. Using Leonardo AI saves me from technical headaches and lets me focus on being more creative.

Every feature mentioned in this video is also available in Leonardo AI, at the additional cost of their GPU services. These features include text-to-image, image-to-image, inpainting, outpainting, etc.

puja
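
For anyone who prefers to stay local rather than use a cloud service, the text-to-image step discussed in the video can also be scripted with the Hugging Face diffusers library. A minimal sketch, assuming a CUDA GPU and the SD 1.5 checkpoint published under the model id runwayml/stable-diffusion-v1-5; the prompt and settings are placeholders, not the ones used in the video.

```python
# Minimal text-to-image sketch with diffusers (not the webui or Leonardo AI).
import torch
from diffusers import StableDiffusionPipeline

# Assumed model id for the SD 1.5 checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="modern house exterior, overcast light, photorealistic archviz",
    negative_prompt="blurry, distorted geometry",
    guidance_scale=7.5,        # the CFG scale covered at 11:17
    num_inference_steps=30,    # sampling steps
).images[0]
image.save("txt2img_test.png")
```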

Hi Ava, thank you for the tutorial, it is the only one in the architecture field in English! Thank you a lot :) It would be super interesting to hear about your experience with models, LoRAs, and negative models. Maybe that's an idea for another video :)

One more thing I noticed during your process with the Deliberate model: you won't need to blend some parts in Photoshop if you use the inpainting version of the model (it's usually placed at the very bottom of the model's description on Civitai).

All the best!
Pavel.

py
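
On the inpainting-model tip above: with the diffusers library, a dedicated inpainting checkpoint is loaded through its own pipeline and driven by a mask in which white marks the area to regenerate. A hedged sketch, assuming the runwayml/stable-diffusion-inpainting model id and placeholder file paths.

```python
# Sketch of inpainting with a dedicated SD 1.5 inpainting checkpoint.
# Model id and file names are assumptions, not taken from the video.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("render.png").convert("RGB")          # the render to fix
mask_image = Image.open("greenery_mask.png").convert("RGB")   # white = regenerate

result = pipe(
    prompt="dense realistic shrubs and ground cover, photorealistic",
    image=init_image,
    mask_image=mask_image,
    guidance_scale=7.5,
).images[0]
result.save("render_inpainted.png")
```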

It's great that you put the latest new releases to practical use. I've just finished a course on using SD and it's amazing! I've been swept up in a whirl of improving my viz images :)

atajqtsawa

You did a lot of work to make this video! Thank you so much.

ameralhomsi

Hi, great video.
I noticed that at vividvisions they are using Stable Diffusion 2.1. Is that different from 1.5, or is it just a checkpoint model that is 2.1? I'm a bit confused.
Can someone please explain the differences? Thank you :-)

tomaskorch

You managed to pack a lot of information into a compact package. At the moment, I am interested in how AI becomes part of the 3D workflow, for example:
- how simple a 3D model can be and still be usable for generating an AI image (see the sketch after this comment)
- how AI can make textures that are used in 3D modeling

Regenerating parts of the image can also be done directly in Photoshop, but it cannot do everything that SD can; it depends on the model and settings. I'm really looking forward to the day when AI learns to make 3D models directly as well; that will be interesting.

hotlineoperator
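
On the first bullet in the comment above (how simple the 3D base can be), the image-to-image mode covered at 11:37 is the relevant piece: the render only needs to pin down composition and rough geometry, and the denoising strength decides how much the AI repaints. A minimal diffusers sketch, assuming a rough clay render saved as clay_render.png and the same SD 1.5 model id as in the earlier sketch.

```python
# Sketch of image-to-image from a simple 3D base render.
# Model id, file name, and prompt are assumptions for illustration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

rough_render = Image.open("clay_render.png").convert("RGB")   # simple 3D base

image = pipe(
    prompt="contemporary interior, warm daylight, photorealistic archviz",
    image=rough_render,
    strength=0.45,        # lower = stays closer to the 3D base geometry
    guidance_scale=7.0,
).images[0]
image.save("img2img_from_render.png")
```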

Have you found out whether AI could be used to denoise our arch viz images?
For example, Arnold can denoise images, but LPEs or AOVs are not denoised, or it's quite complicated and time-consuming. AI could help with denoising, don't you think?


Man, I would use this just for the people; the result was amazing.

shinonyx

Hi Ava! I appreciate your tutorials, they are very helpful. Could you make a tutorial showing us how to make a boucle material that looks as realistic as possible and that can be in different colors (for example a gray boucle, a green boucle, etc.)? Thank you!

anamariac

Thanks Ava!! Really useful for an AI newbie... BTW, your render is awesome already!!

david.rosyada

Hi Ava, you are awesome for this tutorial and all other content! Thanks so much!
Following these steps, should I just download models that have SD 1.5 in the "base model" description? There are others like SDXL Turbo or SDXL Lightning (no idea what all this is, newbie here in AI).

Lewhateverr

So much to learn about AI from this one video! Thank you Aga! 😊😊

pankajteli

The tutorial is really good! 🔥 A bit long, but it's worth it with all the topics it covers!! 👏

MegaLauritagarza

Where are the links to the checkpoints and the upscaler? Amazing video BTW! Thank you, cheers.

mae

THANK YOU!! You cleared up a lot of things for me!

sarainr