AI Is Getting Out of Control in Blender | ControlNet

The content creator coolzilj took Twitter by storm when they published a tweet about a new tool for Blender that helps artists create hyper-realistic renders using just Blender.

📌 Blender-ControlNet:

Get a 1 month free trial of Skillshare:

----------------------------------------------------------------------------
***Check out these amazing Blender Addons***

⍟ Modeling:
Kit Ops 2 Pro
Hard Ops
Fluent
Box Cutter
Mesh Machine
Cablerator
Shipwright

⍟ Architecture/Rendering:
CityBuilder 3D
Sketch Style
E-Cycles
K-Cycles
Photographer
Pure-Sky
Physical Starlight And Atmosphere

⍟ Vegetation:
Scatter
Grasswald
Botaniq Tree Addon
Tree Vegetation
Grassblade

⍟ VFX, Simulations & Dynamics:
FLIP Fluids
Khaos
Carl's Physics
RBDLab Addon
Spyderfy

⍟ Materials/Texturing:
DECALmachine
Extreme PBR
Fluent: Materializer
BPainter
Material Library Materialiq

⍟ Cloth Simulation:
Simply Cloth

⍟ UV Unwrapping:
Zen UV
UV Toolkit
UVPackmaster

⍟ Rigging & Animation:
Human Generator
Auto-Rig Pro
Faceit
Animax
Voxel Heat Diffuse Skinning
Animation Layers

⍟ Sculpting:
Sculpt+Paint Wheel

⍟ Retopology:
RetopoFlow

⍟ Ready Vehicles:
Car Transportation
Traffiq

⍟ Scripting:

Disclaimer: Some links here are affiliate links that help us create more content. Thanks in advance for using our links
Comments

The only issue is consistency of the design: with every edit it changes the design completely, so you're not able to build upon a design you already liked.

ultraozy

To be honest, even the script you're reading seems a bit robotic. That "in conclusion" at the end was just like ChatGPT.

christianwilliam

For me the end goal isn't to create artwork in Stable Diffusion. Rather, if AI could generate models in Blender that we can edit, that would be a game-changer.

cagnazzo

I was curious about the sudden increase in traffic to my repository, but now I understand.
Thank you for sharing it with others.
I understand that some may mock me, saying that I could achieve the same effect by doodling in Painter. They are correct.
However, my intention was to demonstrate how easy it is to control an AI-generated image in Blender using a single layer of ControlNet and tools that you are already familiar with.
My ultimate goal is to utilize multiple ControlNets for manipulating my drawings. This involves combining various maps such as openpose, depth, canny, and segmentation generated by Blender, which are not that easy to draw in Painter and not that accurate when generated by AI.
I just migrated to the new ControlNet API and encourage you to test the updated script.🤠
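To give a concrete idea of the multi-ControlNet setup described above: a minimal sketch of how such a request could be assembled, assuming the AUTOMATIC1111 web UI with the sd-webui-controlnet extension (the `alwayson_scripts` field and unit keys follow that API; the prompt, maps, and paths are placeholders, not the actual script):

```python
import base64

def b64_png(path):
    """Read a map rendered out of Blender (depth, openpose, etc.) as base64."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def build_payload(prompt, control_maps):
    """Build a txt2img payload that stacks one ControlNet unit per map.

    control_maps: list of (module_name, base64_image) pairs,
    e.g. [("depth", ...), ("openpose", ...)].
    """
    units = [
        {"module": module, "input_image": image, "weight": 1.0}
        for module, image in control_maps
    ]
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {"controlnet": {"args": units}},
    }

# The payload would then be POSTed to the locally running web UI, e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

Each rendered pass becomes one ControlNet unit, so adding a segmentation or canny layer is just another entry in `control_maps`.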

coolzilj

Being new to this space, it is SO amazing to watch tutorials from 5 months ago vs today. The progression of this tech is so fucking cool.

herbv

Impressive, but make it work in reverse (image -> mesh with texture) so it can be useful for game dev! The DreamAI addon is good for similar things, but not quite there yet either.

moshersmusic

What if, two generations from now, there is no reason for a youth to pursue a creative skill? Is this progress?

staranger

As someone running Stable Diffusion locally, without needing the internet: it sometimes does feel like magic when asking for anything my little pea brain wants. And it didn't even come with censors or a company saying what I can and can't do.. looking at Midjourney with that jab.

_Chessa_

Can't they create an AI that does auto UV unwrapping, auto retopology, or auto materials?

ingamgoduka

Is there any tutorial you guys would recommend to understand this better? I'm looking for a video that explains it step by step if possible! Thanks!

erickbarsa

I really feel in danger; everything is changing too fast to adapt to.

andresvillalba

Sick. This will change everything. Like, absolutely everything. I'm not sure how to feel about it, but as you said it doesn't really matter 'cause it's happening either way. Better to take advantage of it!

Miatpi

It would be nice if there were a visual representation of what the AI is doing with the scene, like a tree-like structure with control points and additional specifiers. Maybe real-time modification of its renders in a 3D traversable workspace. It should be like subdivision or sculpting: fine-tuning the result and maybe even backpropagating the result to the original with some added parts. Everything from the result to the original should be tweakable with the AI's assistance.

hewhointheearthlydomainsee

Asset flips will just get ... wild now ...

qywfltv

I've been waiting for AI to combine with Blender for a while. Can't wait until you, as a 3D animator, can make 2D projects using your 3D animations as a base☺️

micah

There has already been a good Stable Diffusion addon for Blender for like 2 years, and it doesn't require this amount of setup. It's wild how complicated this one is to set up.

BOSSposes

I don't care what anyone makes with this; it will never impress me. Only artists who have spent the time to perfect their craft will ever garner my respect.

JIrish

yo that one is scary 😳
like bro it's dividing your work time by 100

petier

One thing I find foolish about this script is that it doesn't take advantage of 3D depth information.
We are taking 3D data, converting it to 2D and then using a NN to infer the depth from a 2D image.
Why infer the depth from 2D when the data is right there in 3D?

I've been experimenting with rendering depth directly in blender and it is quite fun and effective.
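The idea in this comment can be sketched in a few lines: instead of having a network infer depth from the 2D render, take Blender's own Z pass (enabled under the View Layer passes and read out via the compositor) and normalize it into the 0–1 map a depth ControlNet model expects. A minimal sketch, assuming the raw Z values have already been pulled into a NumPy array:

```python
import numpy as np

def z_pass_to_depth_map(z, clip_end=100.0):
    """Convert a raw Z pass (distance from camera, in scene units) into a
    normalized inverse-depth map: near objects bright, far objects dark,
    matching the convention of common depth-ControlNet models."""
    z = np.clip(z, 0.0, clip_end)                    # background Z is often huge
    z = (z - z.min()) / (z.max() - z.min() + 1e-8)   # normalize to 0..1
    return 1.0 - z                                   # invert: near -> 1, far -> 0

# Example: a tiny 2x2 "render" where the top-left pixel is closest
# and the bottom-right pixel is empty background.
z = np.array([[1.0, 5.0], [10.0, 1e10]])
depth = z_pass_to_depth_map(z)
```

Because the values come straight from the scene geometry, the map is exact rather than estimated, which is exactly the advantage the comment points out.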

AZTECMAN

Well, I wish it would be the other way around lol: taking a pose from an image and seamlessly applying it to a model.

FulguroGeek