AI Images to Meshes / Stable Diffusion & Blender Tutorial

After my quick proof-of-concept experiment with this technique, I've gotten many requests to explain how I made these meshes and what Stable Diffusion actually does in this case. Here is your guide.

ZoeDepth model

ShaderMap
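In a nutshell, the workflow goes image → depth map (ZoeDepth) → displaced mesh. The displacement step can be sketched in numpy as a toy illustration (made-up data and my own helper names, not the actual Blender modifier setup):

```python
import numpy as np

def depth_to_mesh(depth, scale=1.0):
    """Turn an HxW depth map into a displaced grid mesh.

    Returns (vertices, faces): vertices is (H*W, 3) with x/y on the
    image plane and z driven by depth; faces is a list of quads
    given as vertex-index tuples.
    """
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    verts = np.stack(
        [xs / (w - 1), ys / (h - 1), depth * scale], axis=-1
    ).reshape(-1, 3)
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            # one quad per grid cell, counter-clockwise
            faces.append((i, i + 1, i + w + 1, i + w))
    return verts, faces

depth = np.linspace(0, 1, 16).reshape(4, 4)  # toy 4x4 "depth map"
verts, faces = depth_to_mesh(depth)
print(len(verts), len(faces))  # 16 vertices, 9 quads
```

In Blender the same effect comes from a subdivided plane with a Displace modifier driven by the depth image; the sketch just shows what that displacement amounts to numerically.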

Background music: (me jamming on an Elektron)

Follow me on:
Comments

What is the purpose of this, though? It can be used for distant objects *maybe*, but there are easier ways to make those. For general-purpose assets, you really can't pass the quality standard of modern games with this tech, not to mention this is just the base color. And throw away the aesthetic consistency between models too: AI either makes nearly identical images if you ask, or it just can't understand what you are trying to do at all. Plus, if you want symbolism in your game, there are additional steps to fix this, which I think is way more cumbersome and boring than actually making the asset. I didn't even mention cinema, since these kinds of assets are pretty low quality even for games. (Just to add, it is still ethically questionable to use these in a profit-driven project.) Oh, one more thing: usually, games require some procedurality in their textures for some of the assets they have. This can't produce that flexibility either.

The only thing that is beneficial is that depth map thing, I guess. That is kinda cool.

pygmalion

I had this theory at the start of this year, when I noticed you could generate good displacement maps using ControlNets. Good to see someone putting that into practice.

VincentNeemie

I just want to point out, to you people dissing this: for a person like me who had zero clue about any of this, being enticed into trying something I can get actual creative results from is so exciting. I read a few of the technical comments and they're so far past my head; it really shows how this is a specialized viewpoint that isn't general knowledge for more common people. OK, weird-ass rant over.

PuppetMasterdaath

Usually I don't find such good music with these tutorials. Cheers, mate.

MordioMusic

My jaw literally dropped. This is incredible! Thank you!

nswayze

Please ignore the salty comments. This is a game changer, especially for mobile platforms. Jaw-dropping result and a pragmatic pipeline.

kingcrimson_

This can be a great process to use for a rough starter mesh that you can then refine

dmingod

You can't just plug the color data of a normal map texture into the Normal slot of the Principled BSDF; you need to put a "Normal Map" node in between.
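As background on why that node matters: a normal map's RGB channels encode tangent-space direction vectors, not colors. Here is a minimal numpy sketch of how such a map can be derived from a height field, in the spirit of what tools like ShaderMap do (my own toy formulation, not ShaderMap's actual algorithm):

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert an HxW height field to an 8-bit tangent-space normal map.

    Finite-difference slopes become the X/Y components of the normal,
    then each vector is normalized and packed into [0, 255] RGB the
    way normal-map textures store it.
    """
    dy, dx = np.gradient(height.astype(np.float64))  # slopes per axis
    nx, ny = -dx, -dy
    nz = np.full_like(dx, 1.0 / strength)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # map [-1, 1] -> [0, 255]; a flat area encodes as ~(127, 127, 255),
    # the familiar lavender-blue of normal maps
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

flat = np.zeros((4, 4))
nm = height_to_normal_map(flat)
print(nm[0, 0])  # flat surface -> [127 127 255]
```

The "Normal Map" node performs the inverse decoding (RGB back to vectors), which is also why the image texture should be set to Non-Color data.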

games

This is good enough for some indie game companies honestly. Might really help some folks out there get some assets done faster.

XirlioTLLXHR

I think using Mirror is a nice idea, but it may not be applicable to all objects. How about using SD and a LoRA to create 2x2 or 3x3 grids of images of the same object from multiple different POVs, then connecting them together instead of using a mirror?

LeKhang

How did you get the animated face? That seems completely different from what you showed us in this demo.

salvadormarley

That actually is a pretty decent quick little workflow. Pop that out to something like ZBrush and go to town refining.
Is it really good enough on its own? For previz and posing with a quick rig, absolutely. That's pretty fast tbh, and simple.

miinyoo

For BG objects like murals on walls and ornaments this can give a nice 2.5D feel. Maybe it can also speed up design, to find form from a first idea.

referencetom

@DIGITAL GUTS, I really like this workflow. I also wanted to know: can I use this same strategy for humanoid AI characters? You are the only person I have seen use this workflow. Thanks in advance :) also subbed

shaunbrown

The more people who experiment with new technology, the more cool ideas we come up with, and the better uses we figure out for it. This particular workflow may not be usable for anything meaningful, but maybe it inspires someone to try something different, and that person inspires someone else, and so on, until really cool uses come out of this.

JamesClarkToxic

How can we generate 3D models from multiple depth maps of the same character from different angles? I have a ComfyUI workflow that produces identical characters from multiple angles, so I should be able to combine these to avoid things like mirroring and sculpting, right?
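Combining several depth maps essentially means back-projecting each one through its camera and merging the resulting points. A hedged numpy sketch under a simple pinhole-camera assumption (toy intrinsics and poses of my own invention, not taken from any ComfyUI workflow):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy, cam_to_world=np.eye(4)):
    """Lift an HxW depth map to world-space 3D points (pinhole model)."""
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # pinhole: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    x = (xs - cx) * depth / fx
    y = (ys - cy) * depth / fy
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    return (pts @ cam_to_world.T)[:, :3]  # homogeneous -> world coords

# two toy 8x8 views of the same scene, merged into one point cloud
d = np.ones((8, 8))  # constant depth of 1
front = backproject(d, fx=8, fy=8, cx=4, cy=4)
# second camera rotated 180 degrees about Y, looking back at the scene
pose = np.diag([-1.0, 1.0, -1.0, 1.0])
back = backproject(d, fx=8, fy=8, cx=4, cy=4, cam_to_world=pose)
cloud = np.concatenate([front, back])
print(cloud.shape)  # (128, 3)
```

In practice the hard part is exactly what the comment hints at: getting consistent depth and accurate relative camera poses across views, after which a surface-reconstruction step (e.g. Poisson) turns the merged cloud into a mesh.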

jamesvictor

Nice one! Got to try this! Thanks for sharing.

joseterran

Honestly I'm quite impressed; a really cool way to do a lot of kitbashing, really necessary nowadays. I guess now I have to learn how to make AI images, hehe. Cheers from Mexico!

retroeshop

I love this! Plus (because of the horror-related prompts that I've been using), I'll probably give myself nightmares 😅
Thank you for sharing ❤

WhatNRdidnext

Very cool, thanks for sharing the workflow!

Arvolve