Kasucast #19 - Stable Diffusion: Worldbuilding an IP with 3D and AI ft. MVDream | Part 1

#MVdream #sdxl #ComfyUI #stablediffusion #conceptart #worldbuilding #aiart #blender #substance3d #bytedance #tiktok

This video primarily goes over MVDream, a unique text-to-3D model. It is also the first part of a series that I plan on finishing: worldbuilding an intellectual property with 3D and AI.

It focuses heavily on the theory and on setting up and using the MVDream repositories. The second half consists of using the exported meshes in various concepting workflows, with the objective being the interior design of a character's living space.

The final 3D design of the character's room isn't finished in this video, as it would take too long. I elected to break this premise up into several parts, so please look forward to them.

Practical tools:

Generating a 3D obj from a single image:

Theory/Discussions:

MY DOCKER ENVIRONMENT SETTINGS:

Timestamps:
00:00 Intro
01:21 MVDream: what is it/what does it solve?
03:38 Dataset Overview
05:00 Camera settings explanation
06:01 Multiview perspective
06:51 Multiview 2D code dive
07:44 Camera embedding
09:36 Camera utils
10:05 Setting up MVDream 2D environment
11:54 Trying out the Gradio server
15:26 Start of MVDream-threestudio
17:49 Setting up Docker environment for MVDream-threestudio
27:10 Explaining why the Gradio server for 3D is not usable
34:37 Generating NeRFs through CLI
38:25 Exporting meshes
40:20 Evaluating MVDream mesh fidelity
42:35 Second stage refinement and why I don't recommend it
44:09 Redesign from refinement = unusable
44:57 Showing some other NeRF to 3D mesh objects
47:17 Rendering out a 3D object
48:10 Using 3D renders as ControlNet guides
50:34 Worldbuilding overview (context)
52:32 Potential room designs
53:33 Potential chair designs
54:36 Generating albedo map texture in ComfyUI
56:19 Using Adobe 3D Sampler to convert albedo into PBR textures
58:52 Quick setup of converted PBR textures
1:02:00 Using same process to generate metal textures
1:03:33 Quick overview of using Cube by CSM to convert a picture to a mesh
1:05:18 Checking refined mesh from Cube
1:05:50 Closing thoughts
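
The 34:37 and 38:25 chapters generate the NeRF and export the mesh from the command line. For reference, here is a sketch of what MVDream-threestudio invocations of that kind typically look like; the config filename, prompt, and output paths are placeholders, so check the repo's README for the exact flags:

```
# Train a NeRF guided by MVDream (config name and prompt are placeholders)
python launch.py --config configs/mvdream-sd21.yaml --train --gpu 0 \
    system.prompt_processor.prompt="an antique wooden armchair"

# Export the trained NeRF as a mesh (resume path is a placeholder)
python launch.py --config outputs/<run>/configs/parsed.yaml --export --gpu 0 \
    resume=outputs/<run>/ckpts/last.ckpt system.exporter_type=mesh-exporter
```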

🎉 Social Media:

Images/processes may be fabricated and therefore not real. I am unaware of any illegal activities. Documentation will not be taken as admission of guilt.
Comments

This is the first part of a multi-part series called Worldbuilding with 3D and AI. The first half is pretty heavy on the theory, and the installation of the environment is also pretty convoluted if you're not an engineer. I go over all of it, but don't get discouraged if you can't get it working. I had to make several Docker adjustments and explore several gotchas before I could output a NeRF and 3D object.

kasukanra

Fantastic video! Excellent walkthrough of the MVDream repo and workflow, thank you. Super exciting to see the refinement and consistency in 3D gen now; greatly looking forward to the creativity that will be unlocked for everyone to create their own worlds and stories.

zzzzzzz

I feel like this tech still needs a bit to bake before we get more usable outputs from it. Maybe Gaussian splatting or something will help. Promising first steps tho!

edkenndy

Wow, great stuff! Thank you for sharing. I subscribed. At 10 minutes in you mention compatibility issues and that you used Ubuntu. Did you try Windows first and run into errors, or did you just decide to go with Ubuntu? When you use MVDream and generate a 3D object, do you get the UV map as well? Have you experimented with modifying the UV map using Stable Diffusion?

MyWhyAI

I'm looking to do something similar to this, but I want to keep it 2D.

Would that be possible with this technique?

CrazyEyezdotpng

I am trying to run this on multiple A100s and am still getting 5 or 6 it/s. How can I increase the it/s using high-end GPUs like the A100 or H100?

MadhavAllam

Ran into an issue: "The detected CUDA version (11.8) mismatches the version that was used to compile PyTorch (12.1). Please make sure to use the same CUDA versions."
To fix this I installed the latest CUDA toolkit, version 12.2.

I also ran into permission issues when running Python. I followed your Dockerfile commands, but it did not work.
Solution: su, then chown dreamer:dreamers -R
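
The CUDA error above is a version check: PyTorch extension builds compare the toolkit version that nvcc reports against the CUDA version PyTorch was compiled with. As far as I know, the build only hard-fails on a major-version mismatch (minor mismatches usually just warn), which is why moving from 11.8 to 12.2 satisfies a PyTorch built for 12.1. A minimal sketch of that comparison (the helper name is my own):

```python
def cuda_major_matches(torch_cuda: str, toolkit_cuda: str) -> bool:
    """True when two CUDA version strings share the same major version,
    e.g. torch.version.cuda vs. the 'release X.Y' line from `nvcc --version`."""
    return torch_cuda.split(".")[0] == toolkit_cuda.split(".")[0]

# The commenter's original setup: toolkit 11.8, PyTorch built for CUDA 12.1
print(cuda_major_matches("11.8", "12.1"))  # False -> the build error above
# After installing CUDA 12.2: same major version as 12.1
print(cuda_major_matches("12.2", "12.1"))  # True
```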

octane_ape