NVIDIA Instant NeRF: NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI

When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds using neural radiance fields (NeRFs).

#GTC22 #AI #neuralradiance #neuralgraphics #rendering #deeplearning
NeRF, Deep Learning, Neural Radiance Fields, Neural Graphics, Rendering
Comments

This tech is incredible and I can already imagine people combining it with deepfake tech to erode every shred of confidence we have left in video footage

rico

We will soon be able to walk around the sets of our favorite movie scenes. This technology is absolutely astounding.

viperlimo

Exciting! Was it really just 4 photos as input? I'm curious what the size of input needs to be to create reliable results.

blenderguru

Impressive! And I thought photogrammetry was like magic... creating a scene like this with so few images and so much faster is a game changer!

mrlightwriter

I hope we can play around with this tech soon :D
This tech could be a game changer for things like VR.

Ceeed

It would be nice if they had an app for normal end users... or made it exclusive for those who purchase their graphics cards.

haoever

Hope this helps streamline 3D modeling and photogrammetry workflows even further; really cut down on data capture and cleanup.

TallSilhouette

I'd like to see the mesh without the texture to see what this truly looks like. A good texture can hide a lot of missing details in a mesh.

treborrrrr

Just eliminated the need to stitch stills from hundreds of cameras to perform a Matrix-style time-stopping movie shot.

youngyingyang

This would be incredible from a memory standpoint. We have some old houses in my family that are on the verge of being sold off/demolished. I would love to be able to use this to create some 3D renders to show my grandkids years from now how those homes looked inside.

neoasura

Imagine taking old family VHS movies with family that is no longer around and being able to see something new.

yaemish

Dude, they should figure out how to use this for mixed reality object and area target creation! That could force Vuforia and Vislab to stop charging something like $50k+ annually for their services.

grahamwebster

As a photographer myself I am intrigued; it's quite impressive. I do wonder if it just generates what it thinks surrounds each photograph. Still pretty impressive.

Libertarianach_na_h-Alba

This is going to make it very easy (and cheap) to get 3D representations/avatars of ourselves into games and the metaverse.

Steviec

This is a huge step toward popularizing VR.

GreasyFox

Promising! I would love to know how many input images were actually used. I see six tripods with potentially multiple cams.

polytrauma

This is incredible; you could build a whole VR 3D world based on Google Maps photos.

zRedPlays

So, is it next-gen photogrammetry software like the awesome Reality Capture? In RC you need to take hundreds of photos to generate that kind of scene. NeRF could be a game changer!

bikeshunters

How could it know what is in the scene in the parts where there are no photos of it?

iangmaraes

I thought for sure it was going to be Lidar arrays that let us stream real-time volumetric video viewable from any angle in VR... but this tech could mean it will simply be a regular camera array and AI that allows us to do it. Regardless, I can't wait for us to get there. I want to record family gatherings, weddings, and other events in volumetric video, then go back and walk around in VR and relive them life-sized and from every angle! Throw in volumetric audio data and you'll be able to listen to the different conversations happening simultaneously during an event simply by walking over to them. How cool would that be?

joelface