3D Gaussian Splatting for Real-Time Radiance Field Rendering

SIGGRAPH 2023
(ACM Transactions on Graphics)
----------------------------------------------------

Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates.

We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (≥ 100 fps) novel-view synthesis at 1080p resolution.

First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space. Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene. Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows real-time rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
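To make the second and third elements concrete, here is a minimal NumPy sketch (not the authors' CUDA rasterizer) of the two formulas behind them: the anisotropic covariance Sigma = R S S^T R^T optimized per Gaussian, and the EWA-style projection Sigma' = J W Sigma W^T J^T that turns it into a 2D screen-space splat. The function names and pinhole-camera parameters are illustrative, and the world-to-camera rotation W is folded in by assuming the covariance is already expressed in camera coordinates.

import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance_3d(quat, scales):
    """Sigma = R S S^T R^T, with per-axis standard deviations as 'scales'.
    Optimizing quat and scales directly keeps Sigma positive semi-definite."""
    M = quat_to_rot(quat) @ np.diag(scales)
    return M @ M.T

def project_covariance(sigma_cam, mean_cam, fx, fy):
    """Project a 3D covariance (camera frame) to a 2x2 screen-space splat
    using the Jacobian of the pinhole projection at the Gaussian center."""
    x, y, z = mean_cam
    J = np.array([[fx / z, 0.0,    -fx * x / z**2],
                  [0.0,    fy / z, -fy * y / z**2]])
    return J @ sigma_cam @ J.T

# Example: an elongated Gaussian 3 units in front of the camera.
sigma = covariance_3d(np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.5, 0.1, 0.1]))
print(project_covariance(sigma, np.array([0.0, 0.0, 3.0]), fx=1000.0, fy=1000.0))

In the paper this per-Gaussian math feeds a tile-based rasterizer that sorts splats by depth and alpha-blends them; the sketch covers only the projection step.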
Comments

Wtf this is like pure magic. Amazing work, can’t wait to see it in action!!

jajajinks

WOW great job, this is very impressive. I don't believe what my eyes see....

scopehitmoneyundrpglitches

I don't understand any of the technical terms, but that looks like dark magic to me.
This is, like, good enough to use in high-end films now.

ShaharHarshuv

The future will be representation. I wonder if there are enough viewing angles on smartphones for real-time representation... or maybe a sort of setup with multiple cameras? Or maybe the LiDAR on an iPhone has enough depth for representation?

Instant_Nerf

Awesome! I want to see those scenes in VR! 🥽

zodchiyd

every damn day we get closer to a hyper-realistic, optimized VR game

saltmuffinLGDPS

Wow, awesome work, it looks stunning! I would love to do a Vulkan implementation.

chucktrier

This is amazing! Since it's able to render orders of magnitude faster, is it also able to capture moving objects in the camera view?

tehwayniac

How are non-Lambertian (i.e. viewpoint-dependent) effects, like Fresnel, specular, etc., achieved with this approach?
It appears as though the technique struggles to represent glass (e.g. the reflections and transparency of the windscreen shown on the project page).

I suppose one approach may be to optimize spherical-harmonics coefficients which are later pruned after training, though this may result in really bad ringing.

Or you could have a "cone of influence" (cone orientation quaternion, anisotropic cone FOV) that defines the opacity of a given Gaussian depending on the viewing angle. Unfortunately, this would likely massively increase the number of Gaussians you'd need for certain surfaces, and you may need to know how many to allocate for these surfaces ahead of time for initialization.

Third, you could have a very sparse voxel grid of learned functions that converts the viewing angle from the Gaussian center, and the Gaussian's position within the voxel cell, to RGBA. This may be the fastest, as you'd only need to evaluate it once per Gaussian per frame, not per pixel.

WhiteDragon
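For context on the question above: the paper takes essentially the first route suggested. Each Gaussian stores spherical-harmonics coefficients (up to degree 3) for its color, optimized jointly with its geometry, which is how specular and other view-dependent effects are captured. Below is a minimal NumPy sketch of degree-1 SH color evaluation; the per-channel coefficient layout and the 0.5 offset mirror common implementations and are assumptions, not the paper's exact code.

import numpy as np

SH_C0 = 0.28209479177387814   # sqrt(1 / (4*pi)), degree-0 basis factor
SH_C1 = 0.4886025119029199    # sqrt(3 / (4*pi)), degree-1 basis factor

def sh_color(coeffs, view_dir):
    """coeffs: (4, 3) array, one RGB triple per SH basis function.
    view_dir: direction from the camera to the Gaussian center."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return np.clip(basis @ coeffs + 0.5, 0.0, 1.0)  # offset keeps DC near mid-gray

# The same Gaussian answers with different colors from different directions.
coeffs = np.array([[0.3, 0.1, 0.0], [0.2, 0.0, 0.1], [0.2, 0.0, 0.1], [0.2, 0.0, 0.1]])
print(sh_color(coeffs, np.array([0.0, 0.0, 1.0])))
print(sh_color(coeffs, np.array([1.0, 0.0, 0.0])))

Because color is evaluated once per Gaussian per frame rather than per pixel, this stays cheap, which matches the intuition in the comment's third option.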

Every day we get closer to the holodeck

theishiopian

Finally something that is easy to interpret and to work with. No idea how they are able to render novel views that fast. If every scene has >100k splats, I would pre-render them into billboard sprites, but how come the reflections are so good?

naninano

Just imagine the future: instant 3D models from a mobile phone camera!

greg.skvortsov

Can we generate the point cloud instead of capturing it from real life? I'm thinking of game development use cases.


What an incredible achievement, how do I get my hands on this? :o

MonsterJuiced

Waiting for a tutorial on how to set it up 😍

FillypeFarias

Will this be available for public/professional use soon?

TedHolmwood

1. Wouldn't it be more accurate to compare 3DGS to NeRF when both are either initialized at random or both with SfM points?
2. Since this method is based on rasterization and not ray tracing, how can it do such a good job at modeling the sun reflecting off a shiny object like the countertop only at certain angles?

TESRG

Waiting for this rendering tech to be available in Blender, Unreal and Unity 😊

gridvid

I wonder if you could, say, train an AI on Gaussian splats of animals generated from DNA sequence inputs, and whether you could get it to output mammoths.

felinecatgirl

I was here before it went mainstream. It's awesome!

Daexx