I made a better Ray-Tracing engine

Two years ago, I showed you how I created a simple ray-tracer from scratch. This is my attempt at improving my first version and showing you how I did it!

► Valuable resources:

🎵 Music from Epidemic Sound, register with my link to support the channel and get a discount:

Detailed description:
In this video, I explain how I created a ray-tracing engine in C++ using OpenGL. Ray tracing is a rendering algorithm that can generate photorealistic images with convincing shadows and reflections. The technique recently became accessible to many when NVIDIA announced its RTX graphics cards. I successfully implemented lighting, soft shadows, ray-traced global illumination, progressive rendering, reflections and more.

Chapters:
0:00 - Intro
1:23 - GPU acceleration
2:06 - Ray-tracing recap
3:10 - Direct illumination
4:37 - First result
4:42 - Soft shadows
5:20 - New result
5:39 - User interface
6:27 - Indirect illumination
8:18 - Progressive rendering
9:42 - Reflections
10:24 - Skybox
10:51 - Recursion problem
13:50 - Anti-aliasing
14:33 - Bloom
15:24 - Final results & conclusion

#Raytracing
#OpenGL
Comments

Sorry for the short break :)
I hope this video was worth the wait!
Also, thank you so much for 10k subscribers!!

EDIT: At around 1:17, I called OpenGL a graphics "library" which isn't the right word. OpenGL is an API, not a library!

NamePointer

Don’t believe it. You just saw a video online. Or used Google Street View. There’s no way you went outside.

pablovega

A quick hack for fast anti-aliasing is to cast the rays through the corners of the pixels instead of the centers. It's basically the same number of rays, but you can average the 4 corner values for each pixel and get one smoothing step for free. Adding your random offsets on top will still improve the anti-aliasing over time, as you currently have it.
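
In code, the corner trick could look something like this (a minimal, untested sketch in C++ with glm; traceRay is a stand-in for the renderer's "one ray through screen position (u, v)" entry point, which is my assumption):

    #include <vector>
    #include <glm/glm.hpp>

    // traceRay: casts one primary ray through normalized screen position (u, v).
    std::vector<glm::vec3> renderCornerAA(int width, int height,
                                          glm::vec3 (*traceRay)(float u, float v))
    {
        // One ray per pixel corner: (width+1)*(height+1) rays, barely more
        // than the width*height rays normally cast through pixel centers.
        std::vector<glm::vec3> corner((width + 1) * (height + 1));
        for (int y = 0; y <= height; ++y)
            for (int x = 0; x <= width; ++x)
                corner[y * (width + 1) + x] =
                    traceRay(float(x) / width, float(y) / height);

        // Each pixel averages its 4 surrounding corners: a 4-tap AA for free.
        std::vector<glm::vec3> image(width * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                image[y * width + x] = 0.25f *
                    (corner[ y      * (width + 1) + x    ] +
                     corner[ y      * (width + 1) + x + 1] +
                     corner[(y + 1) * (width + 1) + x    ] +
                     corner[(y + 1) * (width + 1) + x + 1]);
        return image;
    }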

mgkeeley

Your first raytracing video motivated me to implement fast ray-grid traversal in my CFD software for ultra-realistic fluid rendering. The simple stuff already brought me quite far. I'm amazed by the more complex techniques you show in this video. Thank you for sharing your knowledge!
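
For anyone curious, the core of a fast ray-grid traversal is usually an Amanatides & Woo style DDA. A minimal sketch (C++ with glm; my own naming, and it assumes the ray origin already lies inside the grid):

    #include <cmath>
    #include <glm/glm.hpp>

    // Visits every grid cell a ray passes through, in order, without
    // ever testing cells the ray misses. visit() is a placeholder.
    void traverseGrid(glm::vec3 origin, glm::vec3 dir, glm::ivec3 gridSize,
                      float cellSize, void (*visit)(glm::ivec3 cell))
    {
        const float INF = 1e30f;
        glm::ivec3 cell = glm::ivec3(glm::floor(origin / cellSize));
        glm::ivec3 step;
        glm::vec3 tMax, tDelta;
        for (int i = 0; i < 3; ++i) {
            step[i] = dir[i] >= 0.0f ? 1 : -1;
            float nextBoundary = (cell[i] + (step[i] > 0 ? 1 : 0)) * cellSize;
            bool moving = std::fabs(dir[i]) > 1e-8f;
            tMax[i]   = moving ? (nextBoundary - origin[i]) / dir[i] : INF;
            tDelta[i] = moving ? cellSize / std::fabs(dir[i]) : INF;
        }
        while (cell.x >= 0 && cell.y >= 0 && cell.z >= 0 &&
               cell.x < gridSize.x && cell.y < gridSize.y && cell.z < gridSize.z) {
            visit(cell);
            // Step along the axis whose next cell boundary is closest.
            int axis = (tMax.x < tMax.y) ? (tMax.x < tMax.z ? 0 : 2)
                                         : (tMax.y < tMax.z ? 1 : 2);
            cell[axis] += step[axis];
            tMax[axis] += tDelta[axis];
        }
    }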

ProjectPhysX

9:17 One potential solution is to implement motion vectors and move the pixels in the buffer accordingly. That way you can move the camera while keeping old samples as additional data. Note, however, that newer samples need to be weighted more heavily so that new data is generated for previously invisible parts of the screen, and that specular reflections with low roughness will look inaccurate as you move around, since they depend on the camera direction. The heavier weighting may help with that a bit, but a proper solution might need to put specular reflections in a separate buffer and handle them differently.

This is an important part of ray tracing in Teardown, the SEUS PTGI Minecraft shader, Quake II RTX and many other RTX-powered games, so it's a well-known technique. There might even be papers or tutorials out there that describe how to do it in more detail. I also know that Dennis Gustavsson, the programmer of Teardown and its custom engine, has written a blog post on using blue noise to decrease perceived noise in ray tracing, along with other posts about real-time ray tracing that could be of help.
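
A minimal sketch of the reprojection idea (C++ with glm; the history callback and the fixed blend weight are my assumptions, not what any of these engines actually do):

    #include <glm/glm.hpp>

    // history: fetches last frame's accumulated color at a given UV.
    glm::vec3 reprojectSample(glm::vec3 worldPos, glm::vec3 newSample,
                              const glm::mat4& prevViewProj,
                              glm::vec3 (*history)(glm::vec2 uv))
    {
        // Where was this world-space point on screen last frame?
        glm::vec4 clip = prevViewProj * glm::vec4(worldPos, 1.0f);
        glm::vec2 prevUV = glm::vec2(clip) / clip.w * 0.5f + 0.5f;

        // Disocclusion: the point was off-screen, so there is no history.
        if (prevUV.x < 0.0f || prevUV.x > 1.0f ||
            prevUV.y < 0.0f || prevUV.y > 1.0f)
            return newSample;

        // Weight new samples heavily so freshly revealed regions converge;
        // stable regions pay with slightly more residual noise.
        const float newWeight = 0.2f;
        return glm::mix(history(prevUV), newSample, newWeight);
    }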

OllAxe

One simple change to consider: look into different color spaces for image processing. RGB is intuitive because it's what displays use, but it's not really the best option for operations like blending values together: actual color information can get lost and coalesce into muddy grays pretty easily. If you do the math in HSV color space instead, you can blend the same way and maintain hue and saturation better, then convert back to RGB for display.
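
To illustrate, an untested C++/glm sketch of blending through HSV; the one place where "blend the same way" needs care is hue, which lives on a circle and must be interpolated the short way around:

    #include <cmath>
    #include <glm/glm.hpp>

    glm::vec3 rgb2hsv(glm::vec3 c)
    {
        float maxC = glm::max(c.r, glm::max(c.g, c.b));
        float minC = glm::min(c.r, glm::min(c.g, c.b));
        float d = maxC - minC;
        float h = 0.0f;
        if (d > 0.0f) {
            if (maxC == c.r)      h = std::fmod((c.g - c.b) / d, 6.0f);
            else if (maxC == c.g) h = (c.b - c.r) / d + 2.0f;
            else                  h = (c.r - c.g) / d + 4.0f;
            h *= 60.0f;
            if (h < 0.0f) h += 360.0f;
        }
        return glm::vec3(h, maxC > 0.0f ? d / maxC : 0.0f, maxC);
    }

    glm::vec3 hsv2rgb(glm::vec3 hsv)
    {
        float c = hsv.z * hsv.y;
        float x = c * (1.0f - std::fabs(std::fmod(hsv.x / 60.0f, 2.0f) - 1.0f));
        glm::vec3 rgb;
        if      (hsv.x <  60.0f) rgb = glm::vec3(c, x, 0.0f);
        else if (hsv.x < 120.0f) rgb = glm::vec3(x, c, 0.0f);
        else if (hsv.x < 180.0f) rgb = glm::vec3(0.0f, c, x);
        else if (hsv.x < 240.0f) rgb = glm::vec3(0.0f, x, c);
        else if (hsv.x < 300.0f) rgb = glm::vec3(x, 0.0f, c);
        else                     rgb = glm::vec3(c, 0.0f, x);
        return rgb + (hsv.z - c);   // add the value offset back per channel
    }

    glm::vec3 blendHsv(glm::vec3 a, glm::vec3 b, float t)
    {
        glm::vec3 ha = rgb2hsv(a), hb = rgb2hsv(b);
        float dh = hb.x - ha.x;              // shortest way around the hue circle
        if (dh >  180.0f) dh -= 360.0f;
        if (dh < -180.0f) dh += 360.0f;
        float h = std::fmod(ha.x + t * dh + 360.0f, 360.0f);
        return hsv2rgb(glm::vec3(h, glm::mix(ha.y, hb.y, t), glm::mix(ha.z, hb.z, t)));
    }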

KingBobXVI

When I was implementing a raymarching algorithm, a lot of my stuff looked fake, so thanks for giving me new features to implement. One thing I did use was an AABB optimisation: I went from being able to render about 15 objects in near real time to way more. If you want more frames, it's quite an easy win. You've also inspired me to implement ray tracing and try to make my own engine. Thanks.
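
For reference, the cheap reject behind that kind of AABB speed-up is usually the slab test; a rough C++/glm sketch (the classic NaN edge case, when the ray origin lies exactly on a slab of an axis the ray is parallel to, is ignored here):

    #include <glm/glm.hpp>

    // invDir = 1.0f / rayDir, precomputed once per ray (infinities are fine).
    bool hitAABB(glm::vec3 origin, glm::vec3 invDir,
                 glm::vec3 boxMin, glm::vec3 boxMax)
    {
        glm::vec3 t0 = (boxMin - origin) * invDir;   // entry/exit per axis
        glm::vec3 t1 = (boxMax - origin) * invDir;
        glm::vec3 tSmall = glm::min(t0, t1);
        glm::vec3 tBig   = glm::max(t0, t1);
        float tNear = glm::max(tSmall.x, glm::max(tSmall.y, tSmall.z));
        float tFar  = glm::min(tBig.x,   glm::min(tBig.y,   tBig.z));
        return tNear <= tFar && tFar >= 0.0f;   // slabs overlap in front of the ray
    }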

JJIsShort

Other channels may do this sort of thing, but none go quite as in-depth on the technical side as you do. The 10k subs are well deserved!

dazcarrr

Great video!! I’m excited to see your new projects! Don’t stress too much over them and try to have fun!

ThrillDaWill

If you separate view-dependent lighting (reflections) from view-independent lighting (Lambertian), you can keep the view-independent buffer while moving the camera. If you move an object, though, you'll have to reset both buffers.
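
A sketch of how that split might be organized (C++; all names invented here; note the Lambertian cache has to live somewhere camera-independent, e.g. per-surface texels, or camera motion would invalidate it too):

    #include <algorithm>
    #include <vector>
    #include <glm/glm.hpp>

    struct SplitAccumulator {
        std::vector<glm::vec3> diffuseTexels;   // view-independent, per surface texel
        std::vector<glm::vec3> specularPixels;  // view-dependent, per screen pixel
        int specularFrames = 0;

        void onCameraMoved() {
            // Reflections depend on the eye position: restart only those.
            std::fill(specularPixels.begin(), specularPixels.end(), glm::vec3(0.0f));
            specularFrames = 0;
        }
        void onSceneChanged() {
            // Moving an object invalidates both kinds of samples.
            std::fill(diffuseTexels.begin(), diffuseTexels.end(), glm::vec3(0.0f));
            onCameraMoved();
        }
    };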

WhiteDragon

I only subscribed two hours ago. I looked at the date of your last video and assumed this channel was dead. Then, coincidentally, you posted your first video in a year 10 minutes after I subscribed!

monstrositylabs

The better solution to avoid rendering from scratch when the camera moves is to save the colors you find, not in a buffer based on what appears on the screen, but in a buffer of the color associated with each part of the 3D objects the rays hit (color here being the total light that part of the surface can be considered to emit, averaged with each new calculation for that point).

The one downside of this method is that it requires a lot more memory for each object in the scene (sort of like a baked light-map texture), and that more metallic objects will take a bit longer to converge, since their lighting changes considerably with camera movement.
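
One way such a cache could be organized (a hypothetical C++/glm sketch; the hash-grid quantization is my own scheme, not necessarily what the commenter has in mind):

    #include <cstdint>
    #include <unordered_map>
    #include <glm/glm.hpp>

    struct CacheEntry { glm::vec3 sum = glm::vec3(0.0f); int count = 0; };
    std::unordered_map<uint64_t, CacheEntry> radianceCache;

    // Quantize the hit point so nearby hits share one running average.
    uint64_t cellKey(glm::vec3 p, float cellSize = 0.05f)
    {
        glm::ivec3 c = glm::ivec3(glm::floor(p / cellSize));
        return (uint64_t(uint32_t(c.x)) << 42) ^
               (uint64_t(uint32_t(c.y)) << 21) ^ uint64_t(uint32_t(c.z));
    }

    glm::vec3 accumulateAtHit(glm::vec3 hitPos, glm::vec3 newRadiance)
    {
        CacheEntry& e = radianceCache[cellKey(hitPos)];
        e.sum += newRadiance;
        ++e.count;
        return e.sum / float(e.count);   // this average survives camera movement
    }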

evannibbe

You can do an optimisation by rendering into sparse voxel space instead of screen space. All of those dot products you calculated from the lights stay the same in voxel space, so you can just cull the non-visible voxels and recalculate whichever lights are in screen space if they move or change intensity. It becomes a data-management task, which is much faster. Lumen works like this, AFAIK.
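
To illustrate the data-management side, a tiny hypothetical sketch (C++ with glm; all names invented):

    #include <vector>
    #include <glm/glm.hpp>

    struct Voxel {
        glm::vec3 cachedLight = glm::vec3(0.0f);  // lighting stored in voxel space
        bool dirty = true;                        // needs re-lighting
    };

    struct VoxelGrid {
        std::vector<Voxel> voxels;

        void onLightChanged() {
            for (Voxel& v : voxels) v.dirty = true;  // redo the dot products
        }
        // Called only for voxels that survived visibility culling.
        glm::vec3 shade(int idx, glm::vec3 (*relight)(int)) {
            Voxel& v = voxels[idx];
            if (v.dirty) { v.cachedLight = relight(idx); v.dirty = false; }
            return v.cachedLight;                    // culled voxels never touched
        }
    };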

spacechannelfiver

Super stuff - I've always wanted to create a ray tracer myself and did a bit of work on one, but I think the hardest bit to do quickly is sorting the objects and determining the nearest collision.
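
The straightforward version is just a running minimum over all objects, no sorting required; making it fast is then a matter of putting an acceleration structure (BVH, grid) on top. An untested C++/glm sketch with spheres as placeholder geometry:

    #include <cmath>
    #include <optional>
    #include <vector>
    #include <glm/glm.hpp>

    struct Sphere { glm::vec3 center; float radius; };

    // Distance t along a normalized ray, or a negative value on a miss
    // (near root only, which is enough for opaque geometry).
    float intersect(const Sphere& s, glm::vec3 ro, glm::vec3 rd)
    {
        glm::vec3 oc = ro - s.center;
        float b = glm::dot(oc, rd);
        float c = glm::dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) return -1.0f;
        float t = -b - std::sqrt(disc);
        return t > 1e-4f ? t : -1.0f;
    }

    // Keep the smallest positive t seen so far; that object is the hit.
    std::optional<int> nearestHit(const std::vector<Sphere>& scene,
                                  glm::vec3 ro, glm::vec3 rd, float& tOut)
    {
        std::optional<int> best;
        tOut = 1e30f;
        for (int i = 0; i < int(scene.size()); ++i) {
            float t = intersect(scene[i], ro, rd);
            if (t > 0.0f && t < tOut) { tOut = t; best = i; }
        }
        return best;
    }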

bovineox

If you want even more realistic material behaviour, try looking into GGX scattering. It's a microfacet distribution, meaning it models materials as a ton of microscopic mirrors oriented according to smoothness, etc. Great video btw!
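
For reference, the distribution term alone looks like this (C++/glm sketch of the standard GGX/Trowbridge-Reitz formula, using the common alpha = roughness² convention):

    #include <glm/glm.hpp>

    // D(h): how densely the microfacet "mirrors" face the halfway vector h.
    float ggxD(glm::vec3 n, glm::vec3 h, float roughness)
    {
        float a  = roughness * roughness;
        float a2 = a * a;
        float nh = glm::max(glm::dot(n, h), 0.0f);
        float d  = nh * nh * (a2 - 1.0f) + 1.0f;
        return a2 / (3.14159265f * d * d);
    }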

oskartornevall

Hello!
A fix for losing accumulation when moving the camera: instead of merging frames directly, take a velocity buffer into account. It tells you how much each pixel moved since the last frame, so you can combine pixels with their previous values even if they moved. TAA does this as well; you should look into it.
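
A minimal sketch of what such a velocity buffer stores per pixel (C++ with glm; my own naming):

    #include <glm/glm.hpp>

    glm::vec2 clipToUv(glm::vec4 clip)
    {
        return glm::vec2(clip) / clip.w * 0.5f + 0.5f;
    }

    // Screen-space motion of a world point between frames; a TAA-style
    // resolve subtracts this from a pixel's UV to find its history sample.
    glm::vec2 velocity(glm::vec3 worldPos,
                       const glm::mat4& viewProj, const glm::mat4& prevViewProj)
    {
        glm::vec2 now  = clipToUv(viewProj * glm::vec4(worldPos, 1.0f));
        glm::vec2 then = clipToUv(prevViewProj * glm::vec4(worldPos, 1.0f));
        return now - then;
    }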

caiostange

Hi, a month or so ago I finished my bachelor's thesis, which revolved around path tracing. This video explains it better than anything else I've seen!

jorgeromeu

Being able to summarize the entire ray-tracing process, down to its finest details and professional touches, in such a short video is a special ability. Thanks.

budokan

Proud to say you're the reason why I disable adblock sometimes! Such a great piece of content. Congrats.

alex-ykbh

The visuals in this video are stunning! Great job! I enjoyed every frame of it.

marexexe