OpenGL - deferred rendering


All code samples, unless explicitly stated otherwise, are licensed under the terms of the CC BY-NC 4.0 license as published by Creative Commons, either version 4 of the License, or (at your option) any later version.
Comments

Good video. Quick note: to solve the issue of rendering the light volume when the camera is inside it (7:55), the standard solution is to only render the back faces (and cull the front faces).

krytharn
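
A minimal sketch of the back-face-culling fix described in the comment above, written against a generic OpenGL 3.3+ setup; the program and mesh handle names are placeholders, not taken from the video:

#include <glad/glad.h>   // assumes a loader and an existing GL context

// Hypothetical handles: the lighting-pass shader program and the sphere mesh
// used as the point light's volume.
void drawLightVolume(GLuint lightVolumeProgram, GLuint sphereVAO, GLsizei indexCount)
{
    glUseProgram(lightVolumeProgram);

    // Draw only the back faces of the volume; they remain visible even when
    // the camera sits inside the sphere, so the light is never culled away.
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);
    glDisable(GL_DEPTH_TEST);   // the back faces sit behind the lit geometry

    glBindVertexArray(sphereVAO);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);

    // Restore the usual state for later passes.
    glCullFace(GL_BACK);
    glEnable(GL_DEPTH_TEST);
}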

@Brian Will, with all due respect, I apologize for being 3+ years too late to comment on this terrific video. Joey de Vries mentions that one of the disadvantages of deferred shading is that "Deferred shading forces us to use the same lighting algorithm for our scene's lighting..." (about halfway down the disadvantages discussion on his Deferred Shading page). However, he notes it can be alleviated by including 'more material-specific data' in the G-buffer. I'm running into exactly that tricky situation: I have a ground object I don't want affected by specular lighting, while all my teapot objects are fine with specular. I'm not sure how to approach this scenario of allowing specular only on the teapots and not on the ground object in the final pass...

CodeParticles
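
One common answer to the specular question above is to store a per-material specular strength in a spare G-buffer channel during the geometry pass and scale the specular term with it in the lighting pass. A hedged sketch with made-up attachment and uniform names (not necessarily how the video or learnopengl.com lays it out):

// Geometry-pass fragment shader: the alpha channel of the albedo attachment
// carries a per-material specular strength (0.0 for the ground, 1.0 for teapots).
const char* geometryFragmentSrc = R"(
#version 330 core
layout (location = 0) out vec4 gAlbedoSpec;
layout (location = 1) out vec3 gNormal;

in vec2 uv;
in vec3 normal;

uniform sampler2D diffuseTexture;
uniform float materialSpecular;   // set per draw call from the CPU side

void main()
{
    gAlbedoSpec.rgb = texture(diffuseTexture, uv).rgb;
    gAlbedoSpec.a   = materialSpecular;
    gNormal         = normalize(normal);
}
)";

// In the lighting pass, the stored factor simply scales the specular term:
//     float specStrength = texture(gAlbedoSpec, uv).a;
//     vec3  specular     = specStrength * spec * lightColor;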

Damn, this was a great video. I like how you show every relevant part of the code in its due time; it's very useful even as a kind of reference.

arsnakehert

Thank you for the whole OpenGL series; you've helped me a lot to understand many concepts.

ramoncf

Thank you for this video; it really helped me understand the differences between deferred rendering and forward rendering.

jeroen

Nice and detailed video! ;) You can save a lot of memory and memory bandwidth if you don't store a separate "position layer" in your G-buffer, because you can recalculate the positions from the depth buffer (which you write and use anyway).

Mcsv
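
A sketch of the depth-reconstruction trick from the comment above: the lighting pass samples the depth attachment and unprojects it back to view (or world) space instead of reading a position attachment. The uniform names below are placeholders:

// Lighting-pass GLSL snippet: rebuild the fragment position from the depth buffer.
const char* positionFromDepthSrc = R"(
uniform sampler2D gDepth;        // depth attachment of the G-buffer
uniform mat4      invProjection; // inverse projection matrix
uniform mat4      invView;       // inverse view matrix
uniform vec2      screenSize;

vec3 worldPositionFromDepth(vec2 fragCoord)
{
    vec2  uv    = fragCoord / screenSize;
    float depth = texture(gDepth, uv).r;

    // Back to normalized device coordinates (default [0,1] depth range assumed).
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

    // Unproject to view space and undo the perspective divide.
    vec4 viewPos = invProjection * ndc;
    viewPos /= viewPos.w;

    return (invView * viewPos).xyz;   // continue to world space if needed
}
)";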

Oh my god. This video is so crisp and clean-cut, raw, all-in-one information. True genius!

kampkrieger

Just a question about 5:40 (I am not an OpenGL expert): does it make a difference to use glBlitFramebuffer here (with GL_NEAREST and the same source/destination dimensions, which basically disables resizing), versus using glCopyTexSubImage2D or glCopyImageSubData? I think the primary intent of glBlitFramebuffer is to resize buffers and convert texture formats. I know for a fact that on some older hardware and drivers glBlitFramebuffer can be slower.

movaxh
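
For reference, the two paths compared above could look roughly like this (handle names are placeholders; which path is faster depends on the hardware and driver, as the comment notes):

#include <glad/glad.h>

// Copy the G-buffer's depth into the default framebuffer so forward-rendered
// objects can depth-test against the already-rendered scene.
void copyDepthWithBlit(GLuint gBufferFBO, int width, int height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gBufferFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    // Same source/destination rectangle and GL_NEAREST, so no scaling or filtering occurs.
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_DEPTH_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

// glCopyImageSubData (GL 4.3 / ARB_copy_image) copies texture to texture directly,
// but it cannot target the default framebuffer's depth buffer, so it only applies
// when both depth buffers are textures you own.
void copyDepthWithCopyImage(GLuint srcDepthTex, GLuint dstDepthTex, int width, int height)
{
    glCopyImageSubData(srcDepthTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                       dstDepthTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                       width, height, 1);
}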

Amazing video! I do have a funny hypothetical question though. Let's take a fixed camera with a 2D background system (such as RE2 or FF9). How would someone go about having a pre-rendered G-buffer and making it so that the first pass just skips the static elements and only updates the buffer where dynamic 3D objects are found?

franesustic

How would light occlusion work with this? If there is a model between the light and the target model, that light's contribution might be zero, but there is no way to find that out from this setup, I presume. So is that the job of a separate shadow pass? Thank you for the series, by the way; amazing content.

gnorts_mr_alien
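
The question above points at shadow mapping: the G-buffer alone carries no occlusion information, so deferred renderers typically render a depth map from each light in a separate shadow pass and sample it during the lighting pass. A rough sketch with placeholder names:

// Lighting-pass GLSL snippet: attenuate a light's contribution by a shadow factor
// computed from a depth map rendered from the light's point of view (a separate pass).
const char* shadowLookupSrc = R"(
uniform sampler2D shadowMap;       // depth rendered from the light's viewpoint
uniform mat4      lightSpaceMatrix; // world -> light clip space

float shadowFactor(vec3 worldPos)
{
    vec4 lightClip = lightSpaceMatrix * vec4(worldPos, 1.0);
    vec3 proj = lightClip.xyz / lightClip.w;   // perspective divide
    proj = proj * 0.5 + 0.5;                   // map to [0, 1] texture/depth range

    float closest = texture(shadowMap, proj.xy).r;
    float bias = 0.005;                        // small bias against shadow acne
    return proj.z - bias > closest ? 0.0 : 1.0; // 0 = fully shadowed
}
)";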

Does it make any sense to have another G-buffer layer that stores a value indicating which shader to use for each pixel? That way it would be possible to use multiple fragment shaders, right?

andreafasano
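
The technique asked about above is sometimes called a material ID (or shading-model ID) buffer: the geometry pass writes a small integer ID per pixel, and the lighting pass branches on it. A hedged sketch with made-up names (shadeLambert and shadeBlinnPhong stand in for whatever lighting functions already exist elsewhere in the shader):

// Lighting-pass GLSL snippet: pick a lighting model per pixel based on an ID
// written into an integer G-buffer attachment during the geometry pass.
const char* materialIdLightingSrc = R"(
uniform usampler2D gMaterialID;   // e.g. a GL_R8UI color attachment

vec3 shade(vec2 uv, vec3 albedo, vec3 normal, vec3 viewDir)
{
    uint id = texture(gMaterialID, uv).r;
    if (id == 0u)      return shadeLambert(albedo, normal);            // matte materials
    else if (id == 1u) return shadeBlinnPhong(albedo, normal, viewDir); // specular materials
    else               return albedo;                                   // unlit / emissive
}
)";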

Captions look like this.

Good video though :)

tezza

9:30
Did you just mistake something 2D for 3D?

Cheesecannon