Unity Shader Graph Basics (Part 8 - Scene Intersections 1)

Many shader effects rely on detecting the intersection between the mesh being rendered and other objects in the scene. In this tutorial, I break down exactly how to detect those intersections and use the technique to create a basic ambient occlusion effect in screen-space.

I'm using Unity 2022.3.0f1, although these steps should look similar in previous and subsequent Unity versions.
------------------------------------------------------------------------
Comments

Under the hood, the graphics pipeline uses 4D vectors to represent 3D points in space. This representation is called “homogeneous coordinates” or “perspective coordinates”, and we use them because it is impossible to represent a 3D translation (i.e., moving a point in space) using a 3x3 matrix. Since we want to efficiently package as many transformations as possible into a single matrix (which you can do by multiplying individual rotation matrices, scaling matrices, and any other transformation matrices together), we take our 3D point vector in Cartesian space (what you probably normally think of when you are using a coordinate system) and bolt an additional “w” component equal to 1 onto the end of the vector. This is a homogeneous coordinate. Thankfully, it is possible to represent translations using a 4x4 matrix, so we use those instead. Adding a component to the vector was necessary because you can’t apply a 4x4 matrix transformation to a 3D vector.
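The idea above can be sketched in a few lines of plain Python (no external libraries): a 4x4 matrix can encode a translation, which no 3x3 matrix applied to a raw 3D vector can, and that is exactly why the extra w = 1 component gets bolted on.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4D vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    """4x4 matrix that moves a point by (tx, ty, tz)."""
    return [
        [1, 0, 0, tx],
        [0, 1, 0, ty],
        [0, 0, 1, tz],
        [0, 0, 0, 1],
    ]

# Cartesian point (1, 2, 3) with the w = 1 component bolted on
point = [1.0, 2.0, 3.0, 1.0]
moved = mat_vec(translation(10, 0, 0), point)
print(moved)  # [11.0, 2.0, 3.0, 1.0] -> the point shifted along x
```

Because the translation lives in the matrix, it can be pre-multiplied together with rotation and scaling matrices into one combined 4x4 transform.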

In homogeneous coordinates, any two vectors that are scalar multiples of each other represent the same point – the homogeneous points (1, 2, 3, 1) and (2, 4, 6, 2) both represent the Cartesian 3D point (1, 2, 3). So, by the time we get to just before the view-to-clip space transformation, the w component of each point is still 1, since none of the preceding transformations alter it. After the view-to-clip space transformation, the w component of each point is set equal to the view-space z component. I’d post the full matrices involved here, but YouTube comments aren’t really a matrix-friendly zone! In essence, this means the clip-space w is equal to the distance between the camera and the vertex of the object being rendered. That’s what I needed in this tutorial.
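A quick sanity check of the scalar-multiple claim: dividing a homogeneous point through by its own w collapses every scalar multiple onto the same Cartesian point.

```python
def to_cartesian(h):
    """Convert a homogeneous 4D point to a Cartesian 3D point."""
    x, y, z, w = h
    return (x / w, y / w, z / w)

print(to_cartesian((1, 2, 3, 1)))  # (1.0, 2.0, 3.0)
print(to_cartesian((2, 4, 6, 2)))  # (1.0, 2.0, 3.0) -> same point
```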

And, for funsies, after this, the graphics pipeline executes the “perspective divide”, whereby your 4D vector is divided by its own w component in order to collapse every point on screen onto a virtual ‘plane’ located at z=1. This is where things get shown on screen. Basically, two points with identical (x, y) clip space values do not necessarily get placed at the same (x, y) screen positions, as they may have different clip space z values – with a perspective camera, further away objects appear smaller. After the perspective divide, all your points are in the form (x, y, 1, 1) so you can drop the z and w components and bam, there’s your 2D screen positions. It’s fascinating to me that we need to deal with 3D, 4D, and 2D just to get stuff on your screen.
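The perspective divide described above can be illustrated with made-up numbers: two points with identical clip-space (x, y) but different depths (and hence different w) land at different screen positions once divided through by w.

```python
def perspective_divide(clip):
    """Divide a clip-space point's (x, y) by its own w component."""
    x, y, z, w = clip
    return (x / w, y / w)

near = (1.0, 1.0, 2.0, 2.0)    # clip-space point, w = view-space depth 2
far  = (1.0, 1.0, 10.0, 10.0)  # same clip-space (x, y), but ten units away

print(perspective_divide(near))  # (0.5, 0.5)
print(perspective_divide(far))   # (0.1, 0.1) -> farther point sits nearer the screen centre
```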

danielilett

The goose made everything more than clear!

All jokes aside, this is highly professionally done and incredibly clear.
I had yet to find someone who explained it, instead of just telling us what to put where and what to connect to what.

Thank you so much.

sadusko

Here are steps to make it work in HDRP:
1. Replace the Screen Position node with a Position node, and set Space to View.
2. Replace the Subtract node with an Add node, because the value we're going to be using from the Position node is negative.
3. In the Split node, plug the B(1) output into the Add node's B(1) input.
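The steps above can be sketched as arithmetic (hypothetical values, and assuming – as the comment says – that the view-space z of a fragment is negative because Unity's view space looks down -z): the Subtract-to-Add swap compensates for that sign flip, so both graphs compute the same depth difference.

```python
scene_depth = 5.0    # depth of the opaque surface behind the fragment
screen_pos_w = 4.0   # clip-space w: positive distance to the fragment (URP graph)
view_pos_z = -4.0    # same distance, but negative in view space (HDRP graph)

diff_urp = scene_depth - screen_pos_w  # Subtract node: 5 - 4 = 1
diff_hdrp = scene_depth + view_pos_z   # Add node:      5 + (-4) = 1
print(diff_urp, diff_hdrp)  # both 1.0 -> same intersection measure
```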

Soulsphere

shader graph has easily gotta be the most convoluted and backwards-logic shit I've worked with in Unity so far.

SkinnyFG

I need help projecting a URP decal (or any decal) onto the surface of a transparent sphere. Would you know how to do that, by any chance?

zing

This is exactly what I need for soft particles on shader graph!!!

aleksp

The scene difference could be for the post-processing only.

tnti

Hello, I can’t find a feature to hide objects and their parts inside a cutout object. I want to create some kind of 3D cutout mask to hide walls and objects. Is it actually possible in URP?

AlexBradley

ScreenPosition.w vs Position(View).z – both work for the depth difference, but view space and clip space aren't the same in general, right? Is there a meaningful difference when using viewPosition.z for this?

fleity

I must be doing something wrong, because I can't make it work the way you're showing. The Intersection Power property does nothing, and the Occlusion Strength property darkens the entire object that is intersecting with the plane. I've watched the video several times trying to spot my mistake, but I can't find my bug.
What could be happening?

BTW, I'm running Unity 6 on a Snapdragon CPU without a dedicated graphics card.

Eclectic

I was trying this one with HDRP and it doesn't seem to detect whether it's close to or far from another surface. If I leave Occlusion Strength at 1, it becomes totally black, and the texture only shows if I use a value smaller than 1. I thought that was because the material was opaque, but after changing it to Transparent, it still doesn't work as intended. Not sure if that has to do with something in HDRP or if I did something wrong. I re-checked my graph and it really doesn't seem to have any mistakes – it's exactly like yours in the video. Any idea what could cause that? Thanks

Kinosei

Hi,
I want to make a blend transition, so I have my intersection shader working, but only with opaque objects that are visible to the camera. I want this effect to work with an object that isn't visible to the camera. How can I do that?

okanaydin