ZealGamedev

Inferred lighting - is DSF buffer needed?


I don't have a ton of experience with inferred lighting, so correct me if I am wrong, but... the purpose of the DSF buffer seems to be that, when shading a pixel in the final material pass, you can determine whether each of the 4 surrounding 'light' values comes from the same surface/face, right? If one of the light pixels has a different 'ID', it can be discarded or weighted accordingly. Am I right so far?

Well, if that is the purpose of the DSF buffer, can't we already achieve the same thing by just comparing the depth and normal of the surrounding light values with those of the pixel being shaded? If either the depth difference or the normal dot product is beyond some threshold, then you know the light value belongs to another surface and should be ignored.
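Roughly what I have in mind, as a quick sketch (the names and thresholds here are made up):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    struct LightSample {
        float depth;   // view-space depth stored alongside the light value
        Vec3  normal;  // surface normal stored alongside the light value
    };

    // Hypothetical thresholds - these would need tuning per scene.
    const float kDepthThreshold  = 0.1f;
    const float kNormalThreshold = 0.9f; // cosine of the maximum allowed angle

    // Keep a light-buffer sample only if its depth and normal roughly agree
    // with the pixel currently being shaded.
    bool sampleBelongsToSurface(const LightSample& s, float pixelDepth, const Vec3& pixelNormal) {
        if (std::fabs(s.depth - pixelDepth) > kDepthThreshold)
            return false;                   // depth differs too much -> other surface
        if (dot(s.normal, pixelNormal) < kNormalThreshold)
            return false;                   // normals disagree -> other surface
        return true;                        // keep its bilinear weight
    }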

What am I missing here?

That is not at all the purpose of the DSF buffer.

The purpose of the DSF buffer is to let you use a lower-resolution light-buffer/G-buffer than the back-buffer. It also allows for up to 4 layers of transparency, by doing something akin to interlacing in order to render translucency.
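To give a rough idea of what the lower resolution means for the material pass: every full-res pixel has to pull 4 texels out of the smaller light buffer, and the DSF data decides how those 4 get weighted. A quick sketch (sizes and names are illustrative, not from the paper):

    #include <cmath>

    struct Texel { int x, y; float weight; };

    // Illustrative helper: for a full-resolution pixel, find the 4 surrounding
    // texels in a lower-resolution light buffer and their bilinear weights.
    // The DSF test then decides which of these 4 weights to keep.
    void lightBufferFootprint(float px, float py,   // full-res pixel centre
                              int bbW, int bbH,     // back-buffer size
                              int lbW, int lbH,     // light-buffer size (e.g. half resolution)
                              Texel out[4]) {
        float lx = (px / bbW) * lbW - 0.5f;         // position in light-buffer texel space
        float ly = (py / bbH) * lbH - 0.5f;
        int x0 = (int)std::floor(lx);
        int y0 = (int)std::floor(ly);
        float fx = lx - x0, fy = ly - y0;           // fractional parts -> bilinear weights
        out[0] = { x0,     y0,     (1 - fx) * (1 - fy) };
        out[1] = { x0 + 1, y0,     fx       * (1 - fy) };
        out[2] = { x0,     y0 + 1, (1 - fx) * fy       };
        out[3] = { x0 + 1, y0 + 1, fx       * fy       };
    }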

While it seems as though this information could be gathered via depth/normal discontinuities, you will find, in practice, that this has a high rate of error.

I strongly recommend re-reading the paper. The DSF buffer is critical to the technique.

The purpose of the ID buffer is to catch cases where the depth and normal are continuous across different material surfaces/objects.

Quote:
The purpose of the ID buffer is to catch cases where the depth and normal are continuous across different material surfaces/objects.


Yes, and what I am saying is, if you already know the depth and normal for the surrounding light values, do you really need an 'ID'? Don't you have all the information you need at your fingertips?

Quote:
While it seems as though this information could be gathered via depth/normal discontinuities, you will find, in practice, that this has a high rate of error.


Can you elaborate? It just seems like if we put some thought into it we could get acceptable results.

Quote:
Yes, and what I am saying is, if you already know the depth and normal for the surrounding light values, do you really need an 'ID'? Don't you have all the information you need at your fingertips?
Yes and no -- ID-based DSF is fast and accurate. Trying to guess based on normal/depth is slower and less accurate, but still mostly works.

Firstly, comparing object IDs is much faster than comparing depth values or normals. If you're reading depth from a non-linear buffer, or if you're using some kind of normal packing (which you should be), then paying those decode-costs 4x is best avoided.

In practice, depth-based DSF is hard to get right. You tweak your thresholds so one scene works fine, and then you get false-negatives on another scene...

If you throw out depth-based DSF, then normal-based DSF isn't enough to distinguish between objects (e.g. a flat plane floating in front of another, more distant flat plane -- same normal, different depth), so you need to use object IDs.

If you're using object IDs, then all you need the normal-based DSF for is to catch sharp edges within individual objects. However, this is hard to get right. You tweak your thresholds so one model works fine, and then you get false-negatives on another model... So, in the paper, they ended up going with normal-group IDs (each smoothing group gets a unique ID per model).

So in the end, the paper uses only object IDs and normal-group IDs for DSF, because that's faster and more accurate.

When I first implemented mine, I used normal/depth thresholds, but you get a lot of artifacts that way, so I've switched to IDs as well.
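In rough CPU-side C++ the ID test boils down to something like this (field names are just illustrative):

    struct DsfSample {
        unsigned char objectId;       // per-object ID written during the G-buffer pass
        unsigned char normalGroupId;  // per-smoothing-group ID within the object
        float         weight;         // bilinear weight before the DSF test
    };

    // A sample keeps its weight only if both IDs match the pixel being shaded.
    float dsfWeight(const DsfSample& s, unsigned char pixelObjectId, unsigned char pixelGroupId) {
        bool sameSurface = (s.objectId == pixelObjectId) && (s.normalGroupId == pixelGroupId);
        return sameSurface ? s.weight : 0.0f;
    }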

Quote:
Yes and no -- ID-based DSF is fast and accurate. Trying to guess based on normal/depth is slower and less accurate, but still mostly works.


How is it slower? You mentioned unpacking, but to simply compare normals, couldn't you just compare against the compressed versions? Plus, consider the overhead saved from not having to sample/keep track of object IDs (which have their own problems, as they mentioned in the paper, like the 256 limit).

And I still don't understand the accuracy issue. What can you do with object IDs that you CAN'T do with just comparing normals/depth?

Quote:
Original post by ZealGamedev
How is it slower? You mentioned unpacking, but to simply compare normals, couldn't you just compare against the compressed versions?
To accurately compare normals, you have to unpack them and perform a dot-product with the vertex-interpolated normal to get a scalar difference. With IDs you start with two scalar values that you're checking for equality.
Also, with many encodings you can't compare the compressed versions directly, either because the encoding isn't continuous or because it isn't normalised.
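To make the cost difference concrete, here is roughly the per-sample work each test implies. The packing shown (a simple scale/bias into [0,1]) is just one possible scheme:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Unpack a normal stored in [0,1] back to [-1,1] and renormalise, since
    // quantisation means the stored vector is no longer exactly unit length.
    static Vec3 decodeNormal(const Vec3& packed) {
        Vec3 n = { packed.x * 2 - 1, packed.y * 2 - 1, packed.z * 2 - 1 };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }

    // Normal-based test: decode + dot product + threshold, done 4 times per pixel.
    bool normalTest(const Vec3& packedSample, const Vec3& pixelNormal, float threshold) {
        Vec3 n = decodeNormal(packedSample);
        return (n.x * pixelNormal.x + n.y * pixelNormal.y + n.z * pixelNormal.z) >= threshold;
    }

    // ID-based test: a single integer comparison, done 4 times per pixel.
    bool idTest(unsigned sampleId, unsigned pixelId) {
        return sampleId == pixelId;
    }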
Quote:
Plus consider the overhead saved from not having to sample/keep track of object IDs
In mine, I use a 2-MRT G-buffer to store depth, normal, spec-power and IDs. Without IDs, this would still require 2 render targets, so the storage/bandwidth costs aren't different.
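For illustration, one way those fields could pack into two 32-bit targets (the exact channel assignment here is just an example, not a prescribed layout):

    // Render target 0: 16-bit depth + packed normal x/y (z reconstructed in the shader).
    struct GBufferRT0 {
        unsigned short depth;     // linear depth, quantised to 16 bits
        unsigned char  normalX;   // packed normal x
        unsigned char  normalY;   // packed normal y
    };

    // Render target 1: spec-power plus the two DSF IDs, with one channel spare.
    struct GBufferRT1 {
        unsigned char specPower;
        unsigned char objectId;
        unsigned char normalGroupId;
        unsigned char unused;
    };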
Quote:
And I still don't understand the accuracy issue. What can you do with object IDs that you CAN'T do with just comparing normals/depth?
With IDs you're guaranteed not to get false-negatives, but when 2 objects overlap you've got a 1/256 chance of getting a false-positive along the overlapping edge.
With depth/normal threshold-based methods, you've got a varying/unpredictable chance of both false-positives and false-negatives over every edge.

Before we go any further, we need to clearly define the purpose of the DSF buffer. It seems like the ultimate goal is to discard/weight light values that A) come from samples that vary spatially (a depth check seems to solve this), and B) come from samples with greatly varying normals (again, comparing the normals, however slow, seems to solve this).

If the purpose (sole purpose?) of an ID is to determine whether the light sample comes from the same poly face you're currently shading, can't the same be achieved by simply comparing the normals for equality (since if they are the same face, they will have the same normal)? But what about two different faces with the same normal? Well, who cares; they should be receiving the same amount of light anyway, right?

So let's assume there was some clever way to get the performance to be about the same; what problems could arise? Can you give an example?

Quote:
Original post by ZealGamedev
It seems like the ultimate goal is to discard/weight light values that A) come from samples that vary spatially (a depth check seems to solve this), and B) come from samples with greatly varying normals.
...if the light sample comes from the same poly face you're currently shading, can't the same be achieved by simply comparing the normals for equality (since if they are the same face, they will have the same normal)?
If the normal doesn't vary across the face, then you've got flat shading (non-Gouraud shading).

If depth/normal are varying too quickly across the face (e.g. at glancing angles, which are common at geometric edges, the one place you *need* accurate DSF), and your LBuffer is too low-resolution, it's possible for *none* of the 4 LBuffer samples to pass the depth/normal DSF test.

Also, if you're using normal-maps, then that adds more variation -- one solution to this is to store both the interpolated normal and the post-normal-map normal, but this increases your GBuffer size.
Quote:
So let's assume there was some clever way to get the performance to be about the same; what problems could arise? Can you give an example?
Stepping back again, we take 4 LBuffer samples (which contain normal/depth/ID information) and we compare them against the current pixel (which has the actual normal/depth/ID information).
Let's say that of the 4 LBuffer samples, 2 are from the right object and 2 are from the wrong object. With depth/normal-based DSF, if the surface is rough and at a glancing angle, it's possible for all 4 samples to fail the DSF test (i.e. 2 are false-negatives), in which case we fall back to bilinear filtering (which incorrectly includes the 2 incorrect samples).
With ID-based DSF, even though none of the samples are a depth/normal "match", we know that 2 of them did come from the right object, and are probably a much better match than the other 2 samples, which produces fewer artefacts.

In theory you could do both methods - use depth/normal as a first preference, and then, if that filter produces a bad result, fall back to the IDs for a reliable answer.
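A rough sketch of that combined filter (names and thresholds are hypothetical):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct LbSample {
        float    depth;
        Vec3     normal;
        unsigned id;        // object / normal-group ID
        Vec3     light;     // light-buffer value to be filtered
        float    weight;    // bilinear weight before any DSF test
    };

    // Try the depth/normal test first; if it rejects all 4 samples, fall back to
    // the ID test, and only if that also fails use plain bilinear filtering.
    Vec3 filterLight(const LbSample s[4], float pixelDepth, const Vec3& pixelNormal,
                     unsigned pixelId, float depthThresh, float normalThresh) {
        float w[4];
        float total = 0.0f;
        for (int i = 0; i < 4; ++i) {                // pass 1: depth/normal test
            float cosAngle = s[i].normal.x * pixelNormal.x +
                             s[i].normal.y * pixelNormal.y +
                             s[i].normal.z * pixelNormal.z;
            bool ok = std::fabs(s[i].depth - pixelDepth) < depthThresh &&
                      cosAngle > normalThresh;
            w[i] = ok ? s[i].weight : 0.0f;
            total += w[i];
        }
        if (total <= 0.0f) {                         // pass 2: ID fallback
            for (int i = 0; i < 4; ++i) {
                w[i] = (s[i].id == pixelId) ? s[i].weight : 0.0f;
                total += w[i];
            }
        }
        if (total <= 0.0f) {                         // last resort: plain bilinear
            for (int i = 0; i < 4; ++i) { w[i] = s[i].weight; total += w[i]; }
        }
        Vec3 result = { 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < 4; ++i) {                // renormalise weights and blend
            result.x += s[i].light.x * w[i] / total;
            result.y += s[i].light.y * w[i] / total;
            result.z += s[i].light.z * w[i] / total;
        }
        return result;
    }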


The best way to see this for yourself is just to go and implement both of them on a test scene with lots of geometric variation, large depth variations, normal maps, thin geometry, sharp edges, etc., and then see the good and the bad cases for each method.

Quote:
If the normal doesn't vary across the face, then you've got flat shading (non-Gouraud shading).


Ah of course, my bad.

Quote:
Let's say that of the 4 LBuffer samples, 2 are from the right object and 2 are from the wrong object.


But what makes something the "right object"? I am rereading the paper a second time now... aren't the IDs simply used to identify faces? Doesn't that cause problems with normal maps (since two points on a face might have very different normals after a normal map is applied)?

I just don't like the idea of having to encode extra stuff in my geometry. It feels like there should be a more elegant solution... but maybe there isn't.

BTW, did they ever release any source code for this? If not, could you post how you use your DSF buffer when calculating the light for a pixel in the material pass? I have an idea of what it will look like, but seeing it might make things clearer.
