Inferred lighting - is DSF buffer needed?


I don't have a ton of experience with inferred lighting, so correct me if I am wrong, but... the purpose of the DSF buffer seems to be that, when shading a pixel in the final material pass, you can determine whether each of the 4 surrounding 'light' values comes from the same surface/face, right? If one of the light pixels has a different 'ID', it can be discarded or weighted accordingly. Am I right so far?

Well, if that is the purpose of the DSF buffer, can't we already achieve the same thing by just comparing the depth and normal of the surrounding light values with those of the pixel being shaded? If either the depth difference or the normal dot product is beyond some threshold, then you know the light value belongs to another surface and should be ignored.
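Something like this is what I have in mind (just a rough sketch; the names and thresholds are made up):

const float depthThreshold = 0.1;   // arbitrary
const float normalThreshold = 0.25; // arbitrary (cosine of the maximum allowed angle)

// 'lbDepth'/'lbNormal' belong to one low-res light-buffer sample,
// 'pixelDepth'/'pixelNormal' belong to the pixel being shaded.
bool sampleIsValid(float lbDepth, vec3 lbNormal, float pixelDepth, vec3 pixelNormal)
{
    bool depthOk  = abs(lbDepth - pixelDepth) < depthThreshold;
    bool normalOk = dot(lbNormal, pixelNormal) > normalThreshold;
    return depthOk && normalOk; // reject the light value if either test fails
}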

What am I missing here?

That is not at all the purpose of the DSF buffer.

The purpose of the DSF buffer is to allow you to use a lower-resolution light-buffer/G-buffer than the back-buffer. It also allows for up to 4 layers of transparency, by doing something akin to interlacing in order to render translucency.
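To illustrate what the lower-resolution light-buffer means for the material pass, the lookup is roughly something like this (a sketch only; the half-resolution ratio and the names are just examples):

uniform vec2 backBufferSize;   // full-resolution dimensions (assumed uniform)

// inside the material-pass fragment shader:
vec2 lbSize = backBufferSize * 0.5;             // the L-buffer is half resolution in this example
vec2 lbPos  = gl_FragCoord.xy * 0.5 - 0.5;      // continuous position in L-buffer texel space
vec2 lbBase = floor(lbPos);                     // top-left of the 4 surrounding L-buffer texels
vec2 subPos = lbPos - lbBase;                   // sub-texel position, used later for bilinear weights
// the 4 texel centres to sample from the L-buffer; the DSF data tells us which of
// them actually belong to the surface being shaded:
vec2 uv00 = (lbBase + vec2(0.5, 0.5)) / lbSize;
vec2 uv10 = (lbBase + vec2(1.5, 0.5)) / lbSize;
vec2 uv01 = (lbBase + vec2(0.5, 1.5)) / lbSize;
vec2 uv11 = (lbBase + vec2(1.5, 1.5)) / lbSize;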

While it seems as though this information could be gathered via depth/normal discontinuities, you will find, in practice, that this has a high rate of error.

I strongly recommend re-reading the paper. The DSF buffer is critical to the technique.

Quote:
The purpose of the ID buffer is to catch cases where the depth and normal are continuous across different material surfaces/objects.


Yes, and what I am saying is: if you already know the depth and normal for the surrounding light values, do you really need an 'ID'? Don't you have all the information you need at your fingertips?

Quote:
While it seems as though this information could be gathered via depth/normal discontinuities, you will find, in practice, that this has a high rate of error.


Can you elaborate? It just seems like if we put some thought into it we could get acceptable results.

Quote:
Yes, and what I am saying is: if you already know the depth and normal for the surrounding light values, do you really need an 'ID'? Don't you have all the information you need at your fingertips?
Yes and no -- ID-based DSF is fast and accurate. Trying to guess based on normal/depth is slower and less accurate, but still mostly works.

Firstly, comparing object IDs is much faster than comparing depth values or normals. If you're reading depth from a non-linear buffer, or if you're using some kind of normal packing (which you should be), then paying those decode-costs 4x is best avoided.

In practice, depth-based DSF is hard to get right. You tweak your thresholds so one scene works fine, and then you get false-negatives on another scene...

If you throw out depth-based DSF, then normal-based DSF isn't enough to distinguish between objects (e.g. a distant flat plane in front of a close flat plane), so you need to use object IDs.

If you're using object IDs, then all you need normal-based DSF for is to catch sharp edges within individual objects. However, this is hard to get right. You tweak your thresholds so one model works fine, and then you get false-negatives on another model... So, in the paper, they ended up going with normal-group IDs (each smoothing group gets a unique ID per model).

So in the end, the paper uses only object IDs and normal-group IDs for DSF, because it's faster and more accurate.

When I first implemented mine, I used normal/depth thresholds, but you get a lot of artifacts that way, so I've switched to IDs as well.

Quote:
Yes and no -- ID-based DSF is fast and accurate. Trying to guess based on normal/depth is slower and less accurate, but still mostly works.


How is it slower? You mentioned unpacking, but to simply compare normals, couldn't you just compare the compressed versions directly? Plus, consider the overhead saved by not having to sample/keep track of object IDs (which have their own problems, like the 256-ID limit mentioned in the paper).

And I still don't understand the accuracy issue. What can you do with object IDs that you CAN'T do by just comparing normals/depth?

Quote:
Original post by ZealGamedev
How is it slower? You mentioned unpacking, but to simply compare normals, couldn't you just compare the compressed versions directly?
To accurately compare normals, you have to unpack them and perform a dot-product with the vertex-interpolated normal to get a scalar difference. With IDs you start with two scalar values that you're checking for equality.
Also, with many encodings you can't compare the compressed version either because it's not continuous, or it's not normalised.
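For example, with a spheremap-style encoding (just one common choice, used here for illustration; the variable names are placeholders), the per-sample cost looks roughly like:

// Decoding a spheremap-encoded normal (one common packing, used only as an example):
vec3 decodeNormal(vec2 enc)
{
    vec2 fenc = enc * 4.0 - 2.0;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0 - f / 4.0);
    return vec3(fenc * g, 1.0 - f * 0.5);
}

// Normal-based test, per L-buffer sample: decode + dot product + threshold...
float normalMatch = step(0.25, dot(pixelNormal, decodeNormal(lbEncodedNormal)));
// ...vs the ID-based test: one subtraction and a compare.
float idMatch = step(abs(pixelId - lbId), 0.5);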
Quote:
Plus consider the overhead saved from not having to sample/keep track of object IDs
In mine, I use a 2-MRT G-buffer to store depth, normal, spec-power and IDs. Without IDs, this would still require 2 render targets, so the storage/bandwidth costs aren't different.
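For illustration, a G-buffer write along those lines might look something like this (the channel assignments below are only an example layout, not necessarily the one I use):

// G-buffer pass, 2 MRTs (example layout only):
//   RT0: encoded normal (xy), specular power, normal-group ID
//   RT1: view-space depth, object ID  -- assumes a float render target
uniform float objectId;        // set per draw call by the application (0..255)
uniform float specPower;
varying vec3  viewNormal;      // interpolated vertex normal, view space
varying vec3  viewPos;         // view-space position
varying float normalGroupId;   // smoothing-group ID from the vertex stream (0..255)

vec2 encodeNormal(vec3 n)      // spheremap encode, pairing with the decode sketch above
{
    float p = sqrt(n.z * 8.0 + 8.0);
    return n.xy / p + vec2(0.5);
}

void main()
{
    gl_FragData[0] = vec4(encodeNormal(normalize(viewNormal)),
                          specPower / 255.0, normalGroupId / 255.0);
    gl_FragData[1] = vec4(viewPos.z, objectId / 255.0, 0.0, 0.0);
}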
Quote:
And I still dont understand the accuracy issue. What can you do with object ids that you CANT do with just comparing normals/depth?
With IDs you're guaranteed not to get false-negatives, but when 2 objects overlap you've got a 1/256 chance of getting a false-positive along the overlapping edge.
With depth/normal threshold-based methods, you've got a varying/unpredictable chance of both false-positives and false-negatives over every edge.

Before we go any further, we need to clearly define the purpose of the DSF buffer. It seems like the ultimate goal is to discard/weight light values that A.) Come from samples that vary spatially (seems like a depth check solves this), and B.) Come from samples with greatly varying normals (again, seems like comparing the normals, regardless of how slow, solves this).

If the purpose (sole purpose?) of an ID is to determine whether the light sample comes from the same poly face you're currently shading, can't the same be achieved by simply comparing the normals for equality (since if they are the same face, they will have the same normal)? But what about two different faces with the same normal? Well, who cares? They should be receiving the same amount of light anyway, right?

So let's assume there was some clever way to get the performance about the same: what problems could arise? Can you give an example?

Quote:
Original post by ZealGamedev
It seems like the ultimate goal is to discard/weight light values that A.) Come from samples that vary spatially (seems like a depth check solves this), and B.) Come from samples with greatly varying normals.
...if the light sample comes from the same poly face you're currently shading, can't the same be achieved by simply comparing the normals for equality (since if they are the same face, they will have the same normal)?
If the normal doesn't vary across the face, then you've got flat shading (non-Gouraud shading).

If depth/normal are varying too quickly across the face (e.g. on glancing angles, which are common at geometric edges, which is the one place you *need* accurate DSF), and your LBuffer is too low resolution, it's possible for *none* of the 4 LBuffer samples to pass the depth/normal DSF test.

Also, if you're using normal-maps, then that adds more variation -- one solution to this is to store both the interpolated normal and the post-normal-map normal, but this increases your GBuffer size.
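A rough sketch of that option (the separate 'geometric normal' channel and the names are assumptions):

// DSF test done against the interpolated (geometric) normal, so normal-map detail
// doesn't cause false-negatives; shading itself still uses the bumped normal.
float acut = 0.25; // cosine threshold, same idea as in the code later in the thread
float normalMatch = step(acut, dot(pixelGeomNormal, lbGeomNormal));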
Quote:
So let's assume there was some clever way to get the performance about the same: what problems could arise? Can you give an example?
Stepping back again, we take 4 LBuffer samples (which contain normal/depth/ID information) and we compare them against the current pixel (which has the actual normal/depth/ID information).
Let's say that of the 4 LBuffer samples, 2 are from the right object and 2 are from the wrong object. With depth/normal-based DSF, if the surface is rough and at a glancing angle, it's possible for all 4 samples to fail the DSF test (i.e. 2 are false-negatives), in which case we fall back to bilinear filtering (which incorrectly includes the 2 incorrect samples).
With ID-based DSF, even though none of the samples are a depth/normal "match", we know that 2 of them did come from the right object and are probably a much better match than the other 2 samples, which produces fewer artefacts.

In theory you could do both methods - use depth/normal as a first preference, and then, if that filter produces a bad result, fall back to the IDs for a reliable answer.
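A rough sketch of that hybrid (untested; 'depthNormalDsf' is a made-up helper standing in for the depth/normal tests in the code I post further down):

// Hybrid DSF: prefer the depth/normal tests, fall back to IDs, then to plain bilinear.
vec4 dsf = depthNormalDsf();                              // 0/1 per L-buffer sample
if (dot(dsf, vec4(1.0)) == 0.0)                           // all 4 failed depth/normal
    dsf = step(abs(curNodeId - lbObjectIds), vec4(0.5));  // keep same-object samples
if (dot(dsf, vec4(1.0)) == 0.0)                           // still no usable sample
    dsf = vec4(1.0);                                      // last resort: bilinear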


The best way to see this for yourself is just to go and implement both of them on a test scene with lots of geometric variation, large depth variations, normal maps, thin geometry, sharp edges, etc., and then see the good and bad cases for each method.

Quote:
If the normal doesn't vary across the face, then you've got flat shading (non-Gouraud shading).


Ah of course, my bad.

Quote:
Let's say that of the 4 LBuffer samples, 2 are from the right object and 2 are from the wrong object.


But what makes something the "right object"? I am rereading the paper a second time now... aren't the IDs simply used to identify faces? Doesn't that cause problems with normal maps (since two points on a face might have very different normals after a normal map is applied)?

I just don't like the idea of having to encode extra stuff in my geometry. It feels like there should be a more elegant solution... but maybe there isn't.

*BTW, did they ever release any source code for this? If not, could you post how you use your DSF buffer when calculating the light for a pixel in the material pass? I have an idea of what it will look like, but seeing it might make things clearer.

Another major problem I just thought of: I am currently using volumetric rendering techniques for several things (grass, for one). I don't see how it would be possible to adapt the per-vertex 'ID' approach to something like that. Just one more reason why I am so interested in exploring alternative options...

Quote:
Another major problem I just thought of: I am currently using volumetric rendering techniques for several things (grass, for one). I don't see how it would be possible to adapt the per-vertex 'ID' approach to something like that.
The per-vertex IDs are the normal-group IDs, which you use instead of normal-threshold testing.
If you can't supply this data, then you can compare normals like you want to ;)

The other ID channel is supplied per-object, so it goes in a shader uniform, not in the vertex stream.
[edit]for per-object IDs, I just increment a counter with each 'DrawModel' call (resetting to 0 when it hits 256), and put the value in a shader uniform.[/edit]
Quote:
Original post by ZealGamedev
Quote:
Let's say that of the 4 LBuffer samples, 2 are from the right object and 2 are from the wrong object.
But what makes something the "right object"? I am rereading the paper a second time now... aren't the IDs simply used to identify faces?
There are two ID channels: one identifies entire objects, the other identifies smoothing groups (i.e. sections of an object that have sharp edges between them).

Instead of 'right'/'wrong', substitute 'same'/'different'. When you've got 4 lighting samples, and none of them are passing the depth/normal test, then samples that came from an entirely different object are likely to be less representative.
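In shader terms, the combined test is something like this (just an illustration, not lifted from the code further down; the names are placeholders):

// A sample only counts as the 'same' surface if both IDs match the current pixel.
float sameObject  = step(abs(pixelObjectId - lbObjectId), 0.5);
float sameGroup   = step(abs(pixelGroupId  - lbGroupId),  0.5);
float sameSurface = sameObject * sameGroup;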

As an example of where IDs can help over normal/depth comparisons --
one particular headache for me was a character with a brimmed hat. When viewed front on, the brim of the hat took up very few pixels. With the low-res LBuffer, sometimes the brim wouldn't get any lighting samples at all!
When reconstructing lighting for a pixel on the top of the brim, I'd get 2 light values from the top of the hat, just above the brim, and 2 light values from the guy's forehead (in shadow). All 4 of these samples fail the normal/depth comparison, but 2 of them are from the top of the hat (in sunlight - a decent approximation) and 2 are from the forehead underneath the brim (in shadow - a bad approximation).
If I use IDs to say that the top of the hat is a different 'normal group' to the guy's head, then the DSF filter will reject the 2 shadowed pixels while keeping the 2 'hat' pixels, which looks good.
With the normal/depth-based DSF, the filter would fail and fall back to bilinear, which caused horrible flickering between correct/too-dark/too-light on the thin brim area.
Quote:
I just don't like the idea of having to encode extra stuff in my geometry. It feels like there should be a more elegant solution... but maybe there isn't.
Yeah, it's yucky having to put the smoothing-group IDs in there, but it's only one more byte per vertex. Plus, as you've been saying all along, there are alternatives such as comparing the normals ;) You can use the IDs on objects that supply them, and just compare normals on other objects.

What I recommend is just going with normal/depth threshold based DSF to begin with -- once you start noticing the lighting artifacts around edges, or have trouble tweaking your threshold values, then give the ID based DSF a go and see if it helps (it did for me) ;)

[Edited by - Hodgman on September 3, 2010 9:42:47 AM]

My GLSL DSF code (not optimised) is something like:
	//assumed inputs at this point:
//  subPos      - sub-texel position of this pixel within the low-res LBuffer (vec2)
//  viewPos     - view-space position of the pixel being shaded
//  normal      - normal of the pixel being shaded
//  curNodeId   - ID of the pixel being shaded
//  lighting0-3 - the 4 surrounding LBuffer samples

//bilinear filtering weights
vec4 weights;
vec2 invSubPos = 1.0 - subPos;
weights.x = invSubPos.x * invSubPos.y;
weights.y = invSubPos.x * subPos.y;
weights.z = subPos.x * invSubPos.y;
weights.w = subPos.x * subPos.y;

vec4 dsf = vec4(1.0, 1.0, 1.0, 1.0);
#ifdef DO_ID_DSF
vec4 lbObjectIds = /*object ids of the 4 LBuffer samples*/;
//a sample passes if its ID matches the current pixel's ID
dsf = step( abs(curNodeId - lbObjectIds), vec4(0.5) );
#endif

#ifdef DO_DEPTH_DSF
float cut = 0.1;//depth compare threshold
float depth = viewPos.z;
vec4 lbDepths = /*depth values of the 4 LBuffer samples*/;
//a sample passes if its depth is within the threshold of the current pixel's depth
dsf *= 1.0 - step( vec4(cut), abs( depth - lbDepths ) );
#endif

#ifdef DO_NORMAL_DSF
/* n0-3 are the normals of the 4 LBuffer samples */
float acut = 0.25;//angular threshold (cosine of the maximum allowed angle)
vec4 normalDiff = vec4( dot( normal, n0 ), dot( normal, n1 ), dot( normal, n2 ), dot( normal, n3 ) );
dsf *= step( vec4(acut), normalDiff );
#endif

//failure case - none of the 4 samples is a match, so fall back to plain bilinear filtering
float dsfTotal = dot( dsf, vec4(1.0) );
if( dsfTotal == 0.0 )
	dsf = vec4(1.0);

//apply the dsf
weights *= dsf;

//re-normalize the weights so they sum to 1
float total = dot( weights, vec4(1.0) );
weights = weights / total;

//blend the 4 LBuffer samples
vec4 lighting = lighting0 * weights.x +
                lighting1 * weights.y +
                lighting2 * weights.z +
                lighting3 * weights.w;

All of this is a lot of math to be doing per-pixel (when compared with light-pre-pass or deferred rendering). This does make the material pass in inferred quite expensive -- you only come out ahead performance wise because the lighting-pass is so much cheaper at low-res.

Quote:
If you can't supply this data, then you can compare normals like you want to ;)


For my volume rendering, I don't see how I could supply such data, whether I wanted to or not. However, you are saying simply comparing normals will lead to major artifacts, so am I just screwed?

But your brimmed hat example was pretty good, although it really highlights my concerns with this technique in general. I HATE how you have to worry about special cases like that. And I assume that each model is different, requiring an artist to manually specify the different normal groups... I just don't see how you could write an algorithm that automates this process... it seems like it requires an artist's touch/attention...

I am interested in deferred/inferred lighting techniques because of the ELEGANCE, and stuff like that just ruins everything for me.

Quote:
All of this is a lot of math to be doing per-pixel (when compared with light-pre-pass or deferred rendering). This does make the material pass in inferred quite expensive -- you only come out ahead performance wise because the lighting-pass is so much cheaper at low-res.


Hmm, this makes me wonder: if the prime benefit of inferred lighting is really just speeding up the lighting pass, maybe something like light pre-pass (to handle multiple lights) + reverse reprojection caching (to speed up the lighting phase) would be a better overall solution...

Although the other big reason I was attracted to inferred lighting is how it unifies the pipeline for opaque/transparent objects...

Quote:
Original post by ZealGamedev
However, you are saying simply comparing normals will lead to major artifacts, so am I just screwed?
There are two main types of artifacts that come from your DSF tests being inaccurate -- one is jagged lighting along an edge with similar lighting conditions (you notice the lighting is blocky/low-resolution); the other is jagged lighting along an edge with different lighting conditions (unsightly bright/dark pixels).

Most errors in normal comparison will lead to the first type of artifact, which I wouldn't call major. The exception is major changes, like the brim of a hat.
Blades of grass probably don't even need the normal-comparison whatsoever (just object ID and/or depth would probably do).
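With the code I posted above, that just means building with DO_ID_DSF and/or DO_DEPTH_DSF defined and leaving DO_NORMAL_DSF out -- roughly:

//grass: skip the normal test entirely, keep the ID and depth tests
vec4 dsf = step( abs(curNodeId - lbObjectIds), vec4(0.5) );
dsf *= 1.0 - step( vec4(cut), abs( depth - lbDepths ) );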
Quote:
I HATE how you have to worry about special cases like that. And I assume that each model is different, requiring an artist to manually specify the different normal groups... I just don't see how you could write an algorithm that automates this process... it seems like it requires an artist's touch/attention...
It has always required an artist's touch ;) -- artists have always used smoothing groups within modeling tools to identify hard edges. All you have to do is export that existing data into the vertex stream.
Quote:
Hmm, this makes me wonder: if the prime benefit of inferred lighting is really just speeding up the lighting pass, maybe something like light pre-pass (to handle multiple lights) + reverse reprojection caching (to speed up the lighting phase) would be a better overall solution...

Although the other big reason I was attracted to inferred lighting is how it unifies the pipeline for opaque/transparent objects...
Yeah, there are 2 benefits: transparency support, and mixed-resolution rendering (i.e. fast lighting).

Actually, if you've got a working deferred renderer, it's a small amount of work to convert it into a light-pre-pass renderer... and once you've got an LPP renderer, the only difference between it and an inferred one is the addition of the DSF filter. In fact, if you've got an inferred renderer and you run the lighting pass at 100% resolution (and comment out the DSF code), then you're back to LPP.

So, if you haven't built one already, I'd recommend tackling an LPP renderer first, and then adding on the inferred functionality if you're interested in the transparency support or the low-res lighting. Or you could try other improvements like the re-projection cache :)

