Light pre-pass problem

4 comments, last by MJP 12 years, 3 months ago
Okay, so I've set up a light pre-pass render system once before, but my new one is slightly different.
I'm trying to implement it without using a normal buffer. In theory this should work well for the toon-shaded game I'm making, but I've run into a problem.

When I render a point light (as a sphere) in the lighting pass, I sample the depth map and compare the sphere's depth against it, clipping any pixels where the scene is closer to the camera than the light. Sadly, this only works for one side of the point light.

[font="'Segoe UI"][size="2"][color="#4a4a4a"]BTW: I'm using XNA, but that's irrelevant to this problem... [/font]



VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    // Depth of the light volume's surface, so the pixel shader can compare it
    // against the scene depth sampled from the depth map.
    output.Depth = 1 - (viewPosition.z / viewPosition.w);

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input, float2 screenSpace : VPOS) : COLOR0
{
    // Convert the pixel coordinate (VPOS) to 0..1 texture coordinates.
    screenSpace /= float2(1280, 720);

    // Scene depth stored in the depth map.
    float depth = tex2D(Depth, screenSpace);

    // Discard pixels where the scene is closer to the camera than the sphere's front face.
    clip(depth < input.Depth ? -1 : 1);

    return LightColor;
}


When I render with this code, the clip function works as expected, but I don't know how to stop the sphere from lighting parts of the scene that are behind it, beyond its reach.

Here's a pic showing that the depth behind the sphere isn't being taken into account:
The farthest cube shouldn't be receiving any light, and the plane shouldn't be shaded that much toward the rear.

Error1.png

Here's another picture showing the problem a bit more visibly. Top is the final image, bottom is the light buffer:

Error4.png


I think I'm just missing the last step for determining the influence the light has, but I just can't figure it out!
Any help would be greatly appreciated. :)
Your problem is that your depth test only goes one way: you're rendering the front faces of your sphere and testing whether those faces are closer than the depth in your depth buffer. To get the results you want, you would also need to do the reverse depth test on the back faces of the sphere. However, you won't know the depth of the back faces when rendering the front faces, or vice versa.

Two solutions come to mind:

  1. Use the depth value to reconstruct position, and then determine the distance from that position to the sphere's center. You can then clip out any pixels that have a distance > the sphere radius, or even fade out the light contribution as surfaces get further from the light center (which is how real light attenuation works).
  2. Render the back faces first with color writes off and stencil writes on, and clip out any pixels where the depth buffer value is greater than the pixel depth. Then turn color writes on, stencil writes off, and stencil testing on, and render the front faces the way you do right now. This will give you the results you were expecting (there's a rough render-state sketch of this after the list).
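In case it helps, here's a very rough sketch of the render-state side of option 2 in XNA 4.0. It's only an illustration, not working code from this thread: the class, the DrawSphereVolume helper, and the stencil clear are all assumed, the winding is assumed to match XNA's defaults, and the back-face pass would still need its own pixel shader with the depth comparison reversed from the one you posted.

// Illustrative only -- assumes XNA 4.0.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class PointLightStencilSketch
{
    // Pass 1: back faces, color writes off, mark the stencil wherever the shader's clip() survives.
    static readonly DepthStencilState MarkStencil = new DepthStencilState
    {
        DepthBufferEnable = false,        // the depth comparison is done in the pixel shader via clip()
        DepthBufferWriteEnable = false,
        StencilEnable = true,
        StencilFunction = CompareFunction.Always,
        StencilPass = StencilOperation.Replace,
        ReferenceStencil = 1,
    };

    static readonly BlendState NoColorWrites = new BlendState
    {
        ColorWriteChannels = ColorWriteChannels.None,
    };

    // Pass 2: front faces, color writes on, but only where pass 1 marked the stencil.
    static readonly DepthStencilState TestStencil = new DepthStencilState
    {
        DepthBufferEnable = false,
        DepthBufferWriteEnable = false,
        StencilEnable = true,
        StencilFunction = CompareFunction.Equal,
        StencilPass = StencilOperation.Keep,
        ReferenceStencil = 1,
    };

    void DrawPointLight(GraphicsDevice device)
    {
        // Clear the stencil between lights, then draw the back faces with a pixel shader
        // that clips where the scene depth is farther than the back face's depth.
        device.Clear(ClearOptions.Stencil, Color.Black, 1.0f, 0);
        device.RasterizerState = RasterizerState.CullClockwise;        // culls the front faces
        device.DepthStencilState = MarkStencil;
        device.BlendState = NoColorWrites;
        DrawSphereVolume();                                            // hypothetical helper

        // Draw the front faces with the existing clip shader, additively blended into the light buffer.
        device.RasterizerState = RasterizerState.CullCounterClockwise;
        device.DepthStencilState = TestStencil;
        device.BlendState = BlendState.Additive;
        DrawSphereVolume();
    }

    void DrawSphereVolume() { /* issue the sphere draw call with the light-volume effect */ }
}

The stencil then restricts the front-face pass to pixels whose scene surface sits between the two faces of the sphere, which is the intersection you're after.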
I think I'll try solution 1, but despite my efforts I've yet to find an example of calculating world position that works with my depth buffer...

output.Depth = 1 - (viewPosition.z / viewPosition.w);

I like the way I'm calculating Depth right now, as it returns a linear value from the camera. Is there any way I can calculate the world position using this type of depth?
z/w is actually very non-linear, but you can still use it to reconstruct position. Something like this should work:

// Map the 0..1 screen coordinate back to -1..1 clip space (flipping Y),
// and undo the "1 -" applied when the depth was written.
float x = screenSpace.x * 2 - 1;
float y = (1 - screenSpace.y) * 2 - 1;
float z = 1.0f - depth;
float4 projectedPos = float4(x, y, z, 1.0f);

// Transform back to world space and undo the perspective divide.
float4 worldPosition = mul(projectedPos, InvViewProjection);
worldPosition.xyz /= worldPosition.w;


Where "InvViewProjection" is the inverse of View * Projection.
Thanks, I got it!

Here's the code if anyone else runs into this problem:

Depth is now: Output.Position.z / Output.Position.w


// Convert the pixel coordinate (VPOS) to 0..1 texture coordinates.
screenSpace /= float2(1279.5, 719.5);
// Light-volume depth at this pixel (z/w from the vertex shader) vs. the scene depth from the depth map.
float lightDepth = input.Depth.x / input.Depth.y;
float sceneDepth = tex2D(Depth, screenSpace).r;
// Discard pixels where the scene lies beyond the light volume, or where the stored depth is zero.
clip(sceneDepth > lightDepth || sceneDepth == 0 ? -1 : 1);
// Reconstruct the world-space position of the scene surface at this pixel.
float4 H = float4(screenSpace.x * 2 - 1, (1 - screenSpace.y) * 2 - 1, sceneDepth, 1);
float4 D = mul(H, iViewProjection);
float4 worldPos = D / D.w;
// Discard anything outside the light's radius.
clip(length(LightPosition - worldPos) > LightRadius ? -1 : 1);
return LightColor;


Pic:

Fixed2.png
I'm glad that you got it working! It actually looks better now that you're using position, since the light volumes look perfectly spherical rather than having polygonal edges.

