Deferred omni/point light issues



Hi! I'm having very annoying issues trying to implement omni lights (i.e., point lights).

Here is a pic of the issue: http://imgur.com/NSbIS77

As you can see, there are a bunch of textured meshes with an omnidirectional light in the center. For some reason, only the sides facing down get lit, which makes sense for meshes that are above the light, but not for meshes that are below it (i.e., from the camera's view, those meshes should remain dark).

Position reconstruction works, since it's what I use for directional lighting, which works from all directions as far as I've tested.

For rendering the point lights I use the two-step process detailed in the article Killzone 2 Deferred Rendering Extended Notes: first a geometry-only draw with a GEQUAL depth test that marks the pixels in front of the volume in the stencil mask, then a second pass with the full shader I posted, with depth test LEQUAL, on the marked pixels.

I use bilinear interpolation to get the view ray for reconstructing the position, as described on derschmale.com - Reconstructing position from depth. Again, I've been using this for directional lighting without issues, albeit without the bilinear interpolation, since there I just draw a fullscreen quad and let the built-in interpolators handle the corner rays. I tried the same bilinear interpolation on the directional light to check whether I got it wrong, and it works just fine too.
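For reference, the reconstruction described on that page boils down to scaling a per-pixel view ray by the linear view-space depth. Here is a minimal CPU-side sketch of that math in C, assuming a symmetric perspective projection; the function and parameter names are illustrative, not the poster's code:

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Reconstruct a view-space position from the pixel's normalized device
 * coordinates and its linear view-space depth. Assumes a symmetric
 * perspective projection described by tan(fov/2) and the aspect ratio. */
static Vec3 reconstruct_view_pos(float ndc_x, float ndc_y,
                                 float linear_depth,
                                 float tan_half_fov, float aspect)
{
    /* View ray through this pixel, normalized so that z == 1. */
    Vec3 ray = { ndc_x * tan_half_fov * aspect,
                 ndc_y * tan_half_fov,
                 1.0f };
    /* Scale by linear depth to land on the surface. */
    Vec3 p = { ray.x * linear_depth, ray.y * linear_depth, linear_depth };
    return p;
}
```

The same idea works whether the ray comes from fullscreen-quad interpolation or from manual bilinear interpolation of the frustum corners; only how the ray reaches the pixel differs.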

For anything else you might need to know, just ask.


The light direction should be calculated in the pixel shader, from the reconstructed pixel position towards the light center (the light center is constant per light, so transform it on the CPU).


Hey TheChubu !

You have inverted the light direction in the vertex shader:

passLightDir = (mv * inPosition) - pLight.viewCenter;

You are doing: LightDir = SurfacePos - LightPos.

The correct calculation is: LightDir = LightPos - SurfacePos.

passLightDir = pLight.viewCenter - (mv * inPosition);

Since your inPosition is a vec3, you should do:

passLightDir = pLight.viewCenter - (mv * vec4(inPosition,1.0));

The reason is that without the w component set to 1.0, the matrix multiplication will only rotate the position, not translate it.

You should compute the LightDir in the pixel shader to avoid problems (just output the view-space position from the vertex shader and use it in the pixel shader).

Why not compute the LightDir in the vertex shader? Because if part of your geometry is culled, a large area will be left unlit.

Don't forget to add if( NdotL > 0.0 ) before adding the specular, to avoid artifacts.

Edited by Alundra


You can't calculate the light direction in the vertex shader with deferred shading: you only have the actual surface position at the pixel shader stage (it's reconstructed from depth).


You can compute the view ray with this vertex shader and use this pixel shader when rendering the sphere geometry (use an icosahedron for better performance):

void main()
{
    gl_Position = WorldViewProjection * InVertex;
    vec2 NDCPosition = gl_Position.xy / gl_Position.w;
    OutTexCoord0 = 0.5 * NDCPosition + 0.5;
    OutViewRay = vec3( NDCPosition.x * TanHalfFov * Aspect,
                       NDCPosition.y * TanHalfFov,
                       1.0 );
}

void main()
{
    vec3 Position = OutViewRay * texture( DepthMap, OutTexCoord0 ).r;
    vec3 LightVec = normalize( LightPosition - Position );
    float NdotL = dot( Normal, LightVec );
    if( NdotL > 0.0 )
    {
        ...
    }
}

But it's better not to do that, because if part of the sphere (or icosahedron) is culled, the culled part will not receive lighting.

This culling problem occurs when the sphere is clipped by the far plane.

Edited by Alundra


All right. I removed those computations from the vertex shader and now I'm only doing this in the fragment shader:

vec3 viewPos = computePosition(viewDepth, texCoord);
vec3 lgtDir = lgt.viewCenter - viewPos;

Where 'viewPos' is the reconstructed view-space position of the fragment and lgt.viewCenter is the center of the bounding volume in view space (with the 'w' component set to 1 so translation is accounted for). That gives the direction from the fragment to the center of the omnidirectional light.

The only thing I've managed is to get it to light only the tops of the meshes rather than the bottoms; i.e., things only get lit in the +X and -Y directions.

I've reviewed the view matrix and it should be fine (it's computed with the same code as for all other meshes), and it gets uploaded to the UBO and all. I'm at a loss as to what's wrong; could I be missing something important from that Killzone 2 article?

EDIT: I'm starting to think there is some mismatch between the bounding volume center and the view-space light center passed through the UBO.

Edited by TheChubu


Nope, the bounding volume is fine: I set it to move around and drew it as a wireframe, and it looks completely right. It's the light center or something; something isn't being converted properly to view space.

I do exactly this:

Spatial spatial = pLight.spatial;
CmpVec4f viewPos = pLight.viewPosition;

// Copy light position to viewPosition vec.
spatial.worldTransform.getPosition( viewPos );
// w component to 1.
viewPos.setW( 1 );
// Update view space position of the light.
OpsMathCmp.mulMatVec( spatial.mvTransform, viewPos, viewPos );
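The transform that code is supposed to perform, written out in plain C (not the poster's Spatial/CmpVec4f/OpsMathCmp API; row-major matrices and all names are illustrative), is just "take the light's world position, set w to 1, multiply by the view matrix":

```c
#include <assert.h>

typedef struct { float v[4]; } Vec4f;
typedef struct { float m[4][4]; } Mat4f;   /* row-major */

static Vec4f mat_mul_vec(const Mat4f *m, Vec4f p)
{
    Vec4f r = {{0.0f, 0.0f, 0.0f, 0.0f}};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r.v[i] += m->m[i][j] * p.v[j];
    return r;
}

/* Transform the light's world-space center into view space. The w
 * component must be 1 so the view matrix's translation is applied. */
static Vec4f light_view_center(const Mat4f *view,
                               float wx, float wy, float wz)
{
    Vec4f p = {{wx, wy, wz, 1.0f}};
    return mat_mul_vec(view, p);
}
```

A quick sanity check like this against known values (camera 10 units back should put a light at the origin 10 units ahead in view space) can help pin down whether the mismatch is in the matrix or in how the UBO is filled.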
// Add to both point light passes.



Getting really really annoyed. Maybe the whole model view transform is wrong.

BTW, the editor ate what I wrote after this code tag again. Fucking cherry on top.

Edited by TheChubu
