Deferred omni/point light issues

Started by
5 comments, last by TheChubu 9 years, 6 months ago

Hi! I'm having very annoying issues trying to implement omni lights (i.e., point lights).

Here is a pic of the issue http://imgur.com/NSbIS77

As you can see, there are a bunch of textured meshes, with an omnidirectional light in the center. For some reason, only the sides facing down get lit, which makes sense for meshes that are above the light, but not for meshes that are under it (i.e., from the camera's view, those meshes should remain dark).

This is my vertex shader

This is my fragment shader

Position reconstruction works, since it's what I use for directional lighting, which works for all directions from what I've tested.

For rendering the point lights I use the two-step process detailed in the article Killzone 2 Deferred Rendering Extended Notes: first a geometry-only draw with a GEQUAL depth test that marks the pixels in front of the volume in the stencil mask, then another pass with the full shader I posted, with depth test LEQUAL, on the marked pixels.

I use bilinear interpolation to get the view ray for reconstructing the position, as described at derschmale.com - Reconstructing position from depth. Then again, I've been using that for directional lighting without issues, albeit without the bilinear interpolation, since I just draw a fullscreen quad and let the built-in interpolators handle the corners. I tried doing the same bilinear interpolation on the directional light to test whether I had gotten it wrong, and it works just fine too.
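As a sanity check, the bilinear interpolation of the frustum corner rays can be sketched outside the shader. This is a minimal Python/numpy sketch assuming a symmetric frustum; the names (`corner_rays`, `interpolate_ray`) and the FOV/aspect values are illustrative, not from the original code:

```python
import numpy as np

def corner_rays(tan_half_fov, aspect):
    """View-space rays through the four frustum corners, scaled so z == 1."""
    x, y = tan_half_fov * aspect, tan_half_fov
    return {
        "bl": np.array([-x, -y, 1.0]),
        "br": np.array([ x, -y, 1.0]),
        "tl": np.array([-x,  y, 1.0]),
        "tr": np.array([ x,  y, 1.0]),
    }

def interpolate_ray(c, uv):
    """Bilinearly interpolate the corner rays at texture coordinate uv in [0,1]^2."""
    u, v = uv
    bottom = (1.0 - u) * c["bl"] + u * c["br"]
    top    = (1.0 - u) * c["tl"] + u * c["tr"]
    return (1.0 - v) * bottom + v * top

# Position reconstruction: view-space position = ray * linear view-space depth.
# This works because every interpolated ray still has ray.z == 1.
c = corner_rays(tan_half_fov=np.tan(np.radians(30.0)), aspect=16 / 9)
ray = interpolate_ray(c, (0.25, 0.75))
view_pos = ray * 5.0  # fragment at view-space depth 5
```

Because the interpolation is linear in each component and all four corners share z == 1, the interpolated ray keeps z == 1, which is what makes the multiply-by-depth reconstruction valid.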

For anything else you might need to know, just ask.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


The light direction should be calculated in the pixel shader: from the reconstructed pixel position towards the light center (the light center is constant, so calculate it on the CPU).

Hey TheChubu !

You have inverted the light direction in the vertex shader:


passLightDir = (mv * inPosition) - pLight.viewCenter;

You are doing: LightDir = SurfacePos - LightPos.

The correct calculation is: LightDir = LightPos - SurfacePos.


passLightDir = pLight.viewCenter - (mv * inPosition);

Since your inPosition is a vec3, you should do:


passLightDir = pLight.viewCenter - (mv * vec4(inPosition,1.0));

The reason is that without the w component set to 1.0, the matrix multiplication will only rotate the position, not translate it.
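To illustrate why the w component matters, here is a small numpy sketch (the matrix and point are made up for the example) showing that a vector with w = 0 ignores the translation column of the matrix:

```python
import numpy as np

# A model-view matrix that translates by (2, 3, 4); column-vector convention,
# i.e. vectors are multiplied on the right, as in GLSL's mv * v.
mv = np.array([
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 1.0, 0.0, 3.0],
    [0.0, 0.0, 1.0, 4.0],
    [0.0, 0.0, 0.0, 1.0],
])

p = np.array([1.0, 1.0, 1.0])

# w = 1: the translation column is applied -- the value is treated as a point.
as_point = mv @ np.append(p, 1.0)      # -> [3, 4, 5, 1]

# w = 0: the translation column is ignored -- the value is treated as a direction.
as_direction = mv @ np.append(p, 0.0)  # -> [1, 1, 1, 0]
```

This is exactly the difference between `vec4(inPosition, 1.0)` and `vec4(inPosition, 0.0)` in the shader above.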

You should compute LightDir in the pixel shader to avoid problems (just output the view-space position from the vertex shader and use it in the pixel shader).

Why not compute LightDir in the vertex shader? Because if part of your geometry is culled, you will be missing lighting over a large area.

Don't forget to add if( NdotL > 0.0 ) before adding the specular term, to avoid artifacts.
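As an illustration of why that guard matters, here is a minimal Python sketch of a Blinn-Phong shade function (the function names and shininess value are made up for the example): without the NdotL test, the half-vector specular term can be non-zero even when the surface faces away from the light, leaking highlights onto back-facing surfaces.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(normal, light_dir, view_dir, shininess=32.0):
    """Blinn-Phong diffuse + specular, with specular gated on NdotL > 0."""
    n_dot_l = np.dot(normal, light_dir)
    if n_dot_l <= 0.0:
        # Surface faces away from the light: no diffuse and, crucially,
        # no specular either -- the N.H term below could still be positive here.
        return 0.0, 0.0
    half = normalize(light_dir + view_dir)
    spec = max(np.dot(normal, half), 0.0) ** shininess
    return n_dot_l, spec

n = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
diffuse, specular = shade(n, normalize(np.array([0.0, 1.0, 1.0])), v)
back_d, back_s = shade(n, np.array([0.0, -1.0, 0.0]), v)  # light below the surface
```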

You can't calculate the light direction in the vertex shader with deferred shading; you only have the actual surface position at the pixel shader stage (it's reconstructed from depth).

You can compute the view ray using this vertex shader and use this pixel shader when rendering the sphere geometry (use an icosahedron for better performance):


// Assumed context: TanHalfFov = tan(vertical FOV / 2), Aspect = width / height.
uniform mat4 WorldViewProjection;
uniform float TanHalfFov;
uniform float Aspect;
in vec4 InVertex;
out vec2 OutTexCoord0;
out vec3 OutViewRay;

void main()
{
  gl_Position = WorldViewProjection * InVertex;
  // Perspective divide to get the position in NDC.
  vec2 NDCPosition = gl_Position.xy / gl_Position.w;
  // Remap NDC [-1, 1] to texture coordinates [0, 1].
  OutTexCoord0 = 0.5 * NDCPosition + 0.5;
  // View-space ray through this vertex, scaled so that its z equals 1.
  OutViewRay = vec3( NDCPosition.x * TanHalfFov * Aspect,
                     NDCPosition.y * TanHalfFov,
                     1.0 );
}

Here is the pixel shader:


uniform sampler2D DepthMap;  // assumed to store linear view-space depth
uniform vec3 LightPosition;  // light center in view space
in vec2 OutTexCoord0;
in vec3 OutViewRay;

void main()
{
  // The ray has z == 1, so scaling it by the linear depth gives the position.
  vec3 Position = OutViewRay * texture( DepthMap, OutTexCoord0 ).r;
  vec3 LightVec = normalize( LightPosition - Position );
  float NdotL = dot( Normal, LightVec );  // Normal fetched from the G-buffer
  if( NdotL > 0.0 )
  {
    ...
  }
}

But it's better not to do that, because if the sphere (or icosahedron) is partially culled, you will not have lighting on the culled part.

This culling problem happens when the sphere is clipped by the far plane.
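To convince yourself that the ray-times-depth reconstruction in the pixel shader is sound, it can be checked round-trip on the CPU. A small numpy sketch, assuming a symmetric perspective projection and a depth map that stores linear view-space depth (the FOV and aspect values are illustrative):

```python
import numpy as np

tan_half_fov = np.tan(np.radians(30.0))
aspect = 16 / 9

# A known view-space point, with positive z matching the shader's ray.z == 1 convention.
p = np.array([1.5, -0.8, 5.0])

# Where the point lands in NDC under a symmetric perspective projection.
ndc_x = p[0] / (p[2] * tan_half_fov * aspect)
ndc_y = p[1] / (p[2] * tan_half_fov)

# The view ray the vertex shader would emit for that NDC position.
ray = np.array([ndc_x * tan_half_fov * aspect, ndc_y * tan_half_fov, 1.0])

# Reconstruction: ray * linear view-space depth recovers the original point.
reconstructed = ray * p[2]
```

If this round trip holds on the CPU but not in the shader, the depth buffer is likely not storing linear view-space depth, which is a separate conversion step.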

All right. Removed those computations from the vertex shader and now I'm only doing this in the fragment shader:


vec3 viewPos = computePosition(viewDepth, texCoord);
vec3 lgtDir = lgt.viewCenter - viewPos;

Where 'viewPos' is the reconstructed view-space position of the fragment and lgt.viewCenter is the center of the bounding volume in view space (with the 'w' component set to 1 so the translation is accounted for). This gives the direction from the fragment to the center of the omnidirectional light.

The only thing I managed is to get it to light only the tops of the meshes rather than the bottoms, i.e., things only get lit in the +X and -Y directions.

I've revised the view matrix and it should be fine (it gets computed with the same code as for all the other meshes), and it gets uploaded to the UBO and all. I'm at a loss as to what's wrong. Could it be that I'm missing something important from that Killzone 2 article?

EDIT: I'm starting to think there is some mismatch between the bounding volume center and the view space light center passed through the UBO.


Nope. The bounding volume is fine, i.e., I set it to move around and drew it as a wireframe, and it looks completely fine. It's the light center or something; something isn't being converted properly to view space.

I do exactly this:


Spatial spatial = pLight.spatial;
CmpVec4f viewPos = pLight.viewPosition;
 
// Copy light position to viewPosition vec.
spatial.worldTransform.getPosition( viewPos );
// w component to 1.
viewPos.setW( 1 );
// Update view space position of the light.
OpsMathCmp.mulMatVec( spatial.mvTransform, viewPos, viewPos );
// Add to both point light passes.
prebatch.renderQueue.add( pLight );
batch.renderQueue.add( pLight );
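One way to chase the suspected mismatch is to recompute the light's view-space position on the CPU with a known-good view matrix and compare it against what the engine uploads to the UBO. A numpy sketch with a hypothetical look_at (column-vector convention, assumed to match the engine); a common culprit is uploading the matrix in the wrong major order:

```python
import numpy as np

def look_at(eye, target, up):
    """Build a right-handed view matrix (column-vector convention)."""
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    view = np.identity(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

view = look_at(eye=np.array([0.0, 0.0, 10.0]),
               target=np.zeros(3),
               up=np.array([0.0, 1.0, 0.0]))

# Light at the world origin; w = 1 so the translation applies.
light_world = np.array([0.0, 0.0, 0.0, 1.0])
light_view = view @ light_world

# The camera looks down -Z from z = 10, so the light should land at
# view-space (0, 0, -10). If the engine produces a transposed result,
# the matrix was stored or uploaded in the wrong major order.
```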

Getting really really annoyed. Maybe the whole model view transform is wrong.

BTW, editor ate what I wrote after this code tag again. Fucking cherry on top.


This topic is closed to new replies.
