Deferred rendering light position

I've run into a small issue with my deferred rendering shader. I'm working on the light pass, using my own light data where I pass in values like position, diffuse, and specular. The problem occurs with the position. I understand that to get it into eye space I must multiply it by the inverse model-view; however, when I get to the lighting pass I no longer have the camera set up, so the model-view is incorrect. This also seems like a wasteful way of getting the correct light position, since it only needs to be updated, at most, once per frame when the camera moves, whereas with that method I would be computing the position per pixel. My question is: how do I fix this? Do I need to make a camera that stores the inverse model-view and then just have it update the lights when I move, or is there a better way? Thanks
You could store the position in eye space when you render the geometry into the position map, so that in the lighting pass you already have what you need.

Why don't you have the camera information available? I do. True, for directional lighting the camera that is immediately available when the directional lighting shader executes is a 2D ortho camera, but you could still pass in shader matrix values relating to the scene's 3D camera should you require it.

It seems his problem isn't about reconstructing geometry position from a depth buffer, but about transforming the light position into view space.

If I understood correctly, you have two ways to accomplish what you are looking for:
1- transform the light position on the CPU and pass the value you need as a shader parameter (see the sketch after this list).
2- calculate lighting in world space, thus eliminating the need to transform the light position. Note that to do this you need to transform the position stored in the depth buffer into world space; I don't know if you have that matrix available.
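For option 1, a minimal sketch (not from the original post) of the per-frame CPU work might look like the fragment below; viewMatrix is assumed to be the same column-major camera matrix you use when filling the G-buffer, and lightPosViewLocation is a hypothetical uniform location for the eye-space light position:

/* Sketch only: transform the light position into eye space once per frame
   on the CPU, then pass it to the lighting shader as a plain uniform so the
   shader never needs the model-view matrix. All names are hypothetical. */
float viewMatrix[16];                                  /* camera matrix, column-major  */
float lightPosWorld[4] = { 10.0f, 5.0f, -3.0f, 1.0f }; /* light position, world space  */
float lightPosView[4];
int i;

/* build viewMatrix from your camera here (same matrix used for the G-buffer pass) */

for (i = 0; i < 4; ++i)                                /* column-major mat4 * vec4     */
    lightPosView[i] = viewMatrix[0  + i] * lightPosWorld[0]
                    + viewMatrix[4  + i] * lightPosWorld[1]
                    + viewMatrix[8  + i] * lightPosWorld[2]
                    + viewMatrix[12 + i] * lightPosWorld[3];

glUniform4fv(lightPosViewLocation, 1, lightPosView);   /* eye-space light position     */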

If you have a deferred shader you should be rendering lights as 3D meshes. You need a full world-view-projection matrix to do it. Unless you draw lights using full-screen quads (which is bad practice in a deferred shader), you should already have the ability to set those matrices.
If the light is directional, you transform it by the inverse-transpose of the camera matrix, which will end up looking like the camera matrix itself (the upper 3x3, anyway).
For a point light, transform the position by the camera matrix.
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];

/* Build a camera matrix with glhlib: translate, rotate about X, mirror Z. */
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);

/* Compute the inverse so both matrices can be passed to the shader. */
glhQuickInvertMatrixf2(matrix, inverse_matrix);

glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
Quote:Original post by undead
If you have a deferred shader you should be rendering lights as 3D meshes.


What exactly do you mean here?
Quote:Original post by Valeranth
Quote:Original post by undead
If you have a deferred shader you should be rendering lights as 3D meshes.


What exactly do you mean here?

Suppose you already filled your GBuffer.
You have to draw a point (omni) light. The light attenuation is set so that the amount of light emitted goes to zero after 5 meters/units.

Drawing a full-screen quad is a huge waste of resources for a light like that. What you should do is draw a sphere with radius 5.0, centered at that light's position. By doing so, you are able to cull away pixels which will never be affected. The same applies to directional lights (with a box) and spot lights (a pyramid). Note that the geometry isn't really required to fit the light falloff precisely; instead of rendering a 2k-poly sphere you could use a box. You'll waste just a few pixels, but save vertex shader power. Which solution is better depends on the application you are developing and the target platform.
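As a rough illustration (not from the original post), drawing that light volume with classic OpenGL could look something like the fragment below; drawUnitSphere() is a hypothetical helper that submits a unit-sphere mesh, and the lighting shader is assumed to be bound already:

/* Sketch: draw the point light's bounding sphere as its light volume.
   The camera (view) matrix is assumed to already be on the modelview stack,
   and drawUnitSphere() is a hypothetical helper that renders a unit sphere. */
float lightPos[3] = { 2.0f, 1.0f, -4.0f };   /* light position, world space    */
float lightRadius = 5.0f;                    /* attenuation reaches zero here  */

glPushMatrix();
glTranslatef(lightPos[0], lightPos[1], lightPos[2]); /* move to the light       */
glScalef(lightRadius, lightRadius, lightRadius);     /* unit sphere -> radius 5 */
drawUnitSphere();  /* the fragment shader reads the G-buffer and shades here    */
glPopMatrix();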

In my experience, in a "real" scene the only light REQUIRED to be drawn as a full-screen quad is the sun/moon light (because of problems with shadow mapping). Of course you can create as many "infinite" lights as you want and perform a full-screen pass for each one.

Hope this helps :)


Thank you for that explanation, but how would I go about doing that? Would rendering the sphere with a stencil buffer work right?
Quote:Original post by Valeranth
Thank you for that explanation, but how would I go about doing that? Would rendering the sphere with a stencil buffer work right?

You don't need stencil buffer writes to draw lights as meshes. There are simple optimizations which result in a significant speedup. The stencil buffer is useful if you have a cubemap/skydome; in that case you should use it to tag the pixels unaffected by lighting. When rendering lights, you then only affect pixels whose stencil value is different from the reference "unaffected" value. This works for both full-screen quads and meshes.
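As an illustration (my sketch, not part of the original reply), the stencil tagging could be set up roughly like this, using 1 as an arbitrary "unaffected" reference value:

/* Sketch: tag skydome pixels as "unaffected by lighting" (value 1). */

/* --- during the G-buffer pass, when drawing the skydome --- */
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);          /* always pass, reference value = 1 */
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  /* write 1 where the skydome passes */
/* ... draw skydome ... */

/* --- during the lighting passes --- */
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);        /* only shade pixels not tagged     */
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);     /* don't modify the stencil here    */
/* ... draw light volumes / full-screen quads ... */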

I suggest you start with the simplest optimization possible: draw with inverted culling (draw only the back faces), disable z writes, and set the z test so that farther z values pass. The reason is that you want to affect the pixels that are IN FRONT of (nearer than) the back faces of the light volume, so when rendering a back face you test whether it is BEHIND (farther than) the geometry already in the depth buffer. A state setup along these lines is sketched below.
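Something like the following, as a sketch only (whether you want GL_GREATER or GL_GEQUAL depends on your depth conventions):

/* Sketch: state for drawing light volumes as back faces with an inverted z test. */
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);      /* cull front faces, i.e. draw only back faces          */
glDepthMask(GL_FALSE);     /* disable z writes                                     */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);   /* pass where the back face is farther than the scene   */
/* ... draw the light volume mesh with the lighting shader bound ... */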

The tricky part is that there are four cases to cover when rendering lights as meshes:

case 1- you are outside the light volume
case 2- you are partly outside the light volume (the near plane clips a front face)
case 3- you are inside the light volume
case 4- you are partly inside the light volume (the near plane clips a back face)

The simplest configuration (draw back faces and perform an "inverted" z test) covers cases 1 and 2. Here's why:

case 1- the light volume is convex, so the projected back faces cover the same area as the projected front faces
case 2- since you don't draw the front faces, they can't be clipped by the near plane (they are removed by culling before that could happen)

In theory (and in practice) case number three also works, because:

case 3- if you are inside the volume, you obviously see only back faces all around you

The point is that the area covered by the back faces when you are inside a light volume is the entire screen (not taking the z test into account), so you could turn this into a full-screen quad pass. The reason why it is better to draw a full-screen quad instead of the standard back-face + z-test pass is the following, aka case number four:

case 4- if you are inside the light volume but one of the back faces gets partly clipped by your near plane, you'll see subtle artifacts (lights flickering/disappearing).

In order to cover case number four, the simplest solution is to draw a full-screen quad. It's not going to be a performance killer, as it's a corner case. The problem is DETECTING case number four. In theory you should clip the light volume's back faces against your view frustum (on the CPU, not the GPU) to determine whether that geometry is clipped by the near plane.

If you don't want to do CPU clipping, the only way to easily know whether you might be facing case number four is to test whether your camera lies inside the light volume. If it's inside, you perform a full-screen pass (cases 3-4); if it's outside, you only draw back faces with the z test (cases 1-2). A sketch of that test is below.
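A minimal sketch of that test for a point light, assuming the light volume is a sphere; the near-plane slack and the two helper functions (drawFullscreenQuadPass(), drawBackfacePass()) are hypothetical:

#include <math.h>   /* for sqrtf */

/* Sketch: choose between the full-screen pass and the back-face pass based on
   whether the camera sits inside the light's bounding sphere. */
float dx = cameraPos[0] - lightPos[0];
float dy = cameraPos[1] - lightPos[1];
float dz = cameraPos[2] - lightPos[2];
float dist = sqrtf(dx * dx + dy * dy + dz * dz);

/* Grow the radius slightly so a back face clipped by the near plane
   (case 4) still counts as "inside". */
if (dist < lightRadius + nearPlaneDistance)
    drawFullscreenQuadPass();   /* cases 3-4: full-screen quad              */
else
    drawBackfacePass();         /* cases 1-2: back faces + inverted z test  */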

This isn't the optimal solution if you need maximum performance, as there are some cons:
- if you are looking at a wall and there's a light behind it, the light won't get rejected by the z test.
- when inside the light volume, you don't take advantage of the z test.
- batching lights isn't easy.

Despite those cons, I think this technique is a decent tradeoff between speed and implementation complexity.

Advanced solutions can involve multipass techniques and/or the use of "different geometry/geometry adjustments" to take near plane distance into account.
