Deferred Shading Issues

Started by BionicBytes; 2 comments, last by MJP 16 years ago
Hi, I'm having a spot of trouble with the lighting stage of my simple deferred renderer using OpenGL. I've created a simple (mostly working) deferred renderer based on reading various articles on the web. For reference, my chosen render targets are all RGBA_F16 formats, which seems to give good performance whilst offering good precision. The geometry pass works, and I can view the output of each stage as debug textures.

The specific problem I am having is with the lighting stage. According to various resources on the web, lights can be split into two groups: those that operate globally, such as directional lights (e.g. the Sun), and those that are local, i.e. spot lights and point lights. For global lighting I use a 2D orthographic view and perform a full-screen 2D pass over each pixel, modifying the render target as appropriate for the global/directional light source. This seems to do the right thing.

However, the second type of light is the point and spot light. Right now I've focused on the spot light. What is the correct approach here? The articles/web references are not all that clear and seem to omit some key details. For example, do I:

a) Perform a 2D pass over a sub-range of the 2D screen space? The spot light's bounding volume can be projected to the screen. I have tried this, and although I have my 'clipped screen space rect' I do not think this is entirely correct - what happens to shadows?

b) An NVidia paper ("6800 Leagues...") talks about first rendering a cone using the stencil buffer, in effect creating a volume mask area. This, of course, I can do. But do I then only need a 2D pass, or do I still require a 3D pass using the material setup associated with the spot light? Within this spot light pass I can also attempt to incorporate shadows using a shadow map.

Does anyone have any real practical advice they can share?
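
For reference, a minimal GLSL sketch of the kind of full-screen directional pass described above might look like this (the RT and uniform names are assumptions for illustration, not from my actual code):

// Full-screen directional-light pass: one N.L term per screen pixel.
// RT/uniform names are illustrative assumptions.
uniform sampler2D normalRT;     // G-buffer: eye-space normal
uniform sampler2D albedoRT;     // G-buffer: diffuse albedo
uniform vec3 lightDirEye;       // direction the light shines, eye space, normalized
uniform vec3 lightColor;

void main ()
{
    // gl_TexCoord[0] assumed written by the vertex shader (gl_TexCoord[0] = gl_MultiTexCoord0;)
    vec2 uv = gl_TexCoord[0].st;
    vec3 n = normalize (texture2D (normalRT, uv).xyz);
    vec3 albedo = texture2D (albedoRT, uv).rgb;

    float ndotl = max (dot (n, -lightDirEye), 0.0);
    gl_FragColor = vec4 (albedo * lightColor * ndotl, 1.0);
}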
Quote:Original post by BionicBytes

a) Perform a 2D pass over a sub-range of the 2D screen space?
The spot light's bounding volume can be projected to the screen. I have tried this, and although I have my 'clipped screen space rect' I do not think this is entirely correct - what happens to shadows?

b) An NVidia paper ("6800 Leagues...") talks about first rendering a cone using the stencil buffer, in effect creating a volume mask area. This, of course, I can do.
But do I then only need a 2D pass, or do I still require a 3D pass using the material setup associated with the spot light?
Within this spot light pass I can also attempt to incorporate shadows using a shadow map.



Both are valid approaches, and have different advantages. The first method, simply rendering a quad big enough to cover the region of the screen affected by the light source, has the advantages of

-Least vertex work
-Easy batching
-Coherent rasterization and g-buffer memory access

With the major disadvantage of

-Pixels are wasted in areas of the quad that are outside the light's affected region, and also where there are no lit pixels (i.e. the light is buried behind a wall, or floating in the air)


The second method, using a multi-pass approach with light volumes, has the exact opposite pros and cons. You shade the fewest pixels, since you check whether the light's volume is visible at all and since the 3D volume gives a "tighter fit" around the affected region. However, there's more vertex work, and you can't easily batch, since this method requires a state change between passes.
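
To make the per-pixel cost concrete, the shading inside either method ends up looking something like this GLSL sketch for a spot light (reconstructing the shaded point from the G-buffer and applying a cone falloff; every name here is an illustrative assumption):

// Spot-light shading sketch: fetch eye-space position/normal from the
// G-buffer, then attenuate by distance and by angle from the cone axis.
// All uniform names are illustrative assumptions.
uniform sampler2D positionRT;   // G-buffer: eye-space position
uniform sampler2D normalRT;     // G-buffer: eye-space normal
uniform vec2 invScreenSize;     // 1.0 / viewport dimensions
uniform vec3 lightPosEye;       // spot position, eye space
uniform vec3 spotDirEye;        // spot axis, eye space, normalized
uniform float cosOuterAngle;    // cosine of the cone half-angle
uniform vec3 lightColor;

void main ()
{
    vec2 uv = gl_FragCoord.xy * invScreenSize;   // screen pixel -> G-buffer texcoord
    vec3 p = texture2D (positionRT, uv).xyz;
    vec3 n = normalize (texture2D (normalRT, uv).xyz);

    vec3 toLight = lightPosEye - p;
    float dist = length (toLight);
    vec3 l = toLight / dist;

    // Hard cone cutoff; in practice you'd smoothstep between inner/outer angles.
    float spot = (dot (-l, spotDirEye) > cosOuterAngle) ? 1.0 : 0.0;
    float atten = spot / (1.0 + dist * dist);    // placeholder falloff

    gl_FragColor = vec4 (lightColor * max (dot (n, l), 0.0) * atten, 1.0);
}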

So the key here is that neither of these techniques, and probably any other technique you could come up with, is going to be perfect for all situations. What gives you the best performance is going to depend on the shape of the light, the screen-space size of the light, how many lights of a given type you're rendering, what your scene geometry is like (are there lots of occluders that could bury the light, or could the light easily be floating in the air?), and what sort of hardware your renderer is targeting. You may want to use some sort of heuristic to determine which method to use (if lightVolumeRadius > x, do this); this is what I used to do in my old deferred renderer. I also tried playing around with different variations of techniques: rendering a volume with stencil and then rendering a full-screen quad, marking the stencil in the initial geometry pass in order to mask off areas where lights would be "floating in the sky", batching large numbers of simple cube volumes with z-test for particles... there are lots of possibilities. Ultimately you'll want to minimize your per-pixel work if you want maximum performance, so try to pick your methods with that in mind.

Oh, and about that first method and shadows... any of these methods should work fine with shadow maps; it's pretty much a completely orthogonal problem. All you need to do is make sure that the area you're rendering covers the full volume of the light source.

[Edited by - MJP on April 22, 2008 9:42:01 AM]
Okay - thanks for the info. Good to know both are valid techniques.

So, currently I'm trying to do spot lights using a screen-aligned, clipped 2D rect (derived from the light's volume) rendered with a 2D orthographic projection.
My G-buffer's vertex position is stored in eye space in a 3-component f16 RT.

But isn't there some problem here? The vertex data being sent from the video card to the pixel shader is now, of course, a 2D quad. The rasterizer is interpolating across the entire (or clipped) region of the screen.
The real vertex data is inside the G-buffer - so we must first fetch the real vertex data from the G-buffer's vertex-position RT using the incoming fragment's interpolated 2D position.

Question:
As you render the 2D quad do you also generate a texture coordinate (containing unprojected 2D screen space coordinates) and pass this to the shader?

Second question: what does your app's VertexToLightSpaceProjectionMatrix look like? Here is mine:
Gl.glMatrixMode (Gl.GL_TEXTURE);
Gl.glLoadIdentity ();
Gl.glTranslatef (0.5f, 0.5f, 0.5f);                 // bias: remaps clip space [-1,1] to texture space [0,1]...
Gl.glScalef (0.5f, 0.5f, 0.5f);                     // ...together with this half-scale
Gl.glMultMatrixf (lightCam.fp_glProjectionMatrix);  // light's projection
Gl.glMultMatrixf (lightCam.fp_glModelViewMatrix);   // world space -> light view space
Gl.glMultMatrixf (aCam.fp_glInvModelViewMatrix);    // camera eye space -> world space

Again, I'm assuming you pass a matrix like this to the pixel shader, so that you can then compute a LightSpaceVertexPos using it.
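
In GLSL terms, using such a matrix in the light pass might look like the following sketch (the uniform names here are placeholders I've picked for illustration, not anything from the code above):

// Fragment shader sketch: project a G-buffer position into the light's
// texture space and do a single hard shadow-map comparison.
// All uniform names are illustrative placeholders.
uniform sampler2D positionRT;       // G-buffer: eye-space position (RGBA_F16)
uniform sampler2D shadowMap;        // depth rendered from the light's camera
uniform mat4 eyeToLightTexture;     // bias * scale * lightProj * lightView * invCamView

void main ()
{
    // gl_TexCoord[0] assumed written by the vertex shader (gl_TexCoord[0] = gl_MultiTexCoord0;)
    vec3 eyePos = texture2D (positionRT, gl_TexCoord[0].st).xyz;

    // Into the light's [0,1] texture space via the combined matrix.
    vec4 lp = eyeToLightTexture * vec4 (eyePos, 1.0);
    vec3 shadowCoord = lp.xyz / lp.w;

    // 1.0 = lit, 0.0 = shadowed; modulate your lighting term by this.
    float lit = (texture2D (shadowMap, shadowCoord.xy).r < shadowCoord.z) ? 0.0 : 1.0;
    gl_FragColor = vec4 (lit);
}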

Quote:Original post by BionicBytes

But isn't there some problem here? The vertex data being sent from the video card to the pixel shader is now, of course, a 2D quad. The rasterizer is interpolating across the entire (or clipped) region of the screen.
The real vertex data is inside the G-buffer - so we must first fetch the real vertex data from the G-buffer's vertex-position RT using the incoming fragment's interpolated 2D position.

Question:
As you render the 2D quad do you also generate a texture coordinate (containing unprojected 2D screen space coordinates) and pass this to the shader?



Sure, you can send texture coordinates if you'd like. You can also calculate them from your post-projection 2D coordinates; I do that when rendering light volumes. All you have to do is pass the projected position of your vertex to your pixel shader, then in the pixel shader divide by w and convert x and y from the range [-1,1] to [0,1] (like you do for your shadow-map projection). I've also seen implementations that always render full-screen quads but use scissor rectangles to limit rendering to the affected region; this would let you set your texture coordinates once and then forget about them.
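
In GLSL, that projected-position route might look like this sketch (varying/uniform names are mine, for illustration):

// --- vertex shader ---
varying vec4 projPos;   // clip-space position passed to the fragment shader

void main ()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    projPos = gl_Position;
}

// --- fragment shader ---
varying vec4 projPos;
uniform sampler2D positionRT;   // any G-buffer RT; name is a placeholder

void main ()
{
    vec2 ndc = projPos.xy / projPos.w;   // perspective divide -> [-1,1]
    vec2 uv = ndc * 0.5 + 0.5;           // remap to [0,1] texture space
    gl_FragColor = vec4 (texture2D (positionRT, uv).xyz, 1.0);
}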


Quote:Original post by BionicBytes

Second question: what does your app's VertexToLightSpaceProjectionMatrix look like? Here is mine:
Gl.glMatrixMode (Gl.GL_TEXTURE);
Gl.glLoadIdentity ();
Gl.glTranslatef (0.5f, 0.5f, 0.5f);                 // bias: remaps clip space [-1,1] to texture space [0,1]...
Gl.glScalef (0.5f, 0.5f, 0.5f);                     // ...together with this half-scale
Gl.glMultMatrixf (lightCam.fp_glProjectionMatrix);  // light's projection
Gl.glMultMatrixf (lightCam.fp_glModelViewMatrix);   // world space -> light view space
Gl.glMultMatrixf (aCam.fp_glInvModelViewMatrix);    // camera eye space -> world space

Again, I'm assuming you pass a matrix like this to the pixel shader, so that you can then compute a LightSpaceVertexPos using it.


Yeah, I do something similar for lights that have shadow maps, except using D3DX matrix functions. I also had the code for converting to texture space in my shader, since in HLSL any code that evaluates the same for every vertex or pixel gets pulled out and run as a pre-shader on the CPU.

