Deferred Shading lighting stage

Hey guys,

I've been trying to implement deferred shading lately and I'm a bit confused about the lighting stage.

This is the algorithm I use at the moment:


First pass

- Bind render target
- Bind my G-buffer shader program
- Render the scene

Second pass

- Bind my normal shader for rendering the quad 
(fragment shader takes Position, Normal and Diffuse samplers as uniforms)
- Render full screen quad
- swap buffers

So am I right in assuming that, to light the scene, I should change the second pass to this:


Second pass

- Bind my normal shader for rendering the quad 
(fragment shader takes Position, Normal and Diffuse samplers as uniforms)
- FOR EACH LIGHT IN THE SCENE
    - Set this light as uniform for the shader
    - Render full screen quad 
- END FOR
- swap buffers

And in my fragment shader I'll have code that shades each pixel based on the type of light passed in (directional, point or spot).

Cheers for any help

That seems more or less correct to me. At some point you'll want to implement some sort of light culling so that you're not shading every pixel with every light. But yeah, typically you'll have one shader for every material type, and one shader for every light type (directional, point, ambient, etc.). In your first pass, you bind the G-buffer as your render target. You render the scene using the material shaders, which output normals, diffuse, etc., to their respective textures in the G-buffer. In the second stage, you go one light at a time, and you render a full-screen quad using the correct light shader for each light. The light shader samples from each of the G-buffer textures, and applies the correct lighting equations to get the final shading value for that light. Make sure to additively blend the light fragments at this stage.
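To make that concrete, here is a rough sketch of what one point-light pass fragment shader could look like in GLSL. The texture and uniform names (gPosition, gNormal, gDiffuse, lightPosition, lightColor) are just placeholders, and it assumes the G-buffer stores view-space positions and normals:

#version 330 core

uniform sampler2D gPosition;   // view-space position
uniform sampler2D gNormal;     // view-space normal
uniform sampler2D gDiffuse;    // albedo / diffuse color

uniform vec3 lightPosition;    // light position in view space
uniform vec3 lightColor;

in vec2 texCoord;              // from the full-screen quad vertex shader
out vec4 fragColor;

void main()
{
    vec3 P      = texture(gPosition, texCoord).xyz;
    vec3 N      = normalize(texture(gNormal, texCoord).xyz);
    vec3 albedo = texture(gDiffuse, texCoord).rgb;

    vec3 L = normalize(lightPosition - P);
    float NdotL = max(dot(N, L), 0.0);

    // This pass outputs one light's contribution only; additive blending
    // accumulates the passes into the final image.
    fragColor = vec4(albedo * lightColor * NdotL, 1.0);
}

On the CPU side you would enable the additive blend with glEnable(GL_BLEND) and glBlendFunc(GL_ONE, GL_ONE) before looping over the lights and drawing the quad once per light.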

What a lot of people do, in order to cut down on the number of pixels being shaded with a given light, is to have some geometric representation for the light (a sphere for a point light, for example). Before rendering the full-screen quad with the point light shader, they'll stencil the sphere in so that they can be sure to only shade pixels within that sphere. Some even go as far as to treat the sphere almost like a shadow volume, stenciling in only the portions where scene geometry intersects the sphere. This gives you near pixel-perfect accuracy, but it might be overkill. I've been reading lately that some people just approximate the range of the point light using a billboarded quad, because the overhead in rendering a sphere into the stencil buffer (let alone doing the shadow volume thing) is greater than the time spent unnecessarily shading pixels inside the quad that the light can't reach.

Of course, a real point light can reach anywhere. If you were to use a sphere to approximate the extent of the point light, you'd have to use a somewhat phony falloff function so that the light attenuates to zero at the edge of the sphere.
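For example, something along these lines in GLSL (just one of many possible curves; the only property that matters is that it reaches exactly zero at the radius you used to build the light volume):

// Attenuation forced to hit zero at 'radius', so the lit area matches the
// sphere (or quad) used as the light volume.
float attenuate(float dist, float radius)
{
    float x = clamp(dist / radius, 0.0, 1.0);
    float falloff = 1.0 - x * x;   // 1.0 at the light, 0.0 at the radius
    return falloff * falloff;      // squared for a smoother tail, still exactly zero at the edge
}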

Thanks. I'm currently outputting Position, Normal, Diffuse, Tangent and Bitangent. Should I also output specular and ambient properties, or would that use too much memory?

What I would recommend doing is outputting your positions and normals in view space. If you have any tangent-space normals, transform them in your material shader before outputting them to your G-buffer; that way, you don't need to store tangents or bitangents. I do think it would be a good idea to output specular and ambient properties. If you find that memory bandwidth becomes a problem, there are some optimizations you could try. For instance, you could reconstruct the position from the depth buffer, thus getting rid of the position texture in your G-buffer. You could also store just two components of each normal (say, x and y) and use math to reconstruct the third in your light shaders. Even though these reconstructions take time, it's often worth it because of the memory bandwidth savings. Also, if you haven't already done so, you can try using a 16-bit half-float format instead of a full 32-bit floating-point format.
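A quick sketch of both reconstructions in GLSL, for use in your light shaders (the names are just placeholders, and the normal trick assumes you only ever store camera-facing normals, which isn't strictly true under perspective projection but is usually close enough):

// Rebuild the view-space normal from the two stored components.
vec3 unpackNormal(vec2 n)
{
    float z = sqrt(max(1.0 - dot(n, n), 0.0));
    return vec3(n, z);
}

// Rebuild the view-space position from the depth buffer. 'depth' is the value
// sampled from the depth texture in [0, 1], and 'invProjection' is the inverse
// of the projection matrix used when filling the G-buffer.
vec3 positionFromDepth(vec2 texCoord, float depth, mat4 invProjection)
{
    vec4 clip = vec4(vec3(texCoord, depth) * 2.0 - 1.0, 1.0);
    vec4 view = invProjection * clip;
    return view.xyz / view.w;
}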

Just one last thing: what do you mean by "one shader for every material type"?

My definition of a material is just something like:


class Material
{
    Vec3 diffuse;        // diffuse color
    Vec3 ambient;        // ambient color
    Vec3 specular;       // specular color

    Tex2 diffuse_Map;    // diffuse (albedo) texture
    Tex2 normal_Map;     // tangent-space normal map
    Tex2 specular_Map;   // specular map
};

so I don't know what you mean.

That material class probably covers all that you need at this stage in terms of material options. However, you might find it useful to have separate shaders to cover the cases where a) you have untextured geometry, b) you have geometry with a diffuse texture, but no specular or normal map, c) you have a diffuse texture and a normal map, but no specular, d) etc. If you handle all of these cases with the same shader, you end up sampling textures that aren't there (or else introducing conditional branching into your shader code). Of course, if everything in your game has all of these textures, then it isn't a problem. Unfortunately, I am not that lucky because I have to render some legacy models, some of which have untextured geometry.
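A common way to get those variants without writing each one by hand is to compile the same source several times with different preprocessor defines injected into it. A rough sketch in GLSL, with made-up define and uniform names:

// G-buffer material shader, compiled once per permutation by injecting the
// appropriate defines (e.g. "#define HAS_DIFFUSE_MAP") just after the
// #version line before compiling.
#version 330 core

uniform vec3 materialDiffuse;

#ifdef HAS_DIFFUSE_MAP
uniform sampler2D diffuseMap;
#endif

in vec2 texCoord;
out vec4 outDiffuse;   // one of the G-buffer color attachments

void main()
{
#ifdef HAS_DIFFUSE_MAP
    outDiffuse = vec4(materialDiffuse * texture(diffuseMap, texCoord).rgb, 1.0);
#else
    outDiffuse = vec4(materialDiffuse, 1.0);
#endif
    // ... same pattern for the normal and specular G-buffer outputs
}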

As you start working with more advanced materials, you may find that your shader inputs grow in number and become more specialized, and so the number of material shaders you use will grow as well.

Ah ok, I get it, thanks. I was wondering about this too, because not all of my geometry has tangents and bitangents, which is a bit annoying... Thanks for all your help.

This topic is closed to new replies.
