Deferred shading - confusion

Hi. I am trying to implement deferred shading in my engine and I am slightly confused. I have already set up multiple render targets, and this is what I have so far:

1. Bind the deferred shader and fill three textures with position, normal and texture color.
2. Bind the lighting shader and the FBO textures; the multiple textures are then available in my fragment shader and I can output each of them.

An image of how my textures look when rendering a simple terrain and an axe is attached. Is there anything wrong with them? (The top-right one is just the ambient property.)

So, to actually implement the lighting, I have a few questions that I would be glad if anyone could clarify for me.

1. Do I output the scene once in the deferred pass and then blend the lighting pass onto it, or is all of the fragment output done in the lighting pass?
2. In the lighting pass, what do I have to draw? For a point light, do I have to use a 'proxy shape' such as a cone, or can I compute shading for each pixel based on the light's area of effect?
3. Where do materials come in in deferred shading? Do I have to render the material properties (ambient, diffuse, specular, emissive) of every object to a texture, for every pixel?

In addition, are there any resources on deferred shading that you can recommend? I have read the major ones (Guerrilla Games, NVIDIA, the GameDev image-space lighting article).

Thanks in advance. :)
Doesn't look as if there's anything wrong with your g-buffer textures. As regards your questions:

1) All of the fragment output can be done in the lighting pass - your g-buffer is an input to the lighting stage, you accumulate 'lit' pixels in the final framebuffer.
2) You can draw anything you like in the lighting pass, as long as it covers the pixels which need to be lit. You could draw a full screen-aligned quad for every light, but that's pretty inefficient (you'll end up shading a lot of pixels which don't need to be shaded). Drawing the light volumes is the optimal solution: a sphere for point lights and a cone for spotlights is generally the way to go (see the sketch after point 3).
3) You'll need to write material properties to a target in the g-buffer. How you do this depends on what your lighting stage requires and the sort of material properties you need to support. A simple format might be something like R = specular level, G = specular power, B = emissiveness, A = AO factor.
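Regarding point 2, a common approach is to rasterize a unit sphere scaled out to the light's radius in the vertex shader, so the rasterized fragments cover exactly the pixels the light can reach. A minimal GLSL sketch, assuming hypothetical attribute/uniform names (aPosition, uViewProj, uLightPos, uLightRadius):

#version 330 core
layout(location = 0) in vec3 aPosition;  // vertex of a unit sphere centred at the origin
uniform mat4 uViewProj;                  // combined view-projection matrix
uniform vec3 uLightPos;                  // world-space light position (assumed name)
uniform float uLightRadius;              // radius of the light's area of effect
void main() {
    // Scale and translate the unit sphere so it covers the light's volume.
    vec3 worldPos = aPosition * uLightRadius + uLightPos;
    gl_Position = uViewProj * vec4(worldPos, 1.0);
}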

Here are a few links to some deferred rendering resources (you may have already seen some of them):

http://developer.dow...red_Shading.pdf
http://www.talula.de...rredShading.pdf
http://bat710.univ-lyon1.fr/~jciehl/Public/educ/GAMA/2007/Deferred_Shading_Tutorial_SBGAMES2005.pdf
http://developer.amd...StarCraftII.pdf
The first thing you have to do, right before you draw lights, is put the full screen diffuse to the screen. Then, for each light such as a point light, you will have to draw a physical sphere, but your shader isn't drawing the sphere; it's just using it to do lighting on the diffuse you already drew to the screen. It just continually blends with what is on screen, and your screen will fill up as more lights are drawn.
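To illustrate the "your shader isn't drawing the sphere" point: while the sphere is being rasterized, the light's fragment shader maps each fragment's window coordinates back to the screen/g-buffer texel it covers. A minimal sketch, with assumed uniform names (uDiffuse, uScreenSize):

#version 330 core
uniform sampler2D uDiffuse;   // the full-screen diffuse already drawn
uniform vec2 uScreenSize;     // viewport size in pixels
out vec4 outColor;
void main() {
    // The sphere only provides coverage; look up the texel under this fragment.
    vec2 uv = gl_FragCoord.xy / uScreenSize;
    vec3 sceneDiffuse = texture(uDiffuse, uv).rgb;
    // ...compute this light's contribution here...
    // Blend additively into the framebuffer (glBlendFunc(GL_ONE, GL_ONE) app-side).
    outColor = vec4(sceneDiffuse, 1.0);  // placeholder: a real shader outputs the lit color
}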


Thank you very much for your answers!

Good resources, thanks. The only thing I still don't understand is how to pack an RGBA value into one component. My render targets use the GL_RGBA16F format, so how can I pack and unpack values into this format in GLSL?




dpadam450 wrote: "The first thing you have to do, right before you draw lights, is put the full screen diffuse to the screen. [...]"
So, in the deferred pass stage, the output will be the diffuse color?
Although it's probably possible to pack an RGBA value into a single two-byte component, you'd lose a lot of precision, since you're halving the number of bits representing each component. I can only assume that you're thinking in terms of the way that 'full' Phong lighting treats materials, with separate colour values for ambient/diffuse/specular/emissive. This is overkill, especially for a deferred renderer, where you preferably want to limit the memory cost of the g-buffer. The way in which you slim down the material properties depends on what kinds of materials you want to render and how much 'space' (i.e. unused components) you've got in the g-buffer. You could use a whole render target to store material properties, or elbow them into any unused components on other targets (most of the references do this).
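If you did want to try packing anyway, here's a minimal GLSL sketch of the idea, under the assumption that each packed value can tolerate 5-bit quantization; a half float only represents integers exactly up to 2048, which is exactly why the precision loss is so severe (packing all four RGBA channels into one channel would leave only two or three bits each, which is rarely usable):

// Pack two values in [0,1], quantized to 5 bits each, into one 16F channel.
float pack2(float a, float b) {
    float ai = floor(clamp(a, 0.0, 1.0) * 31.0);
    float bi = floor(clamp(b, 0.0, 1.0) * 31.0);
    return ai * 32.0 + bi;   // max value 1023, exactly representable in a half float
}

// Recover the two quantized values.
void unpack2(float p, out float a, out float b) {
    a = floor(p / 32.0) / 31.0;
    b = mod(p, 32.0) / 31.0;
}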

So a simple g-buffer layout might be something like this:
Target0: RGB = diffuse albedo A = specular power
Target1: RGB = normal xyz A = specular intensity
Target2: RGB = position xyz A = emissiveness
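To make that layout concrete, a hypothetical g-buffer-pass fragment shader for it might look like the sketch below; every name here (the varyings, uniforms and output targets) is an assumption for illustration:

#version 330 core
in vec3 vWorldPos;               // interpolated world-space position
in vec3 vNormal;                 // interpolated world-space normal
in vec2 vTexCoord;
uniform sampler2D uDiffuseMap;   // the object's texture
uniform float uSpecPower;        // material shininess exponent
uniform float uSpecIntensity;    // material specular intensity
uniform float uEmissive;         // material emissiveness
layout(location = 0) out vec4 outTarget0;  // RGB = diffuse albedo, A = specular power
layout(location = 1) out vec4 outTarget1;  // RGB = normal, A = specular intensity
layout(location = 2) out vec4 outTarget2;  // RGB = position, A = emissiveness
void main() {
    outTarget0 = vec4(texture(uDiffuseMap, vTexCoord).rgb, uSpecPower);
    outTarget1 = vec4(normalize(vNormal), uSpecIntensity);
    outTarget2 = vec4(vWorldPos, uEmissive);
}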

You render these values out at the g-buffer stage. Then, at the lighting stage, you clear the output framebuffer and render the lights. Each light you render taps into the g-buffer targets to get the required data. Obviously, since the material properties have been slimmed down, you'll need to use a modified lighting equation. So, for the example g-buffer, you might do:

ambient = material_diffuse * light_color * light_ambient_level;
diffuse = material_diffuse * light_color * light_diffuse_level;
specular = light_color * material_specular_level * pow(light_specular_level, material_specular_power);
emissive = material_emissiveness * material_diffuse;
result = ambient + diffuse + specular + emissive;
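Putting that together, the lighting-pass fragment shader for a point light might look roughly like this. It's a sketch only; the sampler/uniform names are assumptions, and the light's ambient level and attenuation handling are deliberately simplified:

#version 330 core
uniform sampler2D uTarget0;   // RGB = diffuse albedo, A = specular power
uniform sampler2D uTarget1;   // RGB = normal, A = specular intensity
uniform sampler2D uTarget2;   // RGB = position, A = emissiveness
uniform vec3 uLightPos;       // world-space light position
uniform vec3 uLightColor;
uniform vec3 uEyePos;         // camera position, for the specular term
uniform vec2 uScreenSize;
out vec4 outColor;
void main() {
    vec2 uv = gl_FragCoord.xy / uScreenSize;
    vec4 t0 = texture(uTarget0, uv);
    vec4 t1 = texture(uTarget1, uv);
    vec4 t2 = texture(uTarget2, uv);
    vec3 albedo = t0.rgb;            float specPower    = t0.a;
    vec3 N = normalize(t1.rgb);      float specLevel    = t1.a;
    vec3 P = t2.rgb;                 float emissiveness = t2.a;
    vec3 L = normalize(uLightPos - P);
    vec3 V = normalize(uEyePos - P);
    vec3 R = reflect(-L, N);
    float ambientLevel  = 0.1;                 // assumed constant ambient level
    float diffuseLevel  = max(dot(N, L), 0.0);
    float specularLevel = max(dot(R, V), 0.0);
    vec3 ambient  = albedo * uLightColor * ambientLevel;
    vec3 diffuse  = albedo * uLightColor * diffuseLevel;
    vec3 specular = uLightColor * specLevel * pow(specularLevel, specPower);
    vec3 emissive = emissiveness * albedo;
    // Accumulated additively into the framebuffer (glBlendFunc(GL_ONE, GL_ONE)).
    outColor = vec4(ambient + diffuse + specular + emissive, 1.0);
}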

Clearly this is less flexible than 'full' Phong. One of the drawbacks of deferred rendering is that the materials/lighting model tends to be very rigid, since the inputs to the lighting stage (the g-buffer targets) have a fixed format. However, with a bit of cunning you can come up with a materials/lighting system which supports the gamut of materials that you want to render.

I'm not sure what dpadam450 means by "put the full screen diffuse to the screen." The output of the first stage is the g-buffer, which specifies the material properties. This is an input to the deferred stage, in which lights are rendered and the final, shaded pixels accumulate into the final output buffer (either the back buffer or another target for post-processing).

Also, if you're using OpenGL >= 3.0 you can use render targets of different formats (but not sizes).
Thanks for the reply.

I have only written shaders for simple Phong shading, using full RGBA values for the material properties. What equations do I need to use to compute shading using only those three values from the g-buffer? Are there any resources that can teach me this?

You also mention that in the lighting stage I need the material properties, but how can I pass these to my lighting shader if they are not stored in my g-buffer?

Thanks.
The material properties are stored in the g-buffer. The diffuse albedo, specular intensity/power and emissiveness values in the example I gave are what I was referring to when I said 'material properties.'

You can use the ordinary Phong formulae to compute the light_ambient_level/light_diffuse_level/light_specular_level factors and use them as per my previous post. The only difference is in which material properties are available from the g-buffer, so you'll notice that (in the example) I used the material's diffuse colour to modulate the ambient result (because there's no material ambient colour) and that the final specular value only uses the light's colour (because there's no material specular colour).
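Isolating just those per-light factors, the computation is the standard one (a sketch; the names are illustrative, and N/P come from the g-buffer):

// Standard Phong factors, computed per pixel in the lighting pass.
// N = normal, P = position, in the same space as lightPos/eyePos.
void phongLevels(vec3 N, vec3 P, vec3 lightPos, vec3 eyePos,
                 out float diffuseLevel, out float specularLevel) {
    vec3 L = normalize(lightPos - P);
    vec3 V = normalize(eyePos - P);
    vec3 R = reflect(-L, N);
    diffuseLevel  = max(dot(N, L), 0.0);  // light_diffuse_level
    specularLevel = max(dot(R, V), 0.0);  // light_specular_level, before the pow()
}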
I see, but then what are the values that I store in my g-buffer for lighting, and how can I compute them (specular intensity, power, emissiveness)?
If I understand correctly, specular intensity is calculated based on the light direction and viewpoint, so how can I compute this in my first pass? Shouldn't light properties only be available at the lighting stage?


As you can probably tell, I am very confused by all this. :(
Ahh, sorry if I've confused you more. Perhaps I should have made it clearer: the values in the g-buffer are per-pixel material properties. By specular intensity/power I mean the shininess/glossiness of the material, not the specular coefficient.
Thanks, I think I'm starting to understand now. So let me get this right:

1. The albedo texture should be a combination of the material's diffuse property and the object's texture?
2. No ambient material should be needed, since it is usually the same as the diffuse property; later I can add SSAO.
3. And now there will be no emissive or specular color, but instead a factor that will be multiplied by the light properties?

The only question that remains for me now is: if the materials are now like this, will the light sources' properties be similar, or do they still retain their RGBA values?
