
Deferred shading - confusion


15 replies to this topic

#1 Mohanddo   Members   -  Reputation: 109


Posted 26 August 2011 - 01:17 PM

Hi. I am trying to implement deferred shading in my engine and I am slightly confused. I have already set up multiple render targets, and this is what I have so far:

1. Bind the deferred shader and fill 3 textures with position, normal and texture color.
2. Bind the lighting shader and the FBO textures; the textures are available in my fragment shader and I can output them.

An image of how my textures look from rendering a simple terrain and an axe is attached. Is there anything wrong with them? (The top right is just the ambient property.)

To actually implement the lighting, I have a few questions that I would be glad if anyone could clarify for me.

1. Do I output the scene once in the deferred pass and then blend the lighting pass onto it, or is all of the fragment output done in the lighting pass?
2. In the lighting pass, what do I have to draw? For a point light, do I have to use a cone 'proxy shape', or can I compute shading for each pixel based on the light's area of effect?
3. Where do materials come in in deferred shading? Do I have to render the material properties (ambient, diffuse, specular, emissive) of every object to a texture for every pixel?

In addition, are there any resources on deferred shading that you can recommend? I have read the major ones (Guerrilla Games, NVIDIA, GameDev image-space lighting).

Thanks in advance!

Attached Thumbnails

  • def.png



#2 johnchapman   Members   -  Reputation: 550


Posted 26 August 2011 - 02:29 PM

Doesn't look as if there's anything wrong with your g-buffer textures. As regards your questions:

1) All of the fragment output can be done in the lighting pass: your g-buffer is an input to the lighting stage, and you accumulate 'lit' pixels in the final framebuffer.
2) You can draw anything you like in the lighting pass, as long as it covers the pixels which need to be lit. You could draw a full screen-aligned quad for every light, but that's pretty inefficient (you'll end up shading a lot of pixels which don't need to be shaded). Drawing the light volumes is the optimal solution: a sphere for point lights and a cone for spotlights is generally the way to go.
3) You'll need to write material properties to a target in the g-buffer. How you do this depends on what your lighting stage requires and the sort of material properties you need to support. A simple format might be something like R = specular level, G = specular power, B = emissiveness, A = AO factor.
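For example, reading such a material target back in the lighting shader might look like this (a GLSL sketch; the name gMaterial and this particular channel assignment are assumptions, not a standard):

```glsl
// Lighting pass: unpack the example material target (sketch).
uniform sampler2D gMaterial;
varying vec2 texCoord;

void main()
{
    vec4 m = texture2D(gMaterial, texCoord);
    float specularLevel = m.r; // how strong specular highlights are
    float specularPower = m.g; // shininess; a float target can store this directly
    float emissiveness  = m.b;
    float aoFactor      = m.a; // ambient occlusion factor
    // ... feed these into the lighting equation ...
    gl_FragColor = vec4(vec3(aoFactor), 1.0); // placeholder output
}
```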

Here are a few links to some deferred rendering resources (you may have already seen some of them):

http://developer.dow...red_Shading.pdf
http://www.talula.de...rredShading.pdf
http://bat710.univ-lyon1.fr/~jciehl/Public/educ/GAMA/2007/Deferred_Shading_Tutorial_SBGAMES2005.pdf
http://developer.amd...StarCraftII.pdf

#3 dpadam450   Members   -  Reputation: 945


Posted 26 August 2011 - 04:58 PM

The first thing you have to do, right before you draw lights, is put the full-screen diffuse on the screen. Then for each light, such as a point light, you have to draw a physical sphere, but your shader isn't drawing the sphere; it's just using it to do lighting on the diffuse you already drew to the screen. It just continually blends with what is on screen, and your screen will fill up as more objects are drawn.

#4 Mohanddo   Members   -  Reputation: 109


Posted 26 August 2011 - 06:05 PM

Thank you very much for your answers

Doesn't look as if there's anything wrong with your g-buffer textures. As regards your questions:

1) All of the fragment output can be done in the lighting pass: your g-buffer is an input to the lighting stage, and you accumulate 'lit' pixels in the final framebuffer.
2) You can draw anything you like in the lighting pass, as long as it covers the pixels which need to be lit. You could draw a full screen-aligned quad for every light, but that's pretty inefficient (you'll end up shading a lot of pixels which don't need to be shaded). Drawing the light volumes is the optimal solution: a sphere for point lights and a cone for spotlights is generally the way to go.
3) You'll need to write material properties to a target in the g-buffer. How you do this depends on what your lighting stage requires and the sort of material properties you need to support. A simple format might be something like R = specular level, G = specular power, B = emissiveness, A = AO factor.

Here are a few links to some deferred rendering resources (you may have already seen some of them):

http://developer.dow...red_Shading.pdf
http://www.talula.de...rredShading.pdf
http://bat710.univ-lyon1.fr/~jciehl/Public/educ/GAMA/2007/Deferred_Shading_Tutorial_SBGAMES2005.pdf
http://developer.amd...StarCraftII.pdf


Good resources, thanks. The only thing I still don't understand is how to pack an RGBA value into one component. My render targets are of the format GL_RGBA16F, so how can I pack and unpack them into this format in GLSL?



The first thing you have to do, right before you draw lights, is put the full-screen diffuse on the screen. Then for each light, such as a point light, you have to draw a physical sphere, but your shader isn't drawing the sphere; it's just using it to do lighting on the diffuse you already drew to the screen. It just continually blends with what is on screen, and your screen will fill up as more objects are drawn.


So in the deferred pass stage, will the output be the diffuse color?

#5 johnchapman   Members   -  Reputation: 550


Posted 27 August 2011 - 03:20 AM

Although it's probably possible to pack an RGBA value into a single two-byte component, you'd lose a lot of precision, since you're halving the number of bits representing each component. I can only assume that you're thinking in terms of the way that 'full' Phong lighting treats materials, with separate colour values for ambient/diffuse/specular/emissive. This is overkill, especially for a deferred renderer, where you preferably want to limit the memory cost of the g-buffer. The way in which you slim down the material properties depends on what kinds of materials you want to render and how much 'space' (i.e. unused components) you've got in the g-buffer. You could use a whole render target to store material properties, or elbow them into any unused components on other targets (most of the references do this).

So a simple g-buffer layout might be something like this:
Target0: RGB = diffuse albedo, A = specular power
Target1: RGB = normal xyz, A = specular intensity
Target2: RGB = position xyz, A = emissiveness
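Writing that layout out in the g-buffer pass might be sketched like this in GLSL (the varyings and uniforms are placeholders for whatever your engine provides):

```glsl
// G-buffer pass fragment shader (sketch) for the example layout above.
uniform sampler2D diffuseMap;
uniform float specularPower;     // material shininess
uniform float specularIntensity;
uniform float emissiveness;

varying vec3 viewPosition; // view-space position from the vertex shader
varying vec3 viewNormal;   // view-space normal
varying vec2 texCoord;

void main()
{
    // One write per render target, matching the layout:
    gl_FragData[0] = vec4(texture2D(diffuseMap, texCoord).rgb, specularPower);
    gl_FragData[1] = vec4(normalize(viewNormal), specularIntensity);
    gl_FragData[2] = vec4(viewPosition, emissiveness);
}
```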

You render these values out at the g-buffer stage. Then, at the lighting stage, you clear the output framebuffer and render the lights. Each light you render taps into the g-buffer targets to get the required data. Obviously, since the material properties have been slimmed down, you'll need to use a modified lighting equation. So, for the example g-buffer, you might do:

ambient = material_diffuse * light_color * light_ambient_level;
diffuse = material_diffuse * light_color * light_diffuse_level;
specular = light_color * material_specular_level * light_specular_level ^ material_specular_power;
emissive = material_emissiveness * material_diffuse;
result = ambient + diffuse + specular + emissive;
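As a GLSL sketch, that per-light computation might read as follows (variable names are placeholders; the light_*_level factors come from the usual Phong terms):

```glsl
// Per-light shading using the slimmed-down material properties (sketch).
// material_* values come from the g-buffer; light_* values are uniforms
// or computed per pixel from the usual Phong terms.
vec3 shade(vec3 material_diffuse, float material_specular_level,
           float material_specular_power, float material_emissiveness,
           vec3 light_color, float light_ambient_level,
           float light_diffuse_level, float light_specular_level)
{
    vec3 ambient  = material_diffuse * light_color * light_ambient_level;
    vec3 diffuse  = material_diffuse * light_color * light_diffuse_level;
    vec3 specular = light_color * material_specular_level
                  * pow(light_specular_level, material_specular_power);
    vec3 emissive = material_emissiveness * material_diffuse;
    return ambient + diffuse + specular + emissive;
}
```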

Clearly this is less flexible than 'full' Phong. One of the drawbacks of deferred rendering is that the materials/lighting model tends to be very rigid, since the inputs to the lighting stage (the g-buffer targets) have a fixed format. However, with a bit of cunning you can come up with a materials/lighting system which supports the gamut of materials that you want to render.

I'm not sure what dpadam450 means by "put the full screen diffuse to the screen." The output of the first stage is the g-buffer, which specifies the material properties. This is an input to the deferred stage, in which lights are rendered and the final, shaded pixels accumulate into the final output buffer (either the back buffer or another target for post-processing).

Also, if you're using OpenGL >= 3.0 you can use render targets of different formats (but not sizes).

#6 Mohanddo   Members   -  Reputation: 109


Posted 27 August 2011 - 08:52 AM

Thanks for the reply.

I have only written shaders for simple Phong shading using full RGBA values for material properties. What equations do I need to use to compute shading using only those 3 values from the g-buffer? Are there any resources that can teach me this?

You also mention that in the lighting stage I need the material properties, but how can I pass these to my lighting shader if they are not stored in my g-buffer?

thanks

#7 johnchapman   Members   -  Reputation: 550


Posted 27 August 2011 - 09:21 AM

The material properties are stored in the g-buffer. The diffuse albedo, specular intensity/power and emissiveness values in the example I gave are what I was referring to when I said 'material properties.'

You can use the ordinary Phong formulae to compute the light_ambient_level/light_diffuse_level/light_specular_level factors and use them as per my previous post. The only difference is in which material properties are available from the g-buffer, so you'll notice that (in the example) I used the material's diffuse colour to modulate the ambient result (because there's no material ambient colour) and that the final specular value only uses the light's colour (because there's no material specular colour).
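As a sketch of what the ordinary Phong formulae look like in GLSL (N, L and V are the usual unit vectors; the ambient level is just a constant per light):

```glsl
// Standard Phong factors for one light (sketch). N = surface normal,
// L = direction from the surface to the light, V = direction to the
// viewer, all normalized.
float lightDiffuseLevel(vec3 N, vec3 L)
{
    return max(dot(N, L), 0.0); // Lambertian term
}

float lightSpecularLevel(vec3 N, vec3 L, vec3 V)
{
    vec3 R = reflect(-L, N);    // mirror direction of the incoming light
    return max(dot(R, V), 0.0); // raised to the specular power afterwards
}
```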

#8 Mohanddo   Members   -  Reputation: 109


Posted 27 August 2011 - 01:41 PM

I see, but then what are the values that I store in my g-buffer for lighting, and how can I compute them (specular intensity, power, emissiveness)?
If I understand correctly, specular intensity is calculated based on the light direction and viewpoint, so how can I compute this in my first pass? Shouldn't light properties only be available at the lighting stage?


As you can probably tell, I am very confused by all this.

#9 johnchapman   Members   -  Reputation: 550


Posted 27 August 2011 - 02:13 PM

Ahh, sorry if I've confused you more. Perhaps I should have made it clearer: the values in the g-buffer are per-pixel material properties. By specular intensity/power I mean the shininess/glossiness of the material, not the specular coefficient.

#10 Mohanddo   Members   -  Reputation: 109


Posted 27 August 2011 - 06:06 PM

Thanks, I think I'm starting to understand now. So let me get this right:

1. The albedo texture should be a combination of the material diffuse property and the object's texture?
2. No ambient material should be needed, since it is usually the same as the diffuse property. Later I can add SSAO.
3. And now there will be no emissive or specular color, but instead a factor that will be multiplied by the light properties?

The only question that remains for me now is: if the materials are now like this, will the light sources' properties be similar, or do they still retain their RGBA values?

#11 dpadam450   Members   -  Reputation: 945


Posted 27 August 2011 - 11:16 PM

The only real things you need to get started are what you already have: position, normal, and diffuse (texture) color.

First, your diffuse has all perfectly lit green grass. So if you have sunlight, you should calculate that first and put it in your diffuse buffer as well. All other lights (lampposts, vehicles, etc.) are contributing (adding) EXTRA light to the scene. So first you put your original diffuse on the screen (with or without sunlight), then you use additive blending for each extra light. Again, the keyword is add/extra (additive). For each light you draw a physical model of the light's area of effect: for a lamppost it might be a 3D cone, for a lightbulb a sphere. Hopefully you already knew that, because if not, you're stepping too far ahead.

After that you should have basic lighting. With materials like metal you have a certain specular power, as well as certain spots of the metal that have different amounts of specular. Imagine a piece of metal with rust: the rusty portions have no specular, and the clean ones do, because they reflect light back. On top of that, when light hits the surface it has a specular power: either it is really focused (hard metal) or less focused (soft). So you could (and eventually will) want to have those properties stored in the color, normal, or position targets, wherever there's room.

Any property/material that affects light from the object's standpoint is saved. The light properties are just used when you draw those 3D models.
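A minimal sketch of this accumulation for one point light, assuming the application draws the sphere volume with additive blending already enabled (glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ONE)); the uniform and sampler names are placeholders:

```glsl
// Point-light pass fragment shader (sketch), drawn as a sphere volume.
// Additive blending means each light adds to what is already on screen.
uniform sampler2D gPosition; // view-space position from the g-buffer
uniform sampler2D gNormal;
uniform sampler2D gDiffuse;
uniform vec3 lightPosition;  // view-space light position
uniform vec3 lightColor;
uniform float lightRadius;
uniform vec2 screenSize;

void main()
{
    vec2 uv = gl_FragCoord.xy / screenSize; // g-buffer lookup for this pixel
    vec3 P = texture2D(gPosition, uv).xyz;
    vec3 N = normalize(texture2D(gNormal, uv).xyz);
    vec3 albedo = texture2D(gDiffuse, uv).rgb;

    vec3 toLight = lightPosition - P;
    float atten = max(1.0 - length(toLight) / lightRadius, 0.0); // linear falloff
    float NdotL = max(dot(N, normalize(toLight)), 0.0);

    gl_FragColor = vec4(albedo * lightColor * NdotL * atten, 1.0);
}
```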

#12 johnchapman   Members   -  Reputation: 550


Posted 28 August 2011 - 03:57 AM

First, your diffuse has all perfectly lit green grass. So if you have sunlight, you should calculate that first and put it in your diffuse buffer as well.


I think this may potentially just cause more confusion, and in any case I'd say it was a good idea to maintain complete separation of the g-buffer/lighting stages. Are you talking about having a 'sun occlusion' factor (stored in the alpha channel or something), or actually pre-multiplying the diffuse component? The former doesn't work for dynamic objects, and the latter doesn't work at all.

1. The albedo texture should be a combination of the material diffuse property and the object's texture?
2. No ambient material should be needed, since it is usually the same as the diffuse property. Later I can add SSAO.
3. And now there will be no emissive or specular color, but instead a factor that will be multiplied by the light properties?

The only question that remains for me now is: if the materials are now like this, will the light sources' properties be similar, or do they still retain their RGBA values?


1. I would say that an object's texture is its material's diffuse property, i.e. the colour which modulates any reflected, diffuse light. So your grass texture represents the final colour you'd see if there were a 100% white light shining on it.
2. You could have a separate ambient colour if you wanted, but you'd need another set of RGB components in the g-buffer. As you said, however, the ambient reflectance of a material is usually the same as its diffuse, so you can just use the diffuse property to modulate any ambient light.
3. As with ambient, you could have separate emissive/specular colours but, again, you'd need a bigger g-buffer. For the colour of a specular reflection it's usually good enough to simply use the light's colour.

The light source properties can be as many or as few as you need, since these will be passed directly into the shader and not stored in any intermediate buffers. So you could have RGB colours for separate ambient/diffuse/specular light, or just a single colour for all three.

I should point out that the example g-buffer setup I've given is pretty arbitrary. Basically you need the g-buffer to store whatever you need in order to do lighting. If you look through the various references you'll see lots of different g-buffer configurations; none of them is the right or wrong way to do it. It entirely depends on what properties you need during the deferred lighting pass. Understanding the lighting model you're going to use is key; make sure you're on top of Phong and know what all the inputs are doing.

#13 Mohanddo   Members   -  Reputation: 109


Posted 28 August 2011 - 06:39 AM

The only real things you need to get started are what you already have: position, normal, and diffuse (texture) color.

First, your diffuse has all perfectly lit green grass. So if you have sunlight, you should calculate that first and put it in your diffuse buffer as well. All other lights (lampposts, vehicles, etc.) are contributing (adding) EXTRA light to the scene. So first you put your original diffuse on the screen (with or without sunlight), then you use additive blending for each extra light. Again, the keyword is add/extra (additive). For each light you draw a physical model of the light's area of effect: for a lamppost it might be a 3D cone, for a lightbulb a sphere. Hopefully you already knew that, because if not, you're stepping too far ahead.

After that you should have basic lighting. With materials like metal you have a certain specular power, as well as certain spots of the metal that have different amounts of specular. Imagine a piece of metal with rust: the rusty portions have no specular, and the clean ones do, because they reflect light back. On top of that, when light hits the surface it has a specular power: either it is really focused (hard metal) or less focused (soft). So you could (and eventually will) want to have those properties stored in the color, normal, or position targets, wherever there's room.

Any property/material that affects light from the object's standpoint is saved. The light properties are just used when you draw those 3D models.


Yes, I already understand all of this except the sun lighting. Do you mean to calculate the Phong directional light just as you would in a forward renderer, and do all lighting calculations on that instead? If so, is there an advantage to this compared to a full-screen quad at the lighting stage?

1. I would say that an objects texture is its material's diffuse property, i.e. the colour which modulates any reflected, diffuse light. So your grass texture represents the final colour you'd see if there was a 100% white light shining on it.


OK, but in the case where the object I am rendering does not have a texture of its own, the material diffuse will be in this buffer, I presume?




#14 johnchapman   Members   -  Reputation: 550


Posted 28 August 2011 - 07:47 AM

OK, but in the case where the object I am rendering does not have a texture of its own, the material diffuse will be in this buffer, I presume?


Okay, I see what you mean now. Yes, it will just be the diffuse colour in this case.

#15 Mohanddo   Members   -  Reputation: 109


Posted 28 August 2011 - 10:18 AM

OK, I think I have enough info to implement it now. Thanks a lot for your help and patience!

#16 linsnos   Members   -  Reputation: 100


Posted 24 October 2011 - 01:28 AM

Edit: OK, sorry for the silly question. The light_x_level is of course calculated from the viewing and light directions, etcetera.

ambient = material_diffuse * light_color * light_ambient_level;
diffuse = material_diffuse * light_color * light_diffuse_level;
specular = light_color * material_specular_level * light_specular_level ^ material_specular_power;
emissive = material_emissiveness * material_diffuse;
result = ambient + diffuse + specular + emissive;


Thank you for a great explanation!

The "reduction" of full Phong was logical and implied in the references I have read, but it was good to have someone mention it, giving me confidence to proceed with my code.
In your "property mixing" example, where lights and materials multiply, you have separate factors for light color and light level. Why? :)
I am curious how these factors are calculated. Personally I just have light.diffuse etc.

Best regards
Tobias



