maya18222

Where to learn about deferred shading?


Recommended Posts

Are there any good books with chapters on the subject? I've seen lots of PDFs and PowerPoint presentations on the features and differences from forward rendering, but not so much on actual implementation.

I think I'll be OK implementing the lighting and the materials: render the whole scene to the G-buffer, then draw spheres for omnis, cones for spotlights, and re-render the whole screen as a quad for directional lights. Straight away, this suggests that directional lights are going to be quite expensive.

I'm also thinking that, since we only have two pieces of geometry for the omni and cone lights, we could probably render all the lights in the scene in two draw calls with instancing. Which leads me to another question: how can you perform frustum culling when using instancing? Just use a dynamic buffer and fill it each frame?

I'm also unsure about the following:

- How to tie in shadow mapping
- Parallax mapping
- Glow

The glow, I believe, can just be an extra channel that marks pixels needing glow. But then what about glow colour and strength? The channels really seem to add up.

I assume parallax mapping will have to be baked into the diffuse texture channel, but then doesn't that go against the whole point of deferred shading, as these pixels could be overwritten many times?

Thanks
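On the instancing question, one common approach (a sketch, not the only way) is exactly what's described above: cull each light's bounding sphere against the camera frustum on the CPU every frame, and refill a dynamic instance buffer with only the survivors. A toy version of that test, using hypothetical per-light records:

```python
def sphere_in_frustum(center, radius, planes):
    """Conservative sphere-vs-frustum test: a sphere is culled only if
    it lies entirely behind one of the frustum planes. Each plane is
    (nx, ny, nz, d) with n.p + d >= 0 meaning 'inside'."""
    for (nx, ny, nz, d) in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:
            return False
    return True

def build_instance_buffer(lights, planes):
    """Refill a dynamic instance buffer each frame with only the
    lights whose bounding spheres survive the frustum test."""
    return [(l["pos"], l["radius"], l["color"])
            for l in lights
            if sphere_in_frustum(l["pos"], l["radius"], planes)]
```

The resulting list is what you would upload to the dynamic vertex/instance buffer before issuing the single instanced draw call per light type.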

I'm not really sure where I read about it. I'm sure there's an abundance of info on Google.

Basically, to decide how to implement it you need to understand the two stages of deferred shading: geometry/material processing, and light processing. In the first stage, the geometry is rendered as normal. The only quirk is that you store a whole lot more information, rather than just a resultant color, in what is called the g-buffer.

The g-buffer, for each screen pixel, at minimum contains:
Position, Normal, Material Diffuse Color, Material Emissive Color

The g-buffer is then stored in a texture which is passed to all the light shaders during the lighting stage.
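A minimal CPU-side model of that layout might look like the sketch below. The channel packing shown (position, normal, and diffuse-plus-emissive across three render targets) is just one illustrative choice, not a prescribed format; real renderers pick packings to suit their texture formats.

```python
def write_gbuffer_pixel(position, normal, diffuse, emissive):
    """Model of the geometry pass output for one pixel.
    RT0: view-space position (many engines store depth instead)
    RT1: surface normal
    RT2: rgb = diffuse albedo, a = emissive intensity"""
    rt0 = tuple(position)
    rt1 = tuple(normal)
    rt2 = tuple(diffuse) + (emissive,)
    return rt0, rt1, rt2

def read_gbuffer_pixel(rt0, rt1, rt2):
    """What a light shader reads back during the lighting stage."""
    return {"position": rt0, "normal": rt1,
            "diffuse": rt2[:3], "emissive": rt2[3]}
```

The key property is that the lighting stage can recover everything it needs from these buffers alone, with no access to the original geometry.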

In the most basic sense, starting with a blank screen buffer, a full screen quad is drawn for each light, adding the results to the buffer. The light shader references the g-buffer and applies the normal lighting computation to determine the color contribution for that light. This buffer is then presented to the screen.
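The additive accumulation can be sketched per pixel like this, assuming simple Lambert lighting (a toy model; the real work happens in a pixel shader with additive blending enabled):

```python
def lambert(normal, light_dir):
    """Diffuse term: clamped dot product of normal and light direction."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(n_dot_l, 0.0)

def shade_pixel(gpixel, lights):
    """One additive pass per light, as the full-screen-quad approach
    does: each light reads the g-buffer and adds its contribution."""
    result = [0.0, 0.0, 0.0]
    for light in lights:
        ndl = lambert(gpixel["normal"], light["dir"])
        for i in range(3):
            result[i] += gpixel["diffuse"][i] * light["color"][i] * ndl
    return result
```

In a real renderer each iteration of that loop is a separate draw call blended into the accumulation buffer, not a CPU loop.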

That's where the beauty of deferred lighting lies. The g-buffer is only created once for any number of lights. That means you only incur the cost of your 2-million-polygon scene once, even if you have 500 lights, and each of those lights only incurs the cost of processing a single screen's worth of pixels. The lighting cost is unaffected by overdraw. The vertex and material processing cost is still subject to overdraw, though, in the geometry stage.

An optimization can be made to draw only the portion of the screen that a particular light will affect, making a ton of really small lights cost about the same as one huge light.
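One cheap way to get that screen portion (a sketch under assumptions: symmetric perspective projection, view space looking down +z, `focal` being the projection scale) is a conservative scissor rectangle computed by projecting the corners of the light's view-space bounding cube, which over-estimates the sphere but is safe:

```python
def light_screen_rect(center, radius, focal, width, height, near=0.1):
    """Conservative screen-space rectangle covering a point light.
    Falls back to the full screen when the light's bounding volume
    reaches the near plane, where projection breaks down."""
    cx, cy, cz = center
    if cz - radius <= near:
        return 0, 0, width, height
    xs, ys = [], []
    for dx in (-radius, radius):
        for dy in (-radius, radius):
            for dz in (-radius, radius):
                x, y, z = cx + dx, cy + dy, cz + dz
                # perspective divide, then map [-1, 1] NDC to pixels
                xs.append((x * focal / z * 0.5 + 0.5) * width)
                ys.append((y * focal / z * 0.5 + 0.5) * height)
    return (max(0, min(xs)), max(0, min(ys)),
            min(width, max(xs)), min(height, max(ys)))
```

The rectangle can be applied with a scissor test, or used to size the quad drawn for that light.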

So, shadow mapping is a lighting-stage feature. The usual computation is done in the light shader, referencing a depth map; the only difference is that the per-pixel information is read from the g-buffer rather than coming from a traditional vertex shader.

Parallax mapping is a material effect. It would happen in the pixel shader of the g-buffer creation stage. Those pixel shaders would not contain any lighting computation; they simply dump all the required parameters into the g-buffer.

Glow is stored in the emissive channel. This channel skips lighting and passes straight through to the screen. You could then run a "light" shader in the lighting stage which reads only the emissive channel and blurs it onto the screen for standard bloom.

Yes, the channels really add up, creating HUGE g-buffers. That is the major penalty of deferred shading. If your computer can handle the huge textures though it's entirely worth it.

Note that there are only a limited number of texture formats available, none of which have 16 or so channels, so multiple render targets (MRT) are used to write to more than one screen buffer (each bound to a texture) at a time, giving you the required additional channels.

Also, one way to reduce the number of channels required is to use light material IDs. For example, each pixel may carry a generic light material ID, and each light shader can then interpret that ID however it sees fit. In your glow light shader you might pass in an array of glow parameters; the ID is then used to index the array and obtain the glow parameters for that pixel.
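A sketch of that lookup, with a hypothetical parameter table (the IDs, colours, and strengths here are invented for illustration):

```python
# Hypothetical glow-parameter table, indexed by a per-pixel material ID
# stored in a single g-buffer channel.
GLOW_PARAMS = [
    {"color": (0.0, 0.0, 0.0), "strength": 0.0},  # ID 0: no glow
    {"color": (1.0, 0.5, 0.0), "strength": 2.0},  # ID 1: orange glow
    {"color": (0.2, 0.8, 1.0), "strength": 0.5},  # ID 2: faint cyan
]

def glow_contribution(material_id, emissive):
    """What the glow light shader would do: dereference the table
    with the pixel's ID and scale by the emissive intensity."""
    p = GLOW_PARAMS[material_id]
    return tuple(c * p["strength"] * emissive for c in p["color"])
```

The table would be passed to the shader as uniform constants, so one g-buffer channel stands in for what would otherwise be four or more.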

[Edited by - bzroom on November 1, 2009 3:14:23 AM]

Introductory article and source code/executable:
http://www.gamedev.net/reference/programming/features/imgSpaceLight/default.asp

I also felt I should mention that the absolute minimum amount of data necessary is smaller than bzroom mentioned. In the case of "Afterlights", you just need depth, normals, luminance of the albedo, and a light buffer. Depending on how you pack it, you can have just 2-3 render targets to cover all your materials.

If you wish to learn more, I wrote an article on the topic:
http://n00body.squarespace.com/journal/2009/9/6/afterlights.html

[Edited by - n00body on November 1, 2009 8:02:38 AM]

Quote:

Parrallax mapping is a material effect. It would happen in the pixel shader of the g-buffer creation stage. Those pixel shaders would not contain any lighting computation, and simply dump all the required parameters into the g-buffer.



I'm not sure how you can defer parallax mapping, as you need:

- the TBN basis, or just T and B and a cross product for N
- texture coordinates

and then, obviously, access to the actual height map.

Quote:

So, shadow mapping is a lighting stage feature. The normal computation is done in the light shader, a depth map is referenced. But rather than the pixel information coming from a traditional vertex shader it will be read from the g-buffer.




Hmmm, I take it you mean with an occlusion map as well as a depth map?

I said that parallax mapping must be done in the g-buffer creation stage. It is not deferrable.

I'm not sure why you'd need an occlusion map to do shadow mapping.

I was referring to this method

http://mynameismjp.wordpress.com/samples-tutorials-tools/deferred-shadow-maps-sample/

Are you referring to the following method?


render all g-buffer data

for (each light)
{
    render a depth buffer of the scene from the light's view
    blend this light's results into the back buffer,
        adding shadow info using the depth map above
}



As I said, I'm not really sure how shadow mapping is done with deferred rendering, so the above may be wrong.

Also, is it common to store the normals in world space or screen space? The way I see it, if they are stored in screen space you only need two channels and can compute z, as z will never be negative. Whereas if you use world space you do need three channels, but you don't have to perform any transforms.
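The two-channel reconstruction looks like this (a sketch; note the caveat that the "z is never negative" assumption can break for normals at grazing angles under perspective projection, which is why some renderers use sphere-map or similar encodings instead):

```python
import math

def pack_normal_xy(normal):
    """Store only x and y of the normal in two g-buffer channels."""
    return normal[0], normal[1]

def unpack_normal_xy(nx, ny):
    """Reconstruct z from the unit-length constraint, assuming the
    normal faces the camera so z >= 0."""
    z_sq = max(0.0, 1.0 - nx * nx - ny * ny)
    return nx, ny, math.sqrt(z_sq)
```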

Share this post


Link to post
Share on other sites
That's correct; there are advantages to storing the positions and normals in different formats. It just depends on what the majority of your effects and shaders are going to need. The space trade-off may not be worth it if you have to do the conversion in every shader, but the space constraint may ultimately be the limiting factor, leaving you no choice. Every deferred renderer is different.

Your depth-map generation stage is exactly the same as it would be for a forward renderer. You don't want to fill up a giant g-buffer only to use the depth value, so you use a small luminance texture or similar for depths, and standard forward shaders to create it.

The thing about a deferred renderer is that you always need two renderers: your traditional forward-shading renderer and your deferred one. You'll use the forward renderer for depth maps, UI, transparency, and possibly some off-screen rendering where deferred shading is not appropriate or efficient.
