Medo Mex

Lighting In Deferred Rendering


Hello,

 

I implemented deferred lighting and have a few questions:

 

1. How many lights should I calculate in each screen-space pass?

2. How do I handle materials in deferred lighting? I mean, each mesh can have a different material (Diffuse, Ambient, Emissive, Specular).

 

 

For Shadows:

3. In order to reduce shadow acne, what values should I use for the shadow map near plane, far plane, and bias?

 

Thanks,



1. As many as you can. With feature level 10/11 you can calculate lighting for every light that doesn't have shadows enabled. Basically, you decide the maximum number of lights you want to support in a single scene, create a structured buffer big enough to accommodate all of them, and before the lighting pass you map all your light data into it. (You can use a constant buffer to upload light data too; it can be faster, but it has a hard size limit and other problems as well.) This can be extended to handle shadows, which I did for my hobby project: you store the shadow maps in a texture array (no shadow map render target reuse, so it can be heavy on video memory, but on the other hand you're not forced to render every shadow map every frame, so there is room for clever optimizations around deciding when to refresh which shadow maps), store the map index in the light info structure, and with that you can index the texture array in the shader. All in all, the fewer passes you need to calculate your lighting, the better: fewer passes mean fewer blending operations for your GPU and fewer draw calls for your application.
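To make the structured-buffer idea concrete, here is a minimal HLSL sketch of a lighting pass looping over all lights in one go. The struct layout, register slots, and names (PointLight, gLights, gLightCount) are illustrative assumptions, not code from this thread:

// Hypothetical per-light data; one entry per light in the scene.
struct PointLight
{
    float3 position;
    float  range;
    float3 color;
    int    shadowMapIndex;   // slice into a shadow map Texture2DArray, -1 = none
};

StructuredBuffer<PointLight> gLights : register(t4);

cbuffer LightingParams : register(b0)
{
    uint   gLightCount;      // how many entries of gLights are valid this frame
    float3 gPadding;
};

float3 AccumulateDiffuse(float3 worldPos, float3 normal, float3 albedo)
{
    float3 result = 0.0;
    for (uint i = 0; i < gLightCount; ++i)
    {
        PointLight l   = gLights[i];
        float3 toLight = l.position - worldPos;
        float  dist    = length(toLight);
        float  atten   = saturate(1.0 - dist / l.range);   // simple linear falloff
        float  ndotl   = saturate(dot(normal, toLight / max(dist, 1e-5)));
        // A shadowed light would sample the shadow Texture2DArray at
        // slice l.shadowMapIndex here and multiply the result in.
        result += albedo * l.color * ndotl * atten;
    }
    return result;
}

With this, a single full-screen pass handles every non-shadowed light, which is exactly the "fewer passes, fewer blends" point above.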

2. What do you mean by material? On one hand, you store your BRDF's input parameters in the G-Buffer apart from the light data, so for example there is a channel somewhere in your G-Buffer containing the specular power of the visible object; writing different spec power values into that channel visibly produces different materials. On the other hand, there are cases where you need a specialized BRDF to render a material correctly (hair, for example). In that case you either store a material index in your G-Buffer, and branch on that in your lighting shader, or you put the material index into the stencil buffer and render the different materials in separate passes, with separate lighting shaders and appropriately configured stencil test values, so that each pass renders only the pixels that were marked in the stencil buffer with that material's index.
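For the first option, a hedged HLSL sketch of branching on a per-pixel material index might look like the following; GBufferSample, ShadeDefault, and ShadeHair are hypothetical names, and the hair path just stands in for a specialized BRDF such as Kajiya-Kay:

// Illustration of the "material index in the G-Buffer" approach.
struct GBufferSample
{
    float3 albedo;
    float3 normal;
    float  specPower;
    uint   materialId;   // e.g. decoded from an 8-bit G-Buffer channel
};

static const uint MATERIAL_DEFAULT = 0;
static const uint MATERIAL_HAIR    = 1;

float3 ShadeDefault(GBufferSample s, float3 lightDir, float3 lightColor)
{
    // Standard diffuse term; the usual specular math would be added here.
    return s.albedo * lightColor * saturate(dot(s.normal, lightDir));
}

float3 ShadeHair(GBufferSample s, float3 lightDir, float3 lightColor)
{
    // Placeholder: a real hair BRDF (e.g. Kajiya-Kay) would go here.
    return s.albedo * lightColor * 0.5;
}

float3 ShadePixel(GBufferSample s, float3 lightDir, float3 lightColor)
{
    [branch]
    if (s.materialId == MATERIAL_HAIR)
        return ShadeHair(s, lightDir, lightColor);
    return ShadeDefault(s, lightDir, lightColor);
}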


@LandonJerre:

 

in that case you either store a material index in your G-Buffer, and branch on that in your lighting shader

 

 

You mean I could upload all the scene materials to the shader using a constant buffer, each one with an index?

 

Well, I could have many materials, and I know the number of variables allowed in HLSL is limited. I also don't know how many materials I will have in each scene (it depends on the scene): I could have one material for the glass, another material for the building, another one for the vehicle, and so on...

 

What I thought is that I could pass each mesh's material through the G-Buffer by converting each float3 RGB color to a single float value; then, in the lighting shader, I can convert it back to float3 and apply the material.

 

So, it will be similar to the following:

// G-Buffer pixel shader output (sketch)
out.color  = ...;
out.normal = ...;

// Pack each float3 material color into a single float so that the whole
// material fits into one float4 render target.
float ambient  = RGBToFloat(ambientColor);
float diffuse  = RGBToFloat(diffuseColor);
float emissive = RGBToFloat(emissiveColor);
out.material   = float4(ambient, diffuse, emissive, specular);
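For reference, a possible implementation of the RGBToFloat / FloatToRGB helpers above (just a sketch, not tested code) could look like this. It assumes each channel is quantized to 8 bits and that the packed value lands in a full 32-bit float channel, e.g. one channel of DXGI_FORMAT_R32G32B32A32_FLOAT, since a 24-bit integer still fits exactly in a float's mantissa:

// Hypothetical packing helpers; 8 bits per channel, so the packed value
// (at most 2^24 - 1) is still exactly representable in a 32-bit float.
// Requires SM4+ for the integer ops.
float RGBToFloat(float3 rgb)
{
    uint3 q = (uint3)round(saturate(rgb) * 255.0);
    return (float)((q.r << 16) | (q.g << 8) | q.b);
}

float3 FloatToRGB(float packed)
{
    uint v = (uint)packed;
    return float3((v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF) / 255.0;
}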

What do you think about that idea?

 

For the shadows, to reduce shadow acne, do you know what values I should use for the shadow map near plane, far plane, and bias?

 

Thanks,


The basic idea of deferred rendering is that you store the surface data in the G-buffers: position/normal/albedo/roughness/metalness/etc. The biggest issue is that you are usually limited to one material type unless you add material IDs and either additional passes or if/else blocks to your lighting pass (which is doable, and numerous engines seem to support it). Ultimately, everything you need for lighting is stored in the initial deferred pass, then used during the lighting pass (when you only render a quad over the screen).

 

You shouldn't need to pack your data like that either; you set up your G-buffers to store what you need and just write to those buffers during the initial pass. Keep in mind that if your albedo buffer is an RGB texture, then packing the entire RGB into a float and then into a single channel like you suggest means a major loss of precision. If the source value is 24-bit RGB and the material output is a 32-bit RGBA texture, then you're packing the entire 24-bit RGB value into a single 8-bit channel. That's entirely possible, but it'll result in a lot of banding because of the precision loss during the conversion. It'd be better to have a 32-bit albedo buffer and store the RGB directly (then use the A channel for something else if needed). You can play around with different texture formats to optimize and minimize the G-buffers; there are quite a few articles already on that subject.
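To illustrate "set up your G-buffers to store what you need", here is one hedged example of an MRT layout and the initial-pass pixel shader that fills it. The formats, channel assignments, and resource names are assumptions for the sketch, not a recommendation (and position is typically reconstructed from depth rather than stored):

// Example layout (one of many possibilities):
//   RT0 (R8G8B8A8_UNORM):    albedo.rgb        | specular intensity
//   RT1 (R10G10B10A2_UNORM): normal.xyz packed | unused
//   RT2 (R8G8B8A8_UNORM):    emissive.rgb      | specular power
Texture2D    gAlbedoTex : register(t0);
SamplerState gSampler   : register(s0);

cbuffer MaterialParams : register(b1)
{
    float3 gEmissive;
    float  gSpecIntensity;
    float  gSpecPower;       // exponent, normalized into 8 bits below
};

struct GBufferOutput
{
    float4 albedoSpec  : SV_Target0;
    float4 normal      : SV_Target1;
    float4 emissivePow : SV_Target2;
};

GBufferOutput GBufferPS(float3 normalWS : NORMAL, float2 uv : TEXCOORD0)
{
    GBufferOutput o;
    float3 albedo  = gAlbedoTex.Sample(gSampler, uv).rgb;
    o.albedoSpec   = float4(albedo, gSpecIntensity);
    o.normal       = float4(normalWS * 0.5 + 0.5, 0.0);     // [-1,1] -> [0,1]
    o.emissivePow  = float4(gEmissive, gSpecPower / 255.0); // assumes power <= 255
    return o;
}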

 

For shadows, it really depends on what type of method you are using and how bad the shadow acne is. Ideally you want the most precision available for the shadow maps, so the near/far planes should be adjusted to the range of the light (there's no reason to set the far plane farther away, since the lighting equation should result in 0 there because of falloff). That won't necessarily solve all the acne issues, though, depending on what type of shadows you are using. There are a lot of other ways to reduce the acne if it's still an issue.


@xycsoscyx: The way I'm thinking of doing it is to pass the material to the shader when I'm rendering the mesh; then I can calculate ambient and emissive in the first pass, and write diffuse and specular to the G-Buffer to calculate them in the deferred pass.

 

So the G-Buffer will look like the following:

- Color

- Normal

- Position

- Diffuse

- Specular

 

Is that approach efficient?



No. Overdrawn pixels should be as algorithmically simple as possible.
And you don't need a separate buffer for material IDs and branches: use the stencil buffer to mark each material, and stencil testing to avoid branches.
Look at existing G-buffer layouts to decide what is best for you. The goal is to keep memory and bandwidth usage down.


L. Spiro


 

In order to reduce shadow acne, what values should I use for the shadow map near plane, far plane, and bias?

 

I'm not sure there is a clear-cut answer to that question. From what I remember, a constant depth-map bias is really scene dependent: a constant bias that yields acceptable results in one scene may lead to acne, or to peter-panning, in another. Finding a bias that works for your scene, or for a particular object, will usually involve some tweaking.
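As a starting point for that tweaking, one common approach is to combine a small constant bias with a slope-scaled term, so surfaces nearly parallel to the light direction get a larger offset. This is a minimal sketch only; the constants, the resource names (gShadowMap, gShadowCmp), and the scaling are assumptions to tune per scene, not values from this thread:

Texture2D              gShadowMap : register(t6);
SamplerComparisonState gShadowCmp : register(s1);

// shadowClipPos: pixel position transformed by the light's view-projection.
// ndotl: dot(surface normal, direction to the light), used for slope scaling.
float SampleShadow(float4 shadowClipPos, float ndotl)
{
    float3 proj = shadowClipPos.xyz / shadowClipPos.w;
    float2 uv   = proj.xy * float2(0.5, -0.5) + 0.5;   // NDC -> texture space

    // Constant bias plus a slope-scaled term: grazing angles get more offset.
    float bias = 0.0005 + 0.002 * (1.0 - saturate(ndotl));

    // Hardware PCF compare: 1 = lit, 0 = occluded (filtered in between).
    return gShadowMap.SampleCmpLevelZero(gShadowCmp, uv, proj.z - bias);
}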

 

http://jcgt.org/published/0003/04/08/paper-lowres.pdf

 

This article goes over how to create an adaptive depth bias for every pixel in your scene based on your geometry. It is something I've only just found, though, and have never implemented myself; I'm also not too sure how it would fit into the scope of a deferred rendering system.

 

Marcus
