# Lighting In Deferred Rendering


## Recommended Posts

Hello,

I implemented deferred lighting and have a few questions:

1. How many lights should I calculate in each screen-space pass?

2. How do I handle materials in deferred lighting? Each mesh can have a different material (diffuse, ambient, emissive, specular).

3. To reduce shadow acne, what values should I use for the shadow map near plane, far plane, and bias?

Thanks,


##### Share on other sites


2. What do you mean by material? On one hand, you store your BRDF's input parameters in the G-Buffer separately from the light data, so for example there is a channel somewhere in your G-Buffer containing the specular power of the visible object. Writing different specular power values into that channel leads to visibly different materials.

On the other hand, there are cases where you need a specialized BRDF to render a material correctly (hair, for example). In that case you either store a material index in your G-Buffer and branch on it in your lighting shader (a sketch of this follows below), or put the material index into the stencil buffer and render different materials in separate passes with separate lighting shaders and appropriately configured stencil test values, so that each pass shades only those pixels that were marked in the stencil buffer with that material's index.
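A minimal sketch of the branching variant, assuming the index was written into one 8-bit channel of the G-Buffer during the geometry pass; all texture and sampler names here are hypothetical:

```hlsl
// Lighting pass: branch on a per-pixel material index from the G-Buffer.
Texture2D    gAlbedoBuffer   : register(t0);
Texture2D    gNormalBuffer   : register(t1);
Texture2D    gMaterialBuffer : register(t2); // material index in R, 0..255
SamplerState gPointSampler   : register(s0);

float4 PSLighting(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 albedo = gAlbedoBuffer.Sample(gPointSampler, uv).rgb;
    float3 normal = normalize(gNormalBuffer.Sample(gPointSampler, uv).xyz * 2.0 - 1.0);
    uint   materialId = (uint)(gMaterialBuffer.Sample(gPointSampler, uv).r * 255.0 + 0.5);

    float3 lit;
    if (materialId == 1)
    {
        // A specialized BRDF (e.g. hair) would go here; a flat tint
        // stands in so the sketch stays self-contained.
        lit = albedo * 0.5;
    }
    else
    {
        // Default BRDF: simple N.L against a fixed directional light.
        float3 toLight = normalize(float3(0.3, 0.8, 0.5));
        lit = albedo * saturate(dot(normal, toLight));
    }
    return float4(lit, 1.0);
}
```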

##### Share on other sites

@LandonJerre:

> In that case you either store a material index in your G-Buffer and branch on it in your lighting shader

You mean I could upload all the scene materials to the shader in a constant buffer, each one with an index?

Well, I could have many materials, and I know the number of variables allowed in HLSL is limited. I also don't know in advance how many materials each scene will have (it depends on the scene): I could have one material for glass, another for a building, another for a vehicle, and so on...
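One way around that size limit, if I did go the index route, might be a structured buffer instead of a constant buffer; a rough sketch (all names here are hypothetical, Shader Model 5 assumed):

```hlsl
// Hypothetical scene-wide material table for the lighting pass, indexed
// by the material ID fetched from the G-Buffer. Unlike a cbuffer, a
// StructuredBuffer is not limited to a small fixed number of variables.
struct MaterialData
{
    float3 ambient;
    float3 diffuse;
    float3 emissive;
    float  specularPower;
};

StructuredBuffer<MaterialData> gMaterials : register(t4);

float3 ApplyMaterial(uint materialId, float3 lightColor, float nDotL)
{
    MaterialData m = gMaterials[materialId];
    return m.ambient + m.emissive + m.diffuse * lightColor * nDotL;
}
```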

What I thought about instead is passing each mesh's material through the G-Buffer by converting each float3 RGB color to a single float value, then converting it back to float3 in the lighting shader and applying the material.

So, it will be similar to the following:

```hlsl
// Geometry pass output: pack each material color into one float per channel.
out.color  = ...;
out.normal = ...;

float ambient  = RGBToFloat(ambientColor);
float diffuse  = RGBToFloat(diffuseColor);
float emissive = RGBToFloat(emissiveColor);
out.material   = float4(ambient, diffuse, emissive, specular);
```
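For completeness, one way the hypothetical `RGBToFloat`/`FloatToRGB` pair could be written, assuming 8 bits per channel (a 24-bit integer round-trips exactly through a 32-bit float):

```hlsl
// Hypothetical packing helpers: quantize each channel to 8 bits and fold
// the three bytes into one float. Only 24 bits survive the round trip.
float RGBToFloat(float3 rgb)
{
    uint3 q = (uint3)(saturate(rgb) * 255.0 + 0.5);
    return (float)((q.r << 16) | (q.g << 8) | q.b);
}

float3 FloatToRGB(float packed)
{
    uint v = (uint)packed;
    return float3((v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF) / 255.0;
}
```

Note that this only works losslessly if the `out.material` render target channel is itself a full 32-bit float.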

What do you think about that idea?

For the shadows, to reduce shadow acne, do you know what values I should use for the shadow map near plane, far plane, and bias?

Thanks,


##### Share on other sites

The basic idea of deferred rendering is that you store the surface data in the G-Buffers: position/normal/albedo/roughness/metalness/etc. The biggest issue is that you are usually limited to one material type unless you add material IDs and extra passes or if/else blocks to your lighting pass (which is doable, and numerous engines seem to support it). Ultimately, everything you need for lighting is stored during the initial deferred pass, then used during the lighting pass (when you only render a quad over the screen).

You shouldn't need to pack your data like that either: you set up your G-Buffers to store what you need, and just write to those buffers during the initial pass. Keep in mind that if your albedo buffer is an RGB texture, then packing the entire RGB into a single float channel as you suggest means a major loss of precision. If the source value is 24-bit RGB and the material output is a 32-bit RGBA texture, you're packing the entire 24-bit RGB value into a single 8-bit channel. That's entirely possible, but it'll result in a lot of banding from the precision loss during the conversion. It'd be better to have a 32-bit albedo buffer and store the RGB directly (then use the A channel for something else if needed). You can play around with different texture formats to optimize and minimize the G-Buffers; there are quite a few articles already on that subject. A sketch of one possible layout follows.
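A minimal sketch of a geometry pass writing such a layout through multiple render targets; the channel assignments and all names (gDiffuseMap, the MaterialCB fields) are illustrative assumptions, not a recommendation:

```hlsl
// Geometry-pass pixel shader writing to multiple render targets (MRT).
// Formats are chosen per target at texture creation (e.g. RGBA8 for
// albedo, RGBA16F for normals); the shader itself just writes floats.
Texture2D    gDiffuseMap : register(t0);
SamplerState gSampler    : register(s0);

cbuffer MaterialCB : register(b0)
{
    float3 gEmissiveColor;
    float  gSpecIntensity;
    float  gSpecPower;
};

struct GBufferOutput
{
    float4 albedoSpec  : SV_Target0; // RGB: albedo,        A: spec intensity
    float4 normalGloss : SV_Target1; // RGB: packed normal, A: spec power
    float4 emissive    : SV_Target2; // RGB: emissive,      A: unused
};

GBufferOutput PSGeometry(float4 pos      : SV_Position,
                         float3 normalWS : NORMAL,
                         float2 uv       : TEXCOORD0)
{
    GBufferOutput o;
    o.albedoSpec  = float4(gDiffuseMap.Sample(gSampler, uv).rgb, gSpecIntensity);
    o.normalGloss = float4(normalize(normalWS) * 0.5 + 0.5, gSpecPower / 255.0);
    o.emissive    = float4(gEmissiveColor, 0.0);
    return o;
}
```

The lighting pass then samples these targets directly instead of unpacking anything.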

For shadows, it really depends on what type of method you are using and how bad the shadow acne is. Ideally you want the most precision available in the shadow map, so the near/far planes should be fitted to the range of the light (no reason to set the far plane any farther, since beyond the light's range the falloff should bring the lighting to zero anyway). That won't necessarily solve all the acne issues, though, depending on what type of shadows you are using. There are plenty of other ways to reduce acne if it's still an issue.


##### Share on other sites

@xycsoscyx: The way I'm thinking of doing it is to pass the material to the shader when rendering the mesh, then calculate ambient and emissive in the first pass and write diffuse and specular to the G-Buffer so they can be calculated in the deferred pass.

So the GBuffer will look like the following:

- Color

- Normal

- Position

- Diffuse

- Specular

Is that approach efficient?


##### Share on other sites


No. Overdrawn pixels should be as algorithmically simple as possible.
And you don't need a separate buffer for material IDs and branches: use the stencil buffer to mark each material and stencil testing to avoid branches.
Look at existing G-Buffer layouts to decide what is best for you. The goal is to keep memory and bandwidth usage down.

L. Spiro

##### Share on other sites

> To reduce shadow acne, what values should I use for the shadow map near plane, far plane, and bias?

I'm not sure there is a clear-cut answer to that question. From what I remember, a constant depth-map bias is really scene dependent: a constant bias that yields acceptable results in one scene may lead to acne, or to peter-panning, in another. Usually finding a bias that works for your scene, or for a particular object, involves some tweaking.
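For reference, a minimal sketch of how a constant bias is typically applied during the shadow comparison; the bias value and the texture/sampler names are placeholder assumptions to tune per scene:

```hlsl
// Shadow test with a constant depth bias. gShadowMap stores light-space
// depth; 'bias' is the scene-dependent constant discussed above.
Texture2D    gShadowMap     : register(t3);
SamplerState gShadowSampler : register(s1);

float ShadowFactor(float3 shadowUVDepth) // xy: shadow-map UV, z: light-space depth
{
    static const float bias = 0.0015; // placeholder; tune per scene
    float storedDepth = gShadowMap.Sample(gShadowSampler, shadowUVDepth.xy).r;
    // In shadow when the receiver is farther from the light than the
    // stored occluder depth, beyond the bias tolerance.
    return (shadowUVDepth.z - bias > storedDepth) ? 0.0 : 1.0;
}
```

A slope-scaled depth bias (set in the rasterizer state when rendering the shadow map) is the usual companion to a constant bias, since acne gets worse on surfaces at grazing angles to the light.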

http://jcgt.org/published/0003/04/08/paper-lowres.pdf

This article goes over how to create an adaptive depth bias for every pixel in your scene based on your geometry. It's something I've only just found, though, and have never implemented myself; I'm also not too sure how it would fit into the scope of a deferred rendering system.

Marcus

