arkangel2803

Deferred shading and objects without a diffuse component?


arkangel2803    103
Hi all. Here I am with a little question about the deferred shading methodology :)

In a forward rendering pipeline, we can render an object without it being affected by lights. For example, imagine its material has a parameter that says 'lighting = false'. This is accomplished with a special route through the forward pipeline where no diffuse calculations are applied to that object. But what happens when we have a deferred rendering pipeline? Once we are done with the G-Buffer, how can we determine which pixels come from a material that doesn't need diffuse calculations and which don't?

P.D.: When I say diffuse calculations, I mean the diffuse component coming from a dynamic light, not from a diffuse texture, aka the albedo component.

Thanks for your time :)

LLORENS

Sunray    188
You can use the stencil buffer. Mark the stencil buffer when rendering these models (sky, etc.) and set up a stencil test in the lighting pass.
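In D3D9 terms (matching the SetRenderState style used later in this thread; the device pointer name and the reference value 1 are illustrative), the idea might look like:

```
// G-Buffer pass: stamp a stencil reference value for unlit objects (sky, etc.)
m_pD3DDevice->SetRenderState(D3DRS_STENCILENABLE, TRUE);
m_pD3DDevice->SetRenderState(D3DRS_STENCILREF, 1);
m_pD3DDevice->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
m_pD3DDevice->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_REPLACE);
// ...draw the unlit objects...

// Lighting pass: only shade pixels whose stencil value is still 0
m_pD3DDevice->SetRenderState(D3DRS_STENCILREF, 0);
m_pD3DDevice->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
m_pD3DDevice->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
```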

Hodgman    51237
The stencil buffer is often required for other optimisations in a deferred set-up.

Often the G-Buffer contains a 'material ID' channel - you could use this to specify a material that is not lit (e.g. by using dynamic branching in the lighting shader).
Another option would be to write a "lighting mask" into the G-Buffer (in the 0 to 1 range). When you're doing the lighting calculations, use this mask to blend between the lit and unlit colours.

e.g.
vec4 gbuf = ...read G-Buffer texture...;
vec3 albedo = gbuf.rgb;
float mask = gbuf.a; // 0 = unlit, 1 = fully lit

vec3 litColor = ...do lighting... * albedo;
vec3 finalColor = (albedo * (1.0 - mask)) + (litColor * mask);

Sunray    188
Quote:
Original post by Hodgman
The stencil buffer is often required for other optimisations in a deferred set-up.

Yes, but the stencil buffer is 8 bits, which is plenty for both masking and light-volume optimizations. That's what I'm doing in my rendering engine.
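One hypothetical way to split those 8 bits (the bit assignments here are my own illustration, not Sunray's engine): reserve the top bit for the 'unlit' mask and leave the low 7 bits free for light-volume marking.

```cpp
#include <cassert>
#include <cstdint>

// Top bit marks "unlit" pixels; the low 7 bits are free for light volumes.
const std::uint8_t kUnlitBit   = 0x80;
const std::uint8_t kVolumeMask = 0x7F;

std::uint8_t markUnlit(std::uint8_t s) { return s | kUnlitBit; }
bool         isUnlit(std::uint8_t s)   { return (s & kUnlitBit) != 0; }

// Increment the light-volume counter without disturbing the unlit bit.
std::uint8_t incVolume(std::uint8_t s)
{
    return (s & kUnlitBit) | ((s + 1) & kVolumeMask);
}
```

Both uses coexist because the stencil test can mask off whichever bits the current pass doesn't care about.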

arkangel2803    103
Hi all and thanks for your help.

I was reading the StarCraft 2 paper about its deferred shading techniques, and I noticed that they have one or two unused channels. All channels are 16 bits, so R16G16B16A16 is used.

If we enumerate them, we have something like this:

RT 1
----
R: Albedo Red
G: Albedo Green
B: Albedo Blue
A: Ambient Occlusion Term

RT 2
----
R: Normal x
G: Normal y
B: Normal z
A: Depth from eye to pixel

RT 3
----
R: Specular factor (0.0 to 1.0)
G: Specular exponent
B: Unused
A: Unused

RT 4
----
R: Reflection Red + Self-Illumination Red
G: Reflection Green + Self-Illumination Green
B: Reflection Blue + Self-Illumination Blue
A: Unused

This configuration is 99.9% equal to the one that appears in the StarCraft 2 paper. I think I can use one of the unused channels to output some kind of mask that indicates whether a pixel needs diffuse calculations or not.

But if I want to render foliage or trees (I think foliage and trees only need a pixel-discard technique), what happens with the albedo alpha? I think we need this channel to test whether a pixel should be discarded, don't we?

Another thing I'm missing is lightmap (offline lighting) information. As you know, lightmaps need to be multiplied with the albedo and other terms to apply the light information, but in RT 4 only additive information is allowed, like self-illumination or reflection. Don't deferred rendering engines use lightmaps anymore?

Thanks for your help :)

LLORENS

MJP    19755
Quote:
Original post by arkangel2803
But if I want to render foliage or trees (I think foliage and trees only need a pixel-discard technique), what happens with the albedo alpha? I think we need this channel to test whether a pixel should be discarded, don't we?



Nah, you don't need to use alpha-testing. You can just use clip() in the pixel shader to discard a pixel manually.
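For instance, a pixel shader doing the manual discard might look like this (the sampler name and the 0.5 threshold are illustrative, not from the thread):

```
float4 diffuse = tex2D(DiffuseSampler, texCoord);
clip(diffuse.a - 0.5f); // clip() discards the pixel when its argument is negative
```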


Quote:
Original post by arkangel2803
Another thing I'm missing is lightmap (offline lighting) information. As you know, lightmaps need to be multiplied with the albedo and other terms to apply the light information, but in RT 4 only additive information is allowed, like self-illumination or reflection. Don't deferred rendering engines use lightmaps anymore?



Killzone 2 made heavy use of lightmaps...you can read about their approach here.

arkangel2803    103
Hi MJP, and thanks for your answer and for pointing me to this Guerrilla PDF.

I found some interesting things in this document that I didn't know before. For example, Guerrilla uses a depth buffer (D24S8) to store depth, but how can you do this at the same time as 'outputting' 4 MRTs from your pixel shader?

About foliage rendering, excuse my limited experience with alpha objects, but I found a 'methodology' to render foliage/trees without sorting the leaves that involves 2 rendering calls per foliage/tree object. It's something like this:


...
//First pass: draw the alpha meshes blended, without writing depth
m_pD3DDevice->SetRenderState(D3DRS_CULLMODE,D3DCULL_NONE);
m_pD3DDevice->SetRenderState(D3DRS_ALPHABLENDENABLE,true);
m_pD3DDevice->SetRenderState(D3DRS_SRCBLEND,D3DBLEND_SRCALPHA);
m_pD3DDevice->SetRenderState(D3DRS_DESTBLEND,D3DBLEND_INVSRCALPHA);
m_pD3DDevice->SetRenderState(D3DRS_ZWRITEENABLE,false);

RenderSlots(inElapsedTime,&m_vRenderSlotsAlpha);

//Second pass: draw them again alpha-tested, writing depth for the opaque texels
m_pD3DDevice->SetRenderState(D3DRS_ALPHATESTENABLE,true);
m_pD3DDevice->SetRenderState(D3DRS_ALPHAREF,0x00000040);
m_pD3DDevice->SetRenderState(D3DRS_ALPHAFUNC,D3DCMP_GREATEREQUAL);
m_pD3DDevice->SetRenderState(D3DRS_ZWRITEENABLE,true);

RenderSlots(inElapsedTime,&m_vRenderSlotsAlpha);
...


When I saw the information about clip() I started to think that maybe I'm doing more work than I need to for rendering foliage, but I'm not 100% sure.

Continuing with the Guerrilla information, let me enumerate the G-Buffer channels:

Depth Buffer
------------
24 bits: depth from eye to pixel
8 bits: stencil (I don't know what it's used for :()

RT 1
----
R: Albedo Red
G: Albedo Green
B: Albedo Blue
A: Ambient Occlusion Term

RT 2
----
R: Normal x (16 bits)
G: Normal y (16 bits)

RT 3
----
R: Motion X
G: Motion Y
B: Specular factor (0.0 to 1.0)
A: Specular exponent

RT 4
----
R: Light accumulation Red (reflection? + self-illumination? + lightmap?)
G: Light accumulation Green (reflection? + self-illumination? + lightmap?)
B: Light accumulation Blue (reflection? + self-illumination? + lightmap?)
A: Intensity (some kind of HDR value?)

Well, with this table in hand, I need to ask some questions :)

1.- How can you output these 4 render targets and the depth buffer at the same time?

2.- If you use the clip() function to discard pixels, do you need to sort the geometry or not?

3.- How do you encode the normal X and Y values in the normal buffer? To recalculate Z you apply sqrt(1.0f - Normal.x^2 - Normal.y^2), as Guerrilla says.

4.- I'm still stuck on the lightmap idea. How can you mix accumulative information with 'multiplicative' information like lightmaps?

5.- Rendering a light volume mesh in the lighting phase has a problem: what happens if the camera is inside one of these light volumes? Can you use this projection trick ---> http://www.terathon.com/gdc07_lengyel.ppt when you are rendering the light volume? It's a trick to project vertices to the limit of the frustum; it's used for rendering skyboxes, but it may be useful for these light volumes...

Thanks for your help. I'm trying to learn as much as I can: right now I have a forward rendering pipeline, but in the future I want to try deferred shading, hence my long list of questions :D

LLORENS

Hodgman    51237
Quote:
Original post by arkangel2803
Guerrilla uses a depth buffer (D24S8) to store depth, but how can you do this at the same time as 'outputting' 4 MRTs from your pixel shader?
They're working with a PS3, so the details will be different than with your DX9 application.
You probably already have a depth buffer attached as well as your 4 RTs (else you would have to be sorting all polygons by depth...).

Quote:
About foliage rendering, excuse my limited experience with alpha objects, but I found a 'methodology' to render foliage/trees without sorting the leaves that involves 2 rendering calls per foliage/tree object. It's something like this:
This code that you've shown draws trees with alpha blending, and then again with alpha testing.
Many games just use alpha testing these days (instead of full blending), and it is probably more appropriate for a deferred set up as well.

The ALPHATESTENABLE / ALPHAREF code enables fixed-function alpha testing -- if you were using shaders for these objects instead of fixed-function, then you could use the clip instruction within the shader.


Quote:
1.- How can you output these 4 render targets and the depth buffer at the same time?
As mentioned above, you're probably already using a depth buffer automatically.
Quote:
2.- If you use the clip() function to discard pixels, do you need to sort the geometry or not?
No, ordering is only required for alpha blending, not alpha testing.
Quote:
3.- How do you encode the normal X and Y values in the normal buffer? To recalculate Z you apply sqrt(1.0f - Normal.x^2 - Normal.y^2), as Guerrilla says.
Make sure the normals are transformed into view space and then just write the x/y components into the buffer. If you weren't using 16-bit floats you would have to encode them, but if you are using floating point buffers then there's no problem.
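That reconstruction can be sketched on the CPU side. A minimal C++ sketch, assuming the view-space normals face the camera so the sign of z is known (the type and function names are mine, not from the paper):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Only x/y of the unit-length view-space normal are stored (as two 16-bit
// floats in the real G-Buffer; plain floats here for illustration).
struct EncodedNormal { float x, y; };

EncodedNormal encodeNormal(const Vec3& n) { return { n.x, n.y }; }

// Reconstruct z = sqrt(1 - x^2 - y^2). Back-facing normals would need
// extra care, since this always picks the positive root.
Vec3 decodeNormal(const EncodedNormal& e)
{
    float z2 = 1.0f - e.x * e.x - e.y * e.y;
    float z  = std::sqrt(z2 > 0.0f ? z2 : 0.0f); // clamp against rounding error
    return { e.x, e.y, z };
}
```

For example, a unit normal (0.6, 0.0, 0.8) round-trips through encode/decode and recovers z = 0.8 from its x/y components alone.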
Quote:
4.- I'm still stuck on the lightmap idea. How can you mix accumulative information with 'multiplicative' information like lightmaps?
Each of your accumulated lighting passes is by itself multiplicative; just treat the lightmaps as an initial accumulation pass.
Quote:
5.- Rendering a light volume mesh in the lighting phase has a problem: what happens if the camera is inside one of these light volumes?
The light volume meshes can be rendered into the stencil buffer in order to generate a mask of which pixels are affected by the light. If the camera is inside the mesh, then you change the stencil test function to compensate.

arkangel2803    103
Hi, and thanks a lot Hodgman. I will try to assimilate all this information and start (slowly) thinking about a new deferred rendering pipeline.

Btw, when you tell me that the whole lighting phase is 'multiplicative', what happens when a red light and a blue light coexist at the same point? With a multiplicative methodology in mind, when you combine the red light and then the blue light, no channel survives (red is (1,0,0) and blue is (0,0,1), so their product is black), and I think we should see some kind of violet light instead.

This is why I'm asking about this 'methodology' of multiplicative vs. accumulative operations.

Thanks for your time :)

LLORENS

MJP    19755
You don't multiply the contributions from different light sources, you add them. Adding a red light and a blue light should give you a purplish color. This is why Guerrilla's approach works: they add the contribution from the lightmaps first (you can consider the lightmaps to be one "light source"), then the contribution from the sun, and then the contribution from local point/spot lights.
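This additive behaviour is easy to sanity-check with plain numbers. A minimal C++ sketch (the types and colour values are illustrative, not from any engine):

```cpp
#include <cassert>

struct Color { float r, g, b; };

Color operator+(const Color& a, const Color& b) { return { a.r + b.r, a.g + b.g, a.b + b.b }; }
Color operator*(const Color& a, const Color& b) { return { a.r * b.r, a.g * b.g, a.b * b.b }; }

// Accumulate the light sources additively, then multiply by albedo once.
Color shade(const Color& albedo, const Color* lights, int count)
{
    Color total = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; ++i)
        total = total + lights[i];
    return total * albedo;
}
```

With a white albedo, a red light (1,0,0) plus a blue light (0,0,1) accumulates to (1,0,1), a purple. Multiplying the two lights together instead would give black, which is exactly the problem raised above.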

Share this post


Link to post
Share on other sites
arkangel2803    103
Hi

And if I'm correct, once you've finished 'adding' the light information into this buffer, you then multiply it with the albedo from the G-Buffer, don't you?

And if I'm correct too, with this added data from all the light sources, you multiply it by the 'intensity' channel (see the Guerrilla diagram) so you can get values higher than the [0.0f - 1.0f] range, obtaining an HDR effect, right?

Thanks for your time and help :)

LLORENS

MJP    19755
Quote:
Original post by arkangel2803
And if I'm correct, once you've finished 'adding' the light information into this buffer, you then multiply it with the albedo from the G-Buffer, don't you?


Right. Well actually, there are two ways to do it:


foreach light source
    totalLightContribution += diffuseLight * diffuseAlbedo + specularLight * specularAlbedo;


or


foreach light source
    totalDiffuse += diffuseLight;
    totalSpecular += specularLight;
totalLightContribution = totalDiffuse * diffuseAlbedo + totalSpecular * specularAlbedo;


The latter will give you better precision, but requires you to keep track of diffuse + specular separately.
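The precision difference only shows up once the accumulation buffer quantizes each write. A C++ sketch simulating an 8-bit fixed-point accumulation buffer (the light and albedo values are made up to exaggerate the effect):

```cpp
#include <cassert>
#include <cmath>

// Simulate writing a value into an 8-bit fixed-point channel clamped to [0, 1].
float quantize8(float v)
{
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    return std::round(v * 255.0f) / 255.0f;
}

// Method 1: multiply by albedo inside each pass, quantizing every write.
float accumulatePerPass(const float* lights, int count, float albedo)
{
    float total = 0.0f;
    for (int i = 0; i < count; ++i)
        total = quantize8(total + lights[i] * albedo);
    return total;
}

// Method 2: accumulate raw light, multiply by albedo once at the end.
float accumulateThenMultiply(const float* lights, int count, float albedo)
{
    float total = 0.0f;
    for (int i = 0; i < count; ++i)
        total = quantize8(total + lights[i]);
    return quantize8(total * albedo);
}
```

With 100 dim lights of 0.01 each and an albedo of 0.1, every per-pass product (0.001) rounds down to zero, so method 1 accumulates nothing, while method 2 keeps the raw light visible. Floating-point accumulation buffers largely sidestep this, which is why the two orders matter most on fixed-point targets.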


Quote:
Original post by arkangel2803
And if im correct too, whith these added data from all light sources, you multiply it by 'intensity' channel (see Guerrilla diagram) and you can have valors higer than [0.0f - 1.0f] range, obtaining an HDR effect isn't it ?


Well that's how they do it...they didn't want to use floating point buffers (since they're expensive for the PS3) so that's what they came up with. If you wanted you could just accumulate HDR values in a floating point buffer or some encoded format.

B_old    689
What is wrong with just adding the lighting result for each light to the backbuffer? That is the first method described by MJP, isn't it? In that case you don't need another render target for the lighting results.
A question about the second method:
what kind of render target would you need for totalDiffuse/totalSpecular? Assuming that the lighting is colored, you would need 3x2 channels for this?

arkangel2803    103
Hi MJP and thanks for your help.

I need help clarifying a concept. When we talk about 8-bit and 16-bit render targets, of course we are talking about R8G8B8A8 and R16G16B16A16 respectively, but what happens with the output from a pixel shader? Let me explain:

When we output a value like float4(1.0f, 1.0f, 1.0f, 1.0f) from our shader to an R8G8B8A8 render target, the graphics card writes a byte with the value 255 in it. I suppose the same happens with an R16G16B16A16 render target; the only difference is that we get finer precision across the 0.0f to 1.0f range. So... if I have an R16G16B16A16 render target, how can I output values higher than 1.0f? Do I need to do all my lighting calculations treating my [0.0f - 1.0f] range as if it were [0.0f - 0.5f], and then multiply by 2.0f in the HDR process to get valid HDR information?

I don't know if I was clear :S

Thanks a lot for your time :)

LLORENS

MJP    19755
Your output is scaled to the range [0, 1] only for fixed-point surface formats. So this includes A8R8G8B8, A16B16G16R16, A2R10G10B10, etc. In these cases you would pick some "maximum value" and divide your output by it to scale it back into the [0, 1] range.

This does *not* apply to floating-point formats like A16B16G16R16F, R32F, and A32B32G32R32F. These surface formats can represent values that go well beyond 1...in fact they have a sign bit so you can even store negative values in them.
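The "maximum value" trick for fixed-point targets can be sketched like this (the choice of 8.0 as the scene's maximum intensity is an arbitrary assumption):

```cpp
#include <cassert>
#include <cmath>

const float kMaxLight = 8.0f; // chosen "maximum value" for the scene

// Encode an HDR intensity into a fixed-point channel's [0, 1] range.
float encodeFixedPoint(float hdr)
{
    float v = hdr / kMaxLight;
    return v > 1.0f ? 1.0f : v; // intensities above kMaxLight clip
}

// Decode back to the HDR range when the buffer is read.
float decodeFixedPoint(float stored)
{
    return stored * kMaxLight;
}
```

An intensity of 4.0 is stored as 0.5 and recovered exactly, while 16.0 clips to the chosen maximum of 8.0; picking kMaxLight is a trade-off between range and precision.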

arkangel2803    103
Hi MJP, and thanks again :P

So, with a fixed-point render target, you need some scaling 'methodology' to get values beyond 1.0f in a 16-bits-per-channel format.

On another note, what happens if you want to make a screen-space distortion effect (imagine we want to render some kind of water, liquid, or glass object)? You need to build the G-Buffer first and then do the lighting stage, but the distortion effect for the water/liquid/glass needs to be illuminated too, to receive specular and diffuse lighting. The only solution I see is making two G-Buffers, one without these objects and another with the distortion objects, and merging the information in a later stage. Of course this will eat a lot of graphics card memory, so I think there must be another way that I'm missing.

Thanks for your time

LLORENS

