Deferred shading and objects without a diffuse component?

Started by arkangel2803 · 15 comments, last by arkangel2803 14 years, 11 months ago
Hi all

Here I am with a little question about deferred shading methodology :)

In a forward rendering pipeline, we can render an object without it being affected by light: for example, imagine its material has a parameter that says 'lighting = false'. This is accomplished with a special route through the forward pipeline where no diffuse calculations are applied to that object.

But what happens when we have a deferred rendering pipeline? Once the G-Buffer is filled, how can we determine which pixels come from a material that doesn't need diffuse calculations and which don't?

P.S.: When I say diffuse calculations I mean the diffuse component coming from a dynamic light, not the diffuse texture aka albedo component.

Thanks for your time :)

LLORENS
You can use the stencil buffer. Mark the stencil buffer when rendering these models (sky etc.) and set up a stencil test in the lighting pass.
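Something like this, for example (a minimal D3D9 effect-state sketch of the idea; the technique names and ref values are just placeholders, and the shaders are omitted):

// G-Buffer pass for unlit materials (sky, fullbright, etc.): mark those pixels with stencil ref 1.
technique GBufferUnlit
{
    pass P0
    {
        StencilEnable = TRUE;
        StencilRef    = 1;
        StencilFunc   = ALWAYS;
        StencilPass   = REPLACE;
    }
}

// Lighting pass: only shade pixels whose stencil is still 0, i.e. not marked as unlit.
technique DeferredLighting
{
    pass P0
    {
        StencilEnable    = TRUE;
        StencilRef       = 0;
        StencilFunc      = EQUAL;
        StencilPass      = KEEP;
        StencilWriteMask = 0;
    }
}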
The stencil buffer is often required for other optimisations in a deferred set-up.

Often the G-Buffer contains a 'material ID' channel - you could use this to specify a material that is not lit (e.g. by using dynamic branching in the lighting shader).
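e.g. a rough HLSL sketch of that branch (the channel layout, the "ID 0 means unlit" convention and all the names here are assumptions, not anything from this thread):

sampler2D gbuffer0Sampler;   // hypothetical G-Buffer target: albedo in RGB, material ID in A
float3    lightColor;        // hypothetical per-light constant

float3 ComputeLighting(float2 uv)
{
    // normal reconstruction, attenuation, N.L etc. would go here; kept trivial for the sketch
    return lightColor;
}

float3 ShadeLitOrUnlit(float2 uv)
{
    float4 gbuf0      = tex2D(gbuffer0Sampler, uv);
    float3 albedo     = gbuf0.rgb;
    float  materialId = gbuf0.a * 255.0;     // ID stored as 0..255 in an 8-bit channel

    float3 color = albedo;                   // ID 0: unlit, the albedo passes straight through
    if (materialId > 0.5)                    // dynamic branch: only lit materials pay for lighting
        color = albedo * ComputeLighting(uv);
    return color;
}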
Another option would be to write a "lighting mask" into the G-Buffer (in the 0 to 1 range). When you're doing the lighting calculations, use this mask to blend between the lit and unlit colours.

e.g.
vec4 gbuf = /* read texture... */;
vec3 albedo = gbuf.rgb;
float mask = gbuf.a;
vec3 litColor = /* ...do lighting... */ * albedo;
vec3 finalColor = (albedo * (1.0 - mask)) + (litColor * mask);
Quote:Original post by Hodgman
The stencil buffer is often required for other optimisations in a deferred set-up.

Yes, but the stencil buffer is 8 bits. Plenty for both masking and light volume optimizations. That's what I'm doing in my render engine.
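e.g. you could partition those 8 bits with the stencil masks (a hedged D3D9 effect-state sketch; the bit assignment is arbitrary):

// Bit 0 holds the "unlit material" flag; bits 1-7 stay free for the light-volume work.
technique MarkUnlitBit
{
    pass P0
    {
        StencilEnable    = TRUE;
        StencilRef       = 1;
        StencilWriteMask = 0x01;     // only touch bit 0
        StencilFunc      = ALWAYS;
        StencilPass      = REPLACE;
    }
}

// Light-volume passes read and write only bits 1-7, so the unlit flag survives them.
technique LightVolumeStencil
{
    pass P0
    {
        StencilEnable    = TRUE;
        StencilMask      = 0xFE;
        StencilWriteMask = 0xFE;
        // ...plus whatever StencilFunc / StencilZFail / StencilPass your volume optimisation uses...
    }
}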
Hi all and thanks for your help.

I was reading the StarCraft 2 paper about its deferred shading techniques, and I noticed that they have one or two unused channels. All channels are 16 bits, so an R16G16B16A16 format will be used.

If we enumerate them, we have something like this:

RT 1
----
R: Albedo Red
G: Albedo Green
B: Albedo Blue
A: Ambient Occlusion Term

RT 2
----
R: Normal x
G: Normal y
B: Normal z
A: Depth from eye to pixel

RT 3
----
R: Specular factor (0.0 to 1.0)
G: Specular exponent
B: Unused
A: Unused

RT 4
----
R: Reflection Red + Self-Illumination Red
G: Reflection Green + Self-Illumination Green
B: Reflection Blue + Self-Illumination Blue
A: Unused

This configuration is 99.9% the same as the one that appears in the StarCraft 2 paper. I think I can use one of the unused channels to output some kind of mask that indicates whether a pixel needs diffuse calculations or not.

But if I want to render foliage or trees (I think foliage and trees only need a pixel-discard technique), what happens with the albedo alpha? I think we need this channel to test whether a pixel should be discarded, don't we?

Another thing I'm missing is the lightmap (or offline lighting) information. As you know, lightmaps need to be multiplied with the albedo and other terms to apply their light information, but RT 4 only allows additive ( + ) information, like self-illumination or reflection. Don't deferred rendering engines use lightmaps anymore?

Thanks for your help :)

LLORENS
Quote:Original post by arkangel2803
But if I want to render foliage or trees (I think foliage and trees only need a pixel-discard technique), what happens with the albedo alpha? I think we need this channel to test whether a pixel should be discarded, don't we?



Nah, you don't need to use alpha-testing. You can just use clip() in the pixel shader to discard a pixel manually.
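For example, something like this in the G-Buffer shader (a minimal sketch; the texture, threshold and function names are made up for illustration):

sampler2D foliageDiffuseMap;                 // hypothetical foliage texture
static const float kAlphaThreshold = 0.25;   // arbitrary cut-off, tune to taste

float4 FoliageGBufferPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 texel = tex2D(foliageDiffuseMap, uv);
    clip(texel.a - kAlphaThreshold);         // kills the pixel when alpha is below the threshold
    return float4(texel.rgb, 1.0);           // albedo output (the other MRT outputs are omitted here)
}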


Quote:Original post by arkangel2803
Another thing I'm missing is the lightmap (or offline lighting) information. As you know, lightmaps need to be multiplied with the albedo and other terms to apply their light information, but RT 4 only allows additive ( + ) information, like self-illumination or reflection. Don't deferred rendering engines use lightmaps anymore?



Killzone 2 made heavy use of lightmaps...you can read about their approach here.
Hi MJP, and thanks for your answer and for pointing me to that Guerrilla PDF.

I found interesting things in this document that I didn't know before. For example, Guerrilla uses a depth-stencil buffer (D24S8) to store depth, but how can you do this at the same time you are outputting 4 MRTs from your pixel shader?

About foliage rendering, excuse my limited experience with alpha objects, but I found a 'methodology' to render foliage/trees without sorting the leaves that involves two render calls per foliage/tree object. It's something like this:

...
// Rendering meshes with alpha (first pass)
m_pD3DDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
m_pD3DDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, true);
m_pD3DDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
m_pD3DDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
m_pD3DDevice->SetRenderState(D3DRS_ZWRITEENABLE, false);
RenderSlots(inElapsedTime, &m_vRenderSlotsAlpha);

// Rendering meshes with alpha (second pass)
m_pD3DDevice->SetRenderState(D3DRS_ALPHATESTENABLE, true);
m_pD3DDevice->SetRenderState(D3DRS_ALPHAREF, 0x00000040);
m_pD3DDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);
m_pD3DDevice->SetRenderState(D3DRS_ZWRITEENABLE, true);
RenderSlots(inElapsedTime, &m_vRenderSlotsAlpha);
...


When I saw the information about clip() I started to think that maybe I'm doing more work than I need for rendering foliage, but I'm not 100% sure.

Continuing with the Guerrilla information, let me enumerate the G-Buffer channels:

Depth Buffer
------------
24 bit: Depth between eye and pixel
8 bit: Stencil; I don't know what it's used for :(

RT 1
----
R: Albedo Red
G: Albedo Green
B: Albedo Blue
A: Ambient Occlusion Term

RT 2
----
R: Normal x (16 bits)
G: Normal y (16 bits)

RT 3
----
R: Motion X
G: Motion Y
B: Specular factor (0.0 to 1.0)
A: Specular exponent

RT 4
----
R: Light accumulation Red (Reflection? + Self-Illumination? + Lightmap?)
G: Light accumulation Green (Reflection? + Self-Illumination? + Lightmap?)
B: Light accumulation Blue (Reflection? + Self-Illumination? + Lightmap?)
A: Intensity (some kind of HDR value?)

Well, with this table in mind, I need to ask some questions :)

1.- How can you output these 4 render targets and the depth buffer at the same time?

2.- If you use the clip() function to discard pixels, do you need to sort the geometry or not?

3.- How do you encode the normal X and Y values in the normal buffer? To recalculate Z you need to apply sqrt(1.0f - Normal.x^2 - Normal.y^2), as Guerrilla says.

4.- I'm still stuck on the lightmap idea: how can you mix accumulative information with 'multiplicative' information like lightmaps?

5.- Rendering a light volume mesh in the lighting phase has a problem: what happens if the camera is inside one of these light volumes? Can you use this projection trick ---> http://www.terathon.com/gdc07_lengyel.ppt when you are rendering the light volume? It's a trick to project vertices to the limit of the frustum; it's used for rendering skyboxes, but it may be useful for these light volumes...

Thanks for your help. I'm trying to learn as much as I can; right now I have a forward rendering pipeline, but in the future I want to try deferred shading, hence my long list of questions :D

LLORENS
Hi all

Bumping this thread; can nobody answer my last questions?

Thanks for your time :)

LLORENS
Quote:Original post by arkangel2803
Guerrilla uses a depth-stencil buffer (D24S8) to store depth, but how can you do this at the same time you are outputting 4 MRTs from your pixel shader?
They're working with a PS3, so the details will be different than with your DX9 application.
You probably already have a depth buffer attached as well as your 4 RTs (else you would have to be sorting all polygons by depth...).

Quote:About foliage rendering, excuse my limited experience with alpha objects, but I found a 'methodology' to render foliage/trees without sorting the leaves that involves two render calls per foliage/tree object. It's something like this:
This code that you've shown draws trees with alpha blending, and then again with alpha testing.
Many games just use alpha testing these days (instead of full blending), and it is probably more appropriate for a deferred set up as well.

The ALPHATESTENABLE / ALPHAREF code enables fixed-function alpha testing -- if you were using shaders for these objects instead of fixed-function, then you could use the clip instruction within the shader.


Quote:1.- How can you output these 4 render targets and the depth buffer at the same time?
As mentioned above, you're probably already using a depth buffer automatically.
Quote:2.- If you use the clip() function to discard pixels, do you need to sort the geometry or not?
No, ordering is only required for alpha blending, not alpha testing.
Quote:3.- How do you encode the normal X and Y values in the normal buffer? To recalculate Z you need to apply sqrt(1.0f - Normal.x^2 - Normal.y^2), as Guerrilla says.
Make sure the normals are transformed into view space and then just write the x/y components into the buffer. If you weren't using 16-bit floats you would have to encode them, but if you are using floating point buffers then there's no problem.
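A rough HLSL sketch of both ends (the sign convention for the reconstructed Z is an assumption and depends on your view-space handedness):

// Writing: store the view-space normal's x/y into a 16-bit float target, no packing needed.
float2 EncodeViewNormal(float3 viewNormal)
{
    return normalize(viewNormal).xy;
}

// Reading: rebuild z from x/y. This assumes visible surfaces face the camera, so z gets a fixed
// sign; negate it if your view-space convention points the other way.
float3 DecodeViewNormal(float2 xy)
{
    float z = sqrt(saturate(1.0 - dot(xy, xy)));
    return float3(xy, z);
}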
Quote:4.- I'm still stuck on the lightmap idea: how can you mix accumulative information with 'multiplicative' information like lightmaps?
Your accumulated lighting, taken as a whole, is the thing that gets multiplied with the albedo. Just treat the light maps as an initial accumulation pass.
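In code that could look something like this (a hedged HLSL sketch; the RT layout and sampler names are assumptions, not Killzone's actual setup):

sampler2D lightmapSampler;      // baked lighting, sampled with the lightmap UVs
sampler2D albedoSampler;        // G-Buffer target holding the albedo
sampler2D lightAccumSampler;    // light accumulation target, after all dynamic lights were added

// Seed pass: written once into the light accumulation target during (or right after) the G-Buffer pass.
float4 SeedWithLightmapPS(float2 lightmapUV : TEXCOORD0) : COLOR0
{
    return float4(tex2D(lightmapSampler, lightmapUV).rgb, 1.0);
}

// Every dynamic light is then rendered additively into that same target (light only, no albedo yet).

// Final composite: the single multiply with albedo happens here, once, over everything accumulated.
float4 CompositePS(float2 screenUV : TEXCOORD0) : COLOR0
{
    float3 albedo = tex2D(albedoSampler, screenUV).rgb;
    float3 light  = tex2D(lightAccumSampler, screenUV).rgb;
    return float4(albedo * light, 1.0);
}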
Quote:5.- Rendering a light volume mesh in the lighting phase has a problem: what happens if the camera is inside one of these light volumes?
The light volume meshes can be rendered into the stencil buffer in order to generate a mask of which pixels are affected by the light. If the camera is inside the mesh, then you change the stencil test function to compensate.
Hi, and thanks a lot Hodgman. I will try to assimilate all this information and start (slowly) thinking about a new deferred rendering pipeline.

By the way, when you tell me that the whole lighting phase is 'multiplicative', what happens when a red light and a blue light coexist at the same point? With a multiplicative methodology in mind, after applying the red light and then the blue light almost nothing survives, and I think we should see some kind of violet light instead.

This is why I'm asking about this 'methodology' of multiplicative vs. accumulative operations.

Thanks for your time :)

LLORENS

This topic is closed to new replies.
