deferred shading question



#1 newtechnology   Members   -  Reputation: 788


Posted 21 May 2014 - 08:47 AM

Do I have to render the scene twice in deferred shading? I.e. one pass for the normal buffer and one for the color buffer?


#2 Migi0027   Crossbones+   -  Reputation: 2117


Posted 21 May 2014 - 09:15 AM

Not at all; you can actually output multiple colors (i.e. write to multiple render targets) at the same time.

 

As an example, in DirectX, when you set the render targets you can pass an array of render target pointers, and then in the pixel shader you would do the following (off the top of my head, so there may be small mistakes):

struct PixelOut
{
    float4 Color  : SV_Target0;
    float4 Normal : SV_Target1;
};

PixelOut PS(...)
{
    // 'out' is a reserved keyword in HLSL, so use another name for the output struct.
    PixelOut output = (PixelOut)0;

    // ... do the usual stuff, and fill output.Color and output.Normal ...

    return output;
}

I haven't done much in OpenGL, so I'm not familiar with the syntax there.

 

Hope this helps.

-MIGI0027


Hi! Cuboid Zone
The Rule: Be polite, be professional, but have a plan to steal all their shaders!

#3 newtechnology   Members   -  Reputation: 788


Posted 21 May 2014 - 09:26 AM

Not at all; you can actually output multiple colors (i.e. write to multiple render targets) at the same time.

As an example, in DirectX, when you set the render targets you can pass an array of render target pointers, and then in the pixel shader you would do the following (off the top of my head, so there may be small mistakes):

struct PixelOut
{
    float4 Color  : SV_Target0;
    float4 Normal : SV_Target1;
};

PixelOut PS(...)
{
    // 'out' is a reserved keyword in HLSL, so use another name for the output struct.
    PixelOut output = (PixelOut)0;

    // ... do the usual stuff, and fill output.Color and output.Normal ...

    return output;
}

I haven't done much in OpenGL, so I'm not familiar with the syntax there.

Hope this helps.
-MIGI0027
Thanks. Also, I use DirectX 11, not OpenGL. :P
Edit: How does normal mapping fit into deferred shading?

#4 kauna   Crossbones+   -  Reputation: 2852


Posted 21 May 2014 - 12:16 PM

Normal mapping works perfectly fine with deferred shading. You may output an interpolated vertex normal or a normal from a texture to the render target. The only thing you'll need to define is the common space in which you want to do your lighting (typically either view space or world space). That means you'll have to transform the normal from tangent space to the desired space.
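
For example, the tangent-space to world-space transform could look roughly like this (just a sketch, not taken from any particular engine; the function and argument names are placeholders):

// Sketch: unpack a tangent-space normal map sample and transform it to world
// space, so it can be written to the g-buffer in the same space the lighting
// pass uses.
float3 NormalSampleToWorldSpace(float3 normalMapSample,
                                float3 unitNormalW,
                                float3 tangentW)
{
    // Remap from the [0,1] texture range to [-1,1].
    float3 normalT = 2.0f * normalMapSample - 1.0f;

    // Build an orthonormal (T, B, N) basis in world space.
    float3 N = unitNormalW;
    float3 T = normalize(tangentW - dot(tangentW, N) * N);
    float3 B = cross(N, T);

    // Transform the sampled normal from tangent space to world space.
    return mul(normalT, float3x3(T, B, N));
}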

 

Cheers!

 

[edit]

 

Typically you'd use more than two render targets to store additional variables needed for the lighting calculations, such as roughness/shininess, specular factor, metalness... whatever is required by your lighting system.


Edited by kauna, 21 May 2014 - 12:20 PM.


#5 Promit   Moderators   -  Reputation: 7620


Posted 21 May 2014 - 12:23 PM

You may find this presentation useful: The Rendering Technology of Killzone 2



#6 Ashaman73   Crossbones+   -  Reputation: 7991


Posted 22 May 2014 - 02:33 AM


Do I have to render the scene twice in deferred shading?

There are two similar techniques, deferred shading and deferred lighting.

 

Deferred shading (deferred rendering):

1. Render the geometry (scene) to multiple buffers (color, material, normals, depth). The buffers holding the normals/depth are called the g-buffer (for geometry), because you can reconstruct the visible 3D scene from them.

2. Render multiple post-processing passes on top of the g-buffer (remember, you can reconstruct the 3D world from it); one of the most important post-processing passes is the lighting pass.

Deferred lighting:

1. Render the geometry to multiple buffers, but do not write color or material information (only normals/depth).

2. Render a lighting post-processing pass; this writes all lighting information to a new buffer (the light buffer).

3. Render the geometry a second time, this time combining the color/material of your models with the light buffer.

4. Render any remaining post-processing passes.

Deferred shading needs only one geometry pass, whereas deferred lighting needs two. On the other hand, deferred lighting can have better lighting performance (you combine light/color/material only in the last step) and fewer material limitations.
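
To make step 3 of deferred lighting concrete, the combine pass could look roughly like this (just a sketch; gLightBuffer, gDiffuseMap and gScreenSize are placeholder names, not from any code in this thread):

// Deferred lighting, step 3: second geometry pass that multiplies the model's
// color by the light accumulated in step 2.
Texture2D    gDiffuseMap;   // the model's albedo texture
Texture2D    gLightBuffer;  // output of the lighting pass (step 2)
SamplerState samLinear;

float2 gScreenSize;         // render target dimensions in pixels

float4 CombinePS(float4 posH : SV_POSITION, float2 tex : TEXCOORD) : SV_Target
{
    // Sample the light accumulated for this screen pixel.
    float2 screenUV = posH.xy / gScreenSize;
    float3 light    = gLightBuffer.Sample(samLinear, screenUV).rgb;

    // Modulate the surface color by the accumulated lighting.
    float3 albedo = gDiffuseMap.Sample(samLinear, tex).rgb;
    return float4(albedo * light, 1.0f);
}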


Edited by Ashaman73, 22 May 2014 - 02:34 AM.


#7 newtechnology   Members   -  Reputation: 788


Posted 22 May 2014 - 11:53 AM

Thanks for all this; I've got another question. Will I still have to render objects twice? Once for the g-buffers and once normally?

#8 C0lumbo   Crossbones+   -  Reputation: 2497


Posted 22 May 2014 - 12:25 PM

Thanks for all this; I've got another question. Will I still have to render objects twice? Once for the g-buffers and once normally?

 

No. Under the deferred shading approach described by Ashaman73, you render your objects once to fill the g-buffers; the post-processing steps are then all done as 2D (full-screen) passes.



#9 newtechnology   Members   -  Reputation: 788


Posted 23 May 2014 - 02:14 AM

Thanks, I've started implementing deferred shading.

 

[Screenshot: 7gTUVzl.png]

(The color is weird because it's the red channel ONLY!)



#10 newtechnology   Members   -  Reputation: 788


Posted 24 May 2014 - 04:27 AM

Now I have implemented deferred shading, but where do I store material and light properties like diffuse, specular, ambient, reflect and specular power? Also, how do I use multiple lights with deferred shading?

#11 phil_t   Crossbones+   -  Reputation: 4094


Posted 24 May 2014 - 08:20 AM


Now I have implemented deferred shading, but where do I store material and light properties like diffuse, specular, ambient, reflect and specular power?

 

Um, well, the diffuse color goes in your color buffer; that's what it's for. Specular color and power can be stuffed wherever you have spare channels in your g-buffer. It's a bit of an art to minimize the size of your g-buffer while still getting the flexibility you need. Mine consists of 3 32-bit buffers:

(1) Albedo (RGB), and an emissive value (A)

(2) Normal (RG), specular power (B), specular intensity (A) // I don't support a specular RGB color, just a single intensity value

(3) Depth (RG), object ID (B), baked occlusion term (A)

 

I've managed to squeeze the normals into two 8-bit channels (using a spheremap transform), and a 16-bit depth value into two 8-bit channels.
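
A common spheremap-transform encode/decode looks roughly like this (just a sketch for view-space normals; not necessarily the exact code I use):

// Sketch: pack a normalized view-space normal into two channels and back.
float2 EncodeNormalSphereMap(float3 n)
{
    float f = sqrt(8.0f * n.z + 8.0f);
    return n.xy / f + 0.5f;             // result in [0,1], store in RG
}

float3 DecodeNormalSphereMap(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float  f    = dot(fenc, fenc);
    float  g    = sqrt(1.0f - f / 4.0f);
    float3 n;
    n.xy = fenc * g;
    n.z  = 1.0f - f / 2.0f;
    return n;
}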

 

To start with, though, I wouldn't bother with these optimizations; just make the buffers you need. If you just need spec power and a single spec intensity value, you could stuff them into the alpha channels of your color and normal buffers.

 

As for "ambient" and "reflect", I'm not sure what you mean. If these are light properties, they don't belong in the g-buffer. They are applied as you draw each light in your lighting pass (unless you're making a light pre-pass renderer (sometimes called deferred lighting), in which case you need to make a sort of g-buffer for lighting properties).

 


Also, how do I use multiple lights with deferred shading?

 

You draw each light separately and additively blend them into your destination render target (your light accumulation buffer).
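
In a D3D11 effect file, the additive blend state for the light accumulation pass could look roughly like this (a sketch using standard effects-framework state syntax; VS/PS stand for whatever vertex/pixel shaders your light pass uses, and the light buffer should be cleared to black before the first light):

// Sketch: additive blending so each light's contribution is summed into the
// light accumulation buffer (dest = dest + src).
BlendState AdditiveBlending
{
    BlendEnable[0]           = TRUE;
    SrcBlend                 = ONE;
    DestBlend                = ONE;
    BlendOp                  = ADD;
    SrcBlendAlpha            = ONE;
    DestBlendAlpha           = ONE;
    BlendOpAlpha             = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};

technique11 AdditiveLightPass
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PS()));
        SetBlendState(AdditiveBlending, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
    }
}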



#12 newtechnology   Members   -  Reputation: 788


Posted 24 May 2014 - 12:21 PM

I know about specular power, but what is specular intensity? I know its general meaning, but I never used it before in my old lighting. Also, I don't know about the emissive value, and why do you store depth in 2 channels? Isn't it sufficient to store it in a single channel? Also, your normal is in 2 channels, so where is the z component of the normal? And why an object ID?



Sorry for so many questions, but I'm confused.

 

 

EDIT: I understood specular intensity, but what about the other terms?


Edited by newtechnology, 25 May 2014 - 04:30 AM.


#13 phil_t   Crossbones+   -  Reputation: 4094


Posted 25 May 2014 - 11:31 PM


Also, I don't know about the emissive value

 

It's for when an object emits light (so it would show up even if no lights were shining on it).
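
Roughly (a sketch, assuming the emissive factor sits in the albedo alpha as in the layout I described above; gAlbedoBuffer, samPoint, uv and accumulatedLight are placeholder names):

// Sketch: add the emissive term on top of the accumulated lighting so the
// surface stays visible even when no light reaches it.
float4 albedo     = gAlbedoBuffer.Sample(samPoint, uv);   // rgb = color, a = emissive
float3 finalColor = albedo.rgb * (accumulatedLight + albedo.a);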

 


and why do you store depth in 2 channels? Isn't it sufficient to store it in a single channel?

 

Not if I'm using an 8-bit-per-channel render target format. But to start off with, I would just use an R32F format (32-bit floating point, single channel).
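
If you later do want to pack a higher-precision depth into two 8-bit channels, a common packing looks roughly like this (a sketch, not necessarily what I do):

// Sketch: pack a [0,1) depth value into two 8-bit channels and back.
float2 PackDepth16(float depth)
{
    float2 enc = frac(float2(1.0f, 255.0f) * depth);
    enc.x -= enc.y * (1.0f / 255.0f);   // remove the part stored in the low byte
    return enc;                          // store in two 8-bit channels
}

float UnpackDepth16(float2 enc)
{
    return enc.x + enc.y * (1.0f / 255.0f);
}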

 


Also, your normal is in 2 channels, so where is the z component of the normal?

 

Normals are of unit length, so you only need to store 2 values; you can reconstruct the 3rd value in the shader, since you know the length of the normal is 1 (see http://aras-p.info/texts/CompactNormalStorage.html#method01xy).
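
The reconstruction from the linked "X & Y" method looks roughly like this (a sketch; it assumes view-space normals facing the camera, so z can be taken as non-negative):

// Sketch: rebuild a unit normal from its stored x/y components.
float3 DecodeNormalXY(float2 enc)       // enc stored in [0,1]
{
    float3 n;
    n.xy = enc * 2.0f - 1.0f;           // back to [-1,1]
    n.z  = sqrt(saturate(1.0f - dot(n.xy, n.xy)));
    return n;
}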

 

But basically, get stuff working, and worry about all these kinds of optimizations at a later time.



#14 newtechnology   Members   -  Reputation: 788


Posted 26 May 2014 - 02:46 AM

OK. So I implemented lighting from this tutorial: http://www.catalinzima.com/xna/tutorials/deferred-rendering-in-xna/directional-lights/

 

Why do I have wrong specular highlights? The directional light's direction is (0.0f, -1.0f, 0.0f).

It points straight down, so why are the specular highlights wrong (pointing up)?

 

[Screenshot: aM36OJb.png]

 

My deferred renderer:

#include "LightHelper.fx"
 
//================================
//DeferredRenderer: Renders lighting info to gbuffers
//DefferedRenderer.fx by newtechnology
//================================
 
cbuffer cbPerObject
{
float4x4 gWorld;
float4x4 gWorldViewProj;
float4x4 gWorldInvTranspose;
float4x4 gTexTransform;
   
Material gMaterial;
};
 
 
cbuffer cbFixed
{
const float AOFactor = 0.4f;
const float SpecIntensity = 0.8f;
const float specPower = 0.5;
};
 
struct VertexIn
{
float3 PosL : POSITION;
float3 NormalL : NORMAL;
float2 Tex : TEXCOORD;
float3 TangentL : TANGENT;
};
 
struct VertexOut
{
float4 PosH : SV_POSITION;
float3 PosW : POSITION;
float3 NormalW : NORMAL;
float2 Tex : TEXCOORD;
float3 TangentW : TANGENT;
};
 
SamplerState samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = WRAP;
AddressV = WRAP;
};
 
Texture2D gDiffuseMap;
Texture2D gNormalMap;
 
VertexOut VS(VertexIn vin)
{
VertexOut vout;
 
vout.PosW     = mul(float4(vin.PosL, 1.0f), gWorld).xyz;
vout.NormalW  = mul(vin.NormalL, (float3x3)gWorldInvTranspose);
vout.TangentW = mul(vin.TangentL, (float3x3)gWorld);
vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
vout.Tex = mul(float4(vin.Tex, 0.0f, 1.0f), gTexTransform).xy;
 
return vout;
}
 
 
struct PixelOut
{
float4 Color : SV_Target0;
float4 Normal : SV_Target1;
float4 Position : SV_Target2;
};
 
PixelOut PS(VertexOut pin) : SV_Target
{
PixelOut Out;
 
 
pin.NormalW = 0.5f * (normalize(pin.NormalW) + 1.0f);
 
float3 color = gDiffuseMap.Sample(samLinear, pin.Tex).rgb;
 
//output lighting info to gbuffer
Out.Color = float4(color, specPower);
Out.Normal = float4(pin.NormalW, SpecIntensity);
Out.Position = float4(pin.PosW, AOFactor);
  
return Out;
}
 
technique11 BuildGBuffers
{
pass P0
{
SetVertexShader( CompileShader(vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader(ps_4_0, PS() ) );
}
}

My Directional light:

#include "LightHelper.fx"
 
 
SamplerState samPoint
{
Filter = MIN_MAG_MIP_POINT;
AddressU = CLAMP;
AddressV = CLAMP;
};
 
cbuffer cbPerObject
{
float4x4 gWorldViewProj;
};
 
cbuffer cbPerFrame
{
float3 gEyePosW;
Material gMaterial;
DirectionalLight gLight;
};
 
 
//gbuffers
Texture2D gColorMap;
Texture2D gNormalsMap;
Texture2D gPositionMap;
 
struct VertexIn
{
float3 PosL : POSITION;
float2 tex : TEXCOORD;
};
 
struct VertexOut
{
float4 PosH : SV_POSITION;
float2 tex : TEXCOORD;
};
 
VertexOut VS(VertexIn vin)
{
VertexOut vout;
 
vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
 
//no transformation required here
vout.tex = vin.tex;
 
return vout;
}
 
float4 PS(VertexOut pin) : SV_Target
{
//Sample textures (gbuffers)
float4 color = gColorMap.Sample(samPoint, pin.tex);
float4 normal = gNormalsMap.Sample(samPoint, pin.tex);
float4 position = gPositionMap.Sample(samPoint, pin.tex);
 
 
//extract material properties from alpha channel of all 3 textures
float specPower = color.a * 255;
float specIntensity = normal.a;
float AOFactor = position.a;
 
    float3 normalW = 2.0f * normal.rgb - 1.0f;
 
float3 lightVector = float3(0.0f, 1.0f, 0.0f); // -(0.0f, -1.0f, 0.0f) = (0.0f, 1.0f, 0.0f); -> -(-1) = +1
lightVector = normalize(lightVector);
 
float NdL = saturate(dot(lightVector, normalW));
 
float3 DirectionToCamera = normalize(gEyePosW - position.xyz);
 
float3 reflectionVector = normalize(reflect(lightVector, normalW));
 
float SpecularLight = specIntensity * pow(saturate(dot(reflectionVector, DirectionToCamera)), specPower);
 
// float3(1.0f, 1.0f, 1.0f) is light's color
float3 DiffuseLight = (NdL * float3(1.0f, 1.0f, 1.0f)) + AOFactor;
 
float3 finalcolor = color.rgb * DiffuseLight + SpecularLight;
 
return float4(finalcolor, 1.0f);
}
 
technique11 DeferredLighting
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
}
};

Edited by newtechnology, 26 May 2014 - 02:48 AM.


#15 Ashaman73   Crossbones+   -  Reputation: 7991


Posted 26 May 2014 - 06:14 AM


It points straight down, so why are the specular highlights wrong (pointing up)?

You need to use the vector pointing to the light source, similar to the vector pointing to the camera. Therefore the sun light direction needs to be reversed (negated).



#16 newtechnology   Members   -  Reputation: 788


Posted 26 May 2014 - 07:38 AM

 


It points straight down, so why are the specular highlights wrong (pointing up)?

You need to use the vector pointing to the light source, similar to the vector pointing to the camera. Therefore the sun light direction needs to be reversed (negated).

 

I do invert it in the pixel shader; look at the source code.



#17 phil_t   Crossbones+   -  Reputation: 4094


Posted 26 May 2014 - 11:57 AM


I do invert it in the pixel shader; look at the source code.

 

Invert it again :-). reflect(L, N) produces something that points (roughly) in the opposite direction of L with respect to the normal, i.e. if L points towards the surface, reflect returns something that points away from it.
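
Using the variable names from your shader, something like this should do it (a sketch):

// lightVector points from the surface toward the light, so reflect the
// incoming light direction (-lightVector) about the normal instead.
float3 reflectionVector = normalize(reflect(-lightVector, normalW));
float  SpecularLight    = specIntensity *
                          pow(saturate(dot(reflectionVector, DirectionToCamera)), specPower);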





