
## Recommended Posts

Hi, I am trying to implement deferred shading in my game engine. I am using this shader:

```hlsl
float4x4 matW   : World;                  // World matrix for object
float4x4 matWit : WorldInverseTranspose;
float4x4 matWvp : WorldViewProjection;    // World * View * Projection matrix
float4x4 matVi  : ViewInverse;

struct vsIn
{
    float4 position : Position;
    float3 normals  : Normal;
    float2 tex      : TexCoord0;
    float3 tangent  : Tangent;
    float3 binormal : Binormal;
};

struct vsOut
{
    float4 position      : Position;
    float2 tex           : TexCoord0;
    float4 worldPosition : TexCoord1;
    float3 tbnRow1       : TexCoord2;
    float3 tbnRow2       : TexCoord3;
    float3 tbnRow3       : TexCoord4;
};

struct psOut
{
    float4 diffuse  : Color0;
    float4 normals  : Color1;
    float4 position : Color2;
};

texture diffuseTexture;
sampler sDiffuseTexture = sampler_state {
    texture = <diffuseTexture>;
    MipFilter = Linear;
    MagFilter = Linear;
    MinFilter = Linear;
};

texture normalMap;
sampler sNormalMap = sampler_state {
    texture = <normalMap>;
    MipFilter = Linear;
    MagFilter = Linear;
    MinFilter = Linear;
};

vsOut deferredSetup_Vs(vsIn In)
{
    vsOut Out;

    Out.position = mul(In.position, matWvp);
    Out.tex = In.tex;
    Out.worldPosition = mul(In.position, matW);

    float3x3 tbnI = mul(float3x3(In.tangent, In.binormal, In.normals),
                        (float3x3)matWit);
    tbnI /= -1.0f;

    Out.tbnRow1 = tbnI[0];
    Out.tbnRow2 = tbnI[1];
    Out.tbnRow3 = tbnI[2];

    return Out;
}

psOut deferredSetup_Ps(vsOut In)
{
    psOut Out;

    Out.diffuse = tex2D(sDiffuseTexture, In.tex);
    Out.normals = float4(mul(2.0f * tex2D(sNormalMap, In.tex).rgb - 1.0f,
                             float3x3(In.tbnRow1, In.tbnRow2, In.tbnRow3)), 1);
    Out.position = In.worldPosition;

    return Out;
}

float4 deferredSetup_Ps_color(vsOut In) : COLOR
{
    return tex2D(sDiffuseTexture, In.tex);
}

float4 deferredSetup_Ps_normal(vsOut In) : COLOR
{
    return float4(mul(2.0f * tex2D(sNormalMap, In.tex).rgb - 1.0f,
                      float3x3(In.tbnRow1, In.tbnRow2, In.tbnRow3)), 1);
}

float4 deferredSetup_Ps_position(vsOut In) : COLOR
{
    return In.worldPosition;
}
```

*(Four `technique` declarations followed here; their names and pass contents were mangled when the post was formatted. Each contained a single empty `pass P0`.)*

```hlsl
///////////////////////////////////////////////////////////////////

texture normals, position, diffuse;
sampler sNormals  = sampler_state { texture = <normals>; };
sampler sPosition = sampler_state { texture = <position>; };
sampler sDiffuse  = sampler_state { texture = <diffuse>; };

float3 lightPosition;
float3 lightColor;

float4 deferredLighting_Ps(float2 uv : TEXCOORD0) : Color0
{
    float3 pos     = tex2D(sPosition, uv).xyz;
    float3 normal  = normalize(tex2D(sNormals, uv).xyz);
    float4 diffuse = tex2D(sDiffuse, uv);

    float3 lightDir = normalize(lightPosition - pos);
    float3 vte      = normalize(matVi[3].xyz - pos);

    float3 lightCol = lightColor / (0.01f * distance(pos, lightPosition));

    float3 halfVec = normalize((lightDir + vte) * .5f);

    float3 specular = 0; //pow(max(0.0f, dot(normal, halfVec)), 2);

    return float4(lightCol * diffuse.rgb * max(0, dot(normal, lightDir)) + specular, 1);
}

technique Light
{
    pass p0
    {
    }
}
```


Note that I modified the original shader to use multi-pass rendering instead of a single pass with MRT. Now, to the problems: first, the final image quality is poor, especially at low resolutions. Second, the lighting calculations seem to be incorrect, but I can't find any error in the code! Here is an image of the scene with and without deferred shading, and an image of the 'normal' pass:

http://br.geocities.com/caianbene/deferredshading.JPG
http://br.geocities.com/caianbene/deferredshading_normal.JPG

Note that the light on the plane and the light on the teapot point in opposite directions (it is the same light!)

[Edited by - Key_46 on November 5, 2007 2:07:16 PM]

##### Share on other sites
Judging from the 2nd pic, it looks like your normal's y component is inverted. Why are you dividing the tangent matrix by -1.0f? Take that line out and your lighting should be fixed; at first glance nothing else seems wrong with your lighting calculations.
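If it helps, here is a minimal sketch of the vertex-shader TBN setup with that negation removed (assuming the same `vsIn`/`vsOut` structs and `matWit` uniform from your post):

```hlsl
// Rotate the tangent frame into world space with the world inverse-transpose;
// no sign flip should be needed if the exported tangent/binormal/normal
// form a consistent right-handed frame.
float3x3 tbn = mul(float3x3(In.tangent, In.binormal, In.normals),
                   (float3x3)matWit);

Out.tbnRow1 = tbn[0];
Out.tbnRow2 = tbn[1];
Out.tbnRow3 = tbn[2];
```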

##### Share on other sites
I concur with leftleaner, you seem to be calculating your normals incorrectly. What you want to use is the transpose of your TBN matrix, rather than dividing it by -1. HLSL has a handy transpose() intrinsic that you can use for this purpose.

```hlsl
// get the surface normal from the texture
float3 surfaceNormal = tex2D(normalMap, IN.texel0).rgb;

// transform from {0.0,1.0} to {-1.0,1.0}
surfaceNormal -= 0.5f;
surfaceNormal *= 2.0f;
surfaceNormal.y = -surfaceNormal.y;

// convert from tangent space to world space
float3x3 tbnMatrix = transpose(float3x3(normalize(IN.tangent),
                                        normalize(IN.binormal),
                                        normalize(IN.normal)));
float3 normal = normalize(mul(tbnMatrix, surfaceNormal));
```

Also, please use the "source" tag in the future when including long segments of code. Pretty please. :)

[Edited by - MJP on November 5, 2007 3:38:14 PM]

##### Share on other sites
Swapping the order of a vector/matrix multiplication is equivalent to multiplying the vector by the transpose of the matrix, so he can save the instructions there. It looks like he has the order correct to bring the normal into world space, so he should be good just removing that one line. And yeah, the source tag is your friend ;)
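To spell that out, for any 3x3 matrix and vector (placeholder names `M` and `v` here) these two expressions give the same components, since mul() treats a vector on the left as a row vector and a vector on the right as a column vector:

```hlsl
// v as a row vector times M...
float3 a = mul(v, M);
// ...equals transpose(M) times v as a column vector.
float3 b = mul(transpose(M), v);
// a == b, so the explicit transpose() can be avoided by swapping the order.
```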

##### Share on other sites
Thanks everyone, the normal and light calculations seem to work perfectly now!
And sorry about the 'source' option ^^. Problem solved, but I still have a question about the render targets I should use...
In this demo I use ARGB16 for normals and position and ARGB8 for color. That works because I build the G-buffer with multi-pass rendering, but these mixed formats will not work with multiple render targets... Is there a better format for the buffers, or another storage method? Because if I use only R32F for depth and ARGB8 for normals, then I will need to transform depth back to world and screen-space position... ahh... it is confusing! [wow]

##### Share on other sites
Glad to hear you got it working! Wrangling with the TBN matrix can be tough business.

Anyway, as you've mentioned, you can't use formats with different bit depths for MRTs on certain GPUs. Using linear eye-space depth can help quite a bit. I recently changed my shaders to use this; previously I'd used 3 RGBA16F buffers for my G-buffer because I was storing screen-space position. Right now I'm using R32F for depth, RGBA8 for diffuse albedo, A2R10G10B10 for my view-space normals, and RGBA8 for specular and other material properties. This allowed me to save 33% on G-buffer storage and bandwidth, which is quite nice. I'm also considering storing only the x and y components of my normal in an RG16F texture, and then reconstructing the z component in the pixel shader using Normal.z = sqrt(1.0f - Normal.x*Normal.x - Normal.y*Normal.y). This would hopefully give me some better precision.
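A sketch of that two-component idea, with made-up function names (it only works when the stored normals have a z component of known sign, e.g. view-space normals facing the camera):

```hlsl
// Write pass: store just x and y of the unit view-space normal in RG16F.
float2 EncodeNormalXY(float3 n)
{
    return n.xy;
}

// Read pass: rebuild z from the unit-length constraint x^2 + y^2 + z^2 = 1.
float3 DecodeNormalXY(float2 nxy)
{
    // saturate() guards against negative values from precision drift
    float z = sqrt(saturate(1.0f - dot(nxy, nxy)));
    return float3(nxy, z);
}
```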

Making the switch to storing depth rather than position can be a pain, but it's something I'd recommend. The bandwidth savings can be big, and having depth available as a texture can be really nice for a lot of post-processing techniques (ambient occlusion, DOF, motion blur). We had a thread here recently where we discussed some techniques for deriving world-space position from depth, which could be useful for you. What I do is store linear eye-space depth in my G-buffer, and then use the view-space locations of the far frustum corners to derive the view-space position of the pixel. If you need extra help, I can walk you through some of the specifics.
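For reference, the frustum-corner trick can be sketched like this (hypothetical names; it assumes the G-buffer stores linear eye-space depth divided by the far-plane distance, that `sDepth` samples that buffer, and that the full-screen quad's vertex shader outputs the view-space far-frustum corner for each vertex):

```hlsl
// The rasterizer interpolates the per-vertex far-frustum corner into a
// per-pixel ray through the view frustum.
float4 reconstructPosition_Ps(float2 uv            : TEXCOORD0,
                              float3 frustumCorner : TEXCOORD1) : COLOR
{
    // linear eye-space depth, normalized so that depth == 1 at the far plane
    float depth = tex2D(sDepth, uv).r;

    // scaling the far-plane ray by normalized depth lands on the pixel's
    // view-space position
    float3 viewPos = frustumCorner * depth;
    return float4(viewPos, 1.0f);
}
```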

##### Share on other sites
Oh no! My GPU does not support A2R10G10B10... this can't be good... well, never mind, I will try to encode the extra 2 bits of each channel into the ARGB8 alpha channel... any help will be appreciated.
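In case it's useful, here is one possible arithmetic packing, sketched with made-up helper names and untested: split each 10-bit value into 8 high bits (stored in R/G/B) and 2 low bits, and pack the three 2-bit remainders into 6 bits of the alpha channel. Since ps_2_0 has no integer instructions, everything is done with floor()/fmod():

```hlsl
// Pack a normal remapped to [0,1] (n01 = n * 0.5f + 0.5f) into ARGB8.
float4 PackNormal10(float3 n01)
{
    float3 q    = floor(n01 * 1023.0f + 0.5f);          // quantize to 0..1023
    float3 high = floor(q / 4.0f);                      // top 8 bits, 0..255
    float3 low  = q - high * 4.0f;                      // bottom 2 bits, 0..3
    float  a    = low.x * 16.0f + low.y * 4.0f + low.z; // 6 bits, 0..63
    return float4(high / 255.0f, a / 255.0f);
}

// Reverse the packing; returns the normal still remapped to [0,1].
float3 UnpackNormal10(float4 enc)
{
    float  a = floor(enc.a * 255.0f + 0.5f);
    float3 low;
    low.x = floor(a / 16.0f);
    low.y = floor(fmod(a, 16.0f) / 4.0f);
    low.z = fmod(a, 4.0f);
    float3 q = floor(enc.rgb * 255.0f + 0.5f) * 4.0f + low;
    return q / 1023.0f;
}
```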

##### Share on other sites
Hmm, I'm not familiar with methods for doing that. I'd really recommend using R16G16F if your GPU supports it, as it seems to give me good precision at the expense of a few shader instructions.
