elurahu

DX11 Specular term banding (Deferred shading)


Recommended Posts

I'm trying to do some still fairly basic deferred shading experiments in my DX11 framework.

One problem, however, is eluding my problem-solving skills: I'm getting banding in the specular term of the lighting, and the higher the power I raise the term to, the worse it gets. It seems like a sampling / precision problem, but I just cannot find where. I was hoping someone here could give some input on what could be causing it.

Specular term aliasing illustration:
http://img263.imageshack.us/img263/5167/64467714.png
http://img839.imageshack.us/img839/4169/89839862.png

Specular term calculation:

float3 f3ToEyeWS = normalize(m_f3CameraPosition - f4PosWS.xyz);
float3 f3HalfWS = normalize(f3LightVectorWS + f3ToEyeWS);

float fNdotH = max(0.0f, dot(f3NormalWS, f3HalfWS));
float fSpecularTerm = fSpecIntensity * pow(fNdotH, fSpecPower);


I'm having a VERY hard time finding the problem here. I'm really hoping someone can help out, or at least point me toward what could be causing this to happen.

[Edited by - elurahu on October 23, 2010 6:11:23 PM]

How are you storing/reconstructing position? How are you storing your specular albedo/power in your G-Buffer?

Hey MJP - Thank you for responding!

I'm storing the depth in a 32-bit float buffer and reconstructing using the inverse view-projection matrix. (Yes, I've read your blog - but right now I'm keeping things as simple as I can.)

Depth write:

VS:

// Pass depth
Out.Depth.x = Out.Position.z;
Out.Depth.y = Out.Position.w;


PS:

// Depth (z / w)
Out.RT2.r = In.Depth.x / In.Depth.y;


Directional light shader reconstruction:

// Depth
float fDepth = m_kRT2.Sample(m_kPointSampler, In.TexCoord).x;

// Convert to world space pos
float4 f4PosWS;
f4PosWS.x = (In.TexCoord.x * 2.0f) - 1.0f;
f4PosWS.y = -((In.TexCoord.y * 2.0f) - 1.0f);
f4PosWS.z = fDepth;
f4PosWS.w = 1.0f;

f4PosWS = mul(f4PosWS, m_kInvViewProjection);
f4PosWS /= f4PosWS.w;


Specular calculation:

// Specular light term
float3 f3ToEyeWS = normalize(m_f3CameraPosition - f4PosWS.xyz);
float3 f3HalfWS = normalize(f3LightVectorWS + f3ToEyeWS);

float fNdotH = max(0.0f, dot(f3NormalWS, f3HalfWS));
float fSpecularTerm = saturate(fSpecIntensity * pow(fNdotH, fSpecPower));


The specular power is stored in the alpha component of an 8-bit DXGI_FORMAT_R8G8B8A8_UNORM texture.
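(Side note: since a UNORM channel can only hold [0,1], the exponent gets remapped before it is written. The sketch below shows one common log2 remap for reference only; the exact scale factor is just an assumption, not necessarily the value used here:)

// Illustrative only: pack the exponent into an 8-bit UNORM channel with a
// log2 remap and recover it in the light pass (10.5 is an assumed scale).
// G-Buffer write:
Out.RT1.a = log2(fSpecPower) / 10.5f;   // supports exponents up to exp2(10.5), roughly 1448

// Light pass read:
float fSpecPower = exp2(f4RT1Sample.a * 10.5f);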

How are you storing your normals? Small errors in your normal will show up mostly in specular lighting, and will get worse the higher your specular power.

Normals are stored in a PF_R8G8B8A8 texture using all 3 components.

Storage -

VS:

// Transform normals to world
Out.Normal = mul( In.Normal, (float3x3)m_kWorld );

PS:

// Normals
Out.RT1.rgb = 0.5f * (normalize(In.Normal) + 1.0f);


Usage -


// Get normal
float3 f3NormalWS = 2.0f * f4RT1Sample.xyz - 1.0f;

Have you tried storing them at floating-point precision? You could also try a compression method like reconstructing z, or Crytek's idea (modifying the length of the normal so it fits better into the 8-bit precision).

I would bet that's your problem - 8 bits is really not enough for normals. There are lots of good suggestions for normal storage (with code) here.
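For example, a spheremap transform is one popular choice; here's a rough HLSL sketch (function names are just illustrative, the input is assumed normalized, and the encoding only breaks down at n.z = -1):

float2 EncodeNormalSpheremap(float3 n)
{
    // Project the unit normal onto a 2D spheremap.
    float p = sqrt(n.z * 8.0f + 8.0f);
    return n.xy / p + 0.5f;
}

float3 DecodeNormalSpheremap(float2 enc)
{
    // Inverse of the spheremap projection; returns a unit-length normal.
    float2 fenc = enc * 4.0f - 2.0f;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0f - f / 4.0f);
    float3 n;
    n.xy = fenc * g;
    n.z = 1.0f - f / 2.0f;
    return n;
}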

Thanks all - I guess it's my normals that are causing the problems. I'm going to give it another go tomorrow and report back!

Try DXGI_FORMAT_R10G10B10A2_UNORM, which gives you 2 more bits per component, or go with DXGI_FORMAT_R16G16_FLOAT and reconstruct the z.
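A rough sketch of the second option (variable names are just illustrative; it assumes view-space normals so z can be treated as non-negative, which is not strictly true for interpolated normals at grazing angles):

// G-Buffer write to a DXGI_FORMAT_R16G16_FLOAT target, f3NormalVS assumed normalized:
Out.RT1.rg = f3NormalVS.xy;

// Light pass: reconstruct z from x and y.
float2 f2Nxy = m_kRT1.Sample(m_kPointSampler, In.TexCoord).xy;
float3 f3NormalVS = float3(f2Nxy, sqrt(saturate(1.0f - dot(f2Nxy, f2Nxy))));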

Quote:
Original post by Pragma
I would bet that's your problem - 8 bits is really not enough for normals. There are lots of good suggestions for normal storage (with code) here.


8 bits actually is enough. But since normals all have a length of 1, they all describe positions on the unit sphere. Just by that fact you lose about 98% of the values that 8 bits per channel could theoretically represent. Take a look at Crytek's SIGGRAPH 2010 paper, where they describe how they solved that problem.
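Rough back-of-the-envelope (my own numbers): 256^3 is about 16.8 million representable vectors, but a normalized vector only ever lands within roughly one quantization step of the sphere of radius ~127.5. That shell holds about 4 * pi * 127.5^2, around 2 * 10^5 points, i.e. only 1-2% of the codes, which is where the "about 98%" figure comes from.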
