Can anybody tell me what kind of problem this is?

I've got a weird visual banding artifact on point lights that I add to a lightmap. I'm using deferred shading (the G-Buffer has 4 render targets with the standard contents, including z/w depth). The directional lights add to the lightmap without banding, but the point lights show banding. The banding is very annoying in the final render because it flickers when the camera moves.


Here is the HLSL pixel shader for the point light:

[source]

struct DirLight
{
    float4 ambient;
    float4 diffuse;
    float4 spec;
    float3 dirW;
    float4 fogColor;
    float3 posW;
};

struct Mtrl
{
    float4 ambient;
    float4 diffuse;
    float4 spec;
    float  specPower;
    float4 emissive;
};

//--------------------------------------------------------------------------------------
// Macro defines
//--------------------------------------------------------------------------------------

//--------------------------------------------------------------------------------------
// Global variables
//--------------------------------------------------------------------------------------
uniform extern DirLight  gLight;
uniform extern float3    gEyePosW;
uniform extern float3    gViewDirW;
uniform extern float2    gHalfPixel;
uniform extern float4x4  gInvViewProj;
uniform extern Mtrl      gMtrl;
uniform extern texture2D SceneNormalMap;
uniform extern texture2D SceneDepthMap;
uniform extern texture2D SceneEmittanceMap;

struct PS_in
{
    float4 color          : COLOR0;
    float4 lightPosRadius : TEXCOORD0;
    float4 lightColor     : TEXCOORD1;
    float4 ScreenPosition : TEXCOORD2;
};

struct PS_out
{
    float4 vMaterial : COLOR0;
};

// render targets
sampler SceneNormalSampler = sampler_state
{
    Texture   = <SceneNormalMap>;
    MagFilter = Point;
    MinFilter = Point;
};

sampler SceneDepthSampler = sampler_state
{
    Texture   = <SceneDepthMap>;
    MagFilter = Point;
    MinFilter = Point;
};

sampler SceneEmittanceSampler = sampler_state
{
    Texture   = <SceneEmittanceMap>;
    MagFilter = Point;
    MinFilter = Point;
};

PS_out PS_Scene( PS_in i )
{
    // Zero out our output.
    PS_out o = (PS_out)0;

    // Complete the perspective divide to get the projected screen position.
    i.ScreenPosition.xy /= i.ScreenPosition.w;

    // Obtain texture coordinates corresponding to the current pixel.
    // The screen coordinates are in [-1,1] x [1,-1];
    // the texture coordinates need to be in [0,1] x [0,1].
    float2 texCoord = 0.5f * (float2(i.ScreenPosition.x, -i.ScreenPosition.y) + 1);
    // Align texels to pixels (D3D9 half-pixel offset).
    texCoord -= gHalfPixel;

    // Get the normal from the normal map and transform it back into the [-1,1] range.
    float4 normalData = tex2D(SceneNormalSampler, texCoord);
    float3 normal = normalize(2.0f * normalData.xyz - 1.0f);

    // Specular power and specular intensity from the emittance target.
    float4 emittance = tex2D(SceneEmittanceSampler, texCoord);
    float specularPower     = emittance.z + gLight.spec.r;
    float specularIntensity = emittance.y * 0.2f;

    // Read depth.
    float depthVal = tex2D(SceneDepthSampler, texCoord).r;

    // Rebuild the clip-space position and transform it back to world space.
    float4 position;
    position.xy = i.ScreenPosition.xy;
    position.z  = depthVal;
    position.w  = 1.0f;
    position  = mul(position, gInvViewProj);
    position /= position.w;

    // Surface-to-light vector.
    float3 lightVector = i.lightPosRadius.xyz - position.xyz;

    // Linear distance attenuation.
    float attenuation = saturate(1.0f - length(lightVector) / i.lightPosRadius.w);

    lightVector = normalize(lightVector);

    // Diffuse term.
    float NdL = max(0, dot(normal, lightVector));
    float3 diffuseLight = NdL * i.lightColor.rgb;

    // Specular term (directionToCamera is the surface-to-camera direction).
    float3 reflectionVector  = normalize(reflect(-lightVector, normal));
    float3 directionToCamera = normalize(gEyePosW - position.xyz);
    float specularLight = specularIntensity *
                          pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

    // Combine, then apply attenuation and light intensity.
    float4 color = float4(diffuseLight.rgb, specularLight);
    color.rgb = saturate(color.rgb + (gLight.ambient.xyz * 0.3f));

    o.vMaterial = attenuation * gLight.ambient.w * color;

    // Done--return the output.
    return o;
}


[/source]

Here is an example of the banding:


http://i942.photobuc...sPointLight.jpg


This is the lightmap with just the directional light in it (no banding yet):

http://i942.photobuc...to/dirLight.jpg


Please help me!

[Edit]
Here are some things I've done that might be different than other deferred point light shaders:


1. I render the spheres for all my point lights in a single draw call using DX9 PS 2.0 instancing. The vertex buffer has N copies of M vertices (where M is the number of vertices needed to represent the sphere geometry), plus a float1 texcoord that holds the light's "instance index". This value indexes into a table of shader constant registers holding that light's world position (x,y,z), light radius (w) and light color (r,g,b); see the sketch after this list.

2. The vertex shader for the instanced point light sphere geometry outputs a screen position (the light's center in world space transformed by ViewProj matrix) as well as the light's center position, the light's radius and light color.

3. Since the light sphere geometry has no texture coordinates, the pixel shader derives the texture coordinates for sampling the G-Buffer by converting the interpolated ScreenPosition to texcoords.

4. Also the pixel shader turns the ScreenPosition into a World Position for this pixel.
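
Here's roughly what that instancing setup looks like, as a sketch only; the names (MAX_LIGHTS, gViewProj, gLightPosRadius, gLightColor) are illustrative and not my actual code:

[source]
// Illustrative sketch of the constant-table instancing described in 1. and 2.
#define MAX_LIGHTS 50

uniform extern float4x4 gViewProj;
uniform extern float4   gLightPosRadius[MAX_LIGHTS]; // xyz = world position, w = radius
uniform extern float4   gLightColor[MAX_LIGHTS];     // rgb = light color

struct VS_in
{
    float3 posL          : POSITION0;  // unit-sphere vertex
    float  instanceIndex : TEXCOORD0;  // which light this copy of the sphere belongs to
};

struct VS_out
{
    float4 posH           : POSITION0;
    float4 color          : COLOR0;
    float4 lightPosRadius : TEXCOORD0;
    float4 lightColor     : TEXCOORD1;
    float4 ScreenPosition : TEXCOORD2; // copy of posH, since ps_2_0 can't read POSITION
};

VS_out VS_PointLight(VS_in i)
{
    VS_out o = (VS_out)0;

    // Fetch this light's data from the constant table.
    int    idx       = (int)i.instanceIndex;
    float4 posRadius = gLightPosRadius[idx];

    // Scale the unit sphere by the light radius and move it to the light position.
    float3 posW = i.posL * posRadius.w + posRadius.xyz;

    o.posH           = mul(float4(posW, 1.0f), gViewProj);
    o.ScreenPosition = o.posH;
    o.lightPosRadius = posRadius;
    o.lightColor     = gLightColor[idx];
    o.color          = o.lightColor;
    return o;
}
[/source]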
Can you output just lighting (no colors)?
My first guess would be that your screen-depth to world-pos conversion isn't accurate enough. What precision are the z values that you're using?

[quote]My first guess would be that your screen-depth to world-pos conversion isn't accurate enough. What precision are the z values that you're using?[/quote]


The depthVal in the shader above comes from an R32F render target and is filled during G-Buffer creation with clip-space pos.z / clip-space pos.w:

[source]
// Scene depth map: R32F
HR(g_d3dDevice->CreateTexture(w, h, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_R32F, D3DPOOL_DEFAULT,
                              &m_pd3dRTSceneDepthMap, NULL));
[/source]

All my render targets are the same size as the backbuffer.
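
For context, the write side is essentially this (a simplified sketch, not the exact code):

[source]
// The G-Buffer vertex shader forwards a copy of the clip-space position,
// and the pixel shader stores z/w into the R32F target (this is the value
// read back as depthVal above).
float4 PS_WriteDepth(float4 posCS : TEXCOORD3) : COLOR0
{
    float depth = posCS.z / posCS.w;
    return float4(depth, 0.0f, 0.0f, 0.0f);
}
[/source]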
Perspective z/w in a 32F render target will generally give you worse precision than a 24-bit integer buffer (standard depth buffer precision). I ran some analysis here. Lack of precision for normals in the G-Buffer can also cause issues. How are you storing them in the G-Buffer?
Thanks MJP! I was really hoping you would reply. I've read that article of yours (I've read through many of your blog posts, very awesome stuff, thank you!)

I saw on your analysis charts that 32-bit z/w had some red pixels showing up near the far clip plane, but I wasn't sure what to do in response to that. My normals are stored in an ARGB8888 target where r, g, b hold the x, y, z components of the normal (so 8 bits per component).
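
The write side is just the usual remap (sketch, not the exact code):

[source]
// G-Buffer pass: remap the world-space normal from [-1,1] to [0,1] and write
// it to the ARGB8888 target, so each component gets 8 bits.
float4 PS_WriteNormal(float3 normalW : TEXCOORD1) : COLOR0
{
    float3 n = normalize(normalW);
    return float4(n * 0.5f + 0.5f, 1.0f);
}
[/source]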

[EDIT]
OK, this is going to be a tough fix, I think. I started moving the near clip plane in my projection matrix from 0.1f towards the far clip plane (2900.0f), and the banding started going away around 5.0f; 10.0f was even better. However, anything past 1.0f is unsuitable for my program, since it's a game where you control a character.

What are my options? What do other people do to fix these problems? How do I use "1 - perspective z/w" and is it the silver bullet I need?
I could still use some help/guidance here. Should I use linear depth? Should I try storing my normals in view space? Can anyone else comment on how they fix these kinds of banding issues? My point-light pixel shader is adapted from an example by Catalin Zima.
If you're going to store depth in a render target rather than sampling the hardware depth buffer, then there's really no reason not to use linear depth. It will give you better precision, and it can be a little cheaper to boot. Flipping your near and far planes can definitely help for floating-point buffers, but it's only really useful when sampling hardware depth buffers, where you don't have direct control over how depth gets written. FYI, implementing it is as simple as swapping the near and far plane values you pass to your function for creating a perspective projection and flipping the direction of your depth tests.
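
As a rough illustration of the linear-depth route (gFarClip is a placeholder name here, and this assumes the light-volume vertex shader also passes its view-space position down to the pixel shader):

[source]
// G-Buffer side: store normalized linear (view-space) depth instead of z/w.
uniform extern float gFarClip;

float4 PS_WriteLinearDepth(float3 posV : TEXCOORD3) : COLOR0
{
    return float4(posV.z / gFarClip, 0.0f, 0.0f, 0.0f);  // [0,1], linear in view space
}

// Light pass: rebuild the view-space position from that depth. viewRay is the
// interpolated view-space position of the light-volume vertex for this pixel.
float3 PositionVSFromLinearDepth(float3 viewRay, float2 texCoord)
{
    float  linearDepth = tex2D(SceneDepthSampler, texCoord).r;
    float3 ray = float3(viewRay.xy / viewRay.z, 1.0f);   // view ray through this pixel
    return ray * (linearDepth * gFarClip);               // view-space position
}
[/source]

You can then either do the lighting in view space, or transform the result by an inverse view matrix if you want to keep working in world space.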

In the future you may also run into precision issues with that normal format. I've seen artifacts appear even when using a 16-bit floating-point buffer for storing world-space normals. If you're not going to encode, a 16-bit integer format is ideal; otherwise you can look into Crytek's best fit normals.
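
If you do go the encoding route, one common option (this is the spheremap / Lambert azimuthal equal-area trick, not best fit normals) packs a view-space normal into two channels:

[source]
// Encode a normalized view-space normal into two [0,1] channels
// (e.g. of a 16-bit-per-channel target).
float2 EncodeNormalSpheremap(float3 n)
{
    float f = sqrt(8.0f * n.z + 8.0f);   // only degenerate at n == (0,0,-1)
    return n.xy / f + 0.5f;
}

// Decode the two channels back into a unit view-space normal.
float3 DecodeNormalSpheremap(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float  f    = dot(fenc, fenc);
    float  g    = sqrt(1.0f - f / 4.0f);
    return float3(fenc * g, 1.0f - f / 2.0f);
}
[/source]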
