DX9 Deferred Shading: I have a couple problems that need solving

5 comments, last by megabyte86x 10 years ago

Hi,

For about a week now I've been learning deferred shading. Things were running smoothly until I ran into a couple of snags. While implementing my shaders (SM 3.0) I have been getting strange lighting artifacts, and I managed to narrow them down to the normal map/buffer of the g-buffer:
post-132195-0-67319100-1396472077.png
If you look along the left edge of the model you'll see the artifact (orangish color). I've noticed that this is a common problem with deferred shaders and have not managed to find any way to resolve the issue. I've tried disabling multi-sampling/anti-aliasing, adjusting the filters to none, point, linear, and anisotropic, and adjusting the pixel coordinates to match the texel offset ( -= .5/screenWidth, -= .5/screenHeight ). Different techniques only minimize the artifacts.

So my first question is: how do you combat lighting artifacts?

Another problem is that when the camera sits directly above or below the model, along the y-axis, the model is not rendered. Why is this?

post-132195-0-07763500-1396472795.png

If I adjust the camera's x or y value even slightly, say by .0001f, then the model is rendered. Is this another downside of deferred shading?

shader code:

[g-buffer]


///////////////////////////////////////////
//    G L O B A L     V A R I A B L E S
///////////////////////////////////////////
float4x4    gWorld;
float4x4    gWorldViewProjection;
float        gSpecularIntensity;
float        gSpecularPower;
sampler2D    gColorMap;
///////////////////////////////////////////
vsOutput    vsDeferredShaderGeometryBuffer( vsInput IN )
{
    vsOutput OUT    = (vsOutput)0;

    OUT.position    = mul( float4(IN.position,1.f), gWorldViewProjection );
    OUT.texcoord    = IN.texcoord;
    OUT.normal        = normalize(mul( IN.normal, (float3x3)gWorld ));
    OUT.depth.x        = OUT.position.z;
    OUT.depth.y        = OUT.position.w;
    return OUT;
}
///////////////////////////////////////////
psOutput    psDeferredShaderGeometryBuffer( vsOutput IN )
{
    psOutput OUT    = (psOutput)0;
    OUT.color.rgb    = tex2D( gColorMap, IN.texcoord );
    OUT.color.a        = 1;
    OUT.normal.xyz    = IN.normal * .5f + .5f;
    OUT.normal.z    = 0;
    OUT.depth        = IN.depth.x / IN.depth.y;
    return OUT;
}

[g-buffer]
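
The vsInput, vsOutput and psOutput structs aren't included in the snippet above. Inferred from how they're used, they would look roughly like this (the semantics here are a sketch, not the original declarations):

///////////////////////////////////////////
// Hypothetical I/O structs inferred from the g-buffer shader above.
struct vsInput
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 texcoord : TEXCOORD0;
};

struct vsOutput
{
    float4 position : POSITION;
    float2 texcoord : TEXCOORD0;
    float3 normal   : TEXCOORD1;
    float2 depth    : TEXCOORD2;   // x = clip-space z, y = clip-space w
};

struct psOutput
{
    float4 color  : COLOR0;   // albedo render target
    float4 normal : COLOR1;   // normal render target
    float4 depth  : COLOR2;   // depth render target
};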

[lighting shader]


///////////////////////////////////////////
//    G L O B A L     V A R I A B L E S
///////////////////////////////////////////
//float4        gAmbient;
//float4        gLightAmbient;
//float4        gMaterialAmbient;
float4x4    gInverseViewProjection;
float4        gLightDiffuse;
float4        gMaterialDiffuse;
float3        gLightDirection;
float3        gCameraPosition;
float        gSpecularIntensity;
float        gSpecularPower;
sampler2D    gColorMap;
sampler2D    gNormalMap;
sampler2D    gDepthMap;
///////////////////////////////////////////
vsOutput vsDeferredShaderDirectionalLighting( vsInput IN )
{
    vsOutput OUT    = (vsOutput)0;
    OUT.position    = float4( IN.position, 1.f );
    OUT.texcoord    = IN.texcoord;// - float2( .5/800, .5/600 );

    return OUT;
}
///////////////////////////////////////////
float4    psDeferredShaderDirectionalLighting( vsOutput IN ) : COLOR
{    
    float4 pixel                = tex2D( gColorMap, IN.texcoord );
        if( (pixel.x+pixel.y+pixel.z) <=0 )  return pixel;

    float3 surfaceNormal        = (tex2D( gNormalMap, IN.texcoord )-.5f)*2.f;
    float4 worldPos                = 0;
        worldPos.x                = IN.texcoord.x * 2.f - 1.f;
        worldPos.y                = -( IN.texcoord.y * 2.f - 1.f );
        worldPos.z                = tex2D( gDepthMap, IN.texcoord ).r;
        worldPos.w                = 1.f;
        worldPos                = mul( worldPos, gInverseViewProjection );
        worldPos                /= worldPos.w;
        
    //if( surfaceNormal.r + surfaceNormal.g + surfaceNormal.b <= 0 )
    //    return 0 ;
        
    float lightIntensity        = saturate( dot( surfaceNormal, -normalize(gLightDirection) ) );
    float specularIntensity        = saturate( dot( surfaceNormal, normalize(gLightDirection)+(gCameraPosition-worldPos)));
    float specularFinal            = pow( specularIntensity, gSpecularPower ) * gSpecularIntensity;

    //float4 ambient                = ((gAmbient+gLightAmbient)*gMaterialAmbient);

    return float4( ( gMaterialDiffuse * gLightDiffuse * lightIntensity).rgb, specularFinal);

};

[lighting shader]

Thanks in advance


For your lighting artifact, there are a few things I can think of:

- the half-texel offset needed when sampling (but you said you already applied that)

- some artifact from using multi-sampling (but you said you disabled that)

- some problem with the way you are storing/restoring your normals

How does your lighting artifact react when the object moves or rotates?

Also, make sure you're using point sampling (not linear filtering) when reading from your g-buffer.

Also, in your g-buffer shader, you need to re-normalize the normal in the pixel shader. I don't think it's the cause of your problem though, since your artifact is on the edge.
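
For example, the packing line in psDeferredShaderGeometryBuffer would become something like:

// Interpolation across the triangle denormalizes the per-vertex normal,
// so renormalize before packing it into the [0,1] range for the g-buffer.
OUT.normal.xyz = normalize( IN.normal ) * .5f + .5f;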

The quickest way to get to the source of the problem would be to debug it in PIX. Step through the lighting shader for those bad pixels to see where the values look wrong.

Also, you should have the ability, in your game, to be able to show the G-buffer render targets, so you can instantly see if anything looks unusual there (you can also look at them in PIX).
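
One quick way to get that in D3D9 is to StretchRect each g-buffer target into a corner of the back buffer at the end of the frame. A rough sketch, assuming dev is your IDirect3DDevice9* and your render targets are IDirect3DTexture9 objects (here called g_colorRT, g_normalRT, g_depthRT):

// Debug view: blit each g-buffer render target into a small strip along the
// top of the back buffer so it can be inspected every frame.
IDirect3DSurface9* backBuffer = NULL;
dev->GetBackBuffer( 0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer );

IDirect3DTexture9* targets[3] = { g_colorRT, g_normalRT, g_depthRT };
for( int i = 0; i < 3; ++i )
{
    IDirect3DSurface9* src = NULL;
    targets[i]->GetSurfaceLevel( 0, &src );

    RECT dst = { i * 210, 0, i * 210 + 200, 150 };
    // D3DTEXF_LINEAR is the most widely supported StretchRect filter.
    dev->StretchRect( src, NULL, backBuffer, &dst, D3DTEXF_LINEAR );
    src->Release();
}
backBuffer->Release();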

Phil T, thanks for the reply.

The image above is a snapshot of the model rotating about the y-axis, sitting at the origin.

Here's the setup code:


D3DXMATRIX    origin, world, view, projection, worldViewProjection, inverseView, rotationX, rotationY;

    D3DXMatrixIdentity( &world );
    D3DXMatrixIdentity( &origin );
    D3DXMatrixTranslation( &world, 0, 0, 0 );
    D3DXMatrixRotationX( &rotationX, SMYUTI_DegreeToRadian( 0.f ) );
    D3DXMatrixRotationY( &rotationY, SMYUTI_DegreeToRadian( ang+=( 50.f/fps ) ) );
    D3DXMatrixPerspectiveFovLH( &projection, D3DX_PI/4.f, float(SCREEN_WIDTH)/float(SCREEN_HEIGHT), 1.f, 500.f );
    D3DXMatrixLookAtLH( &view, &CameraPosition, &D3DXVECTOR3(0,0,0), &D3DXVECTOR3(0,1,0) );
    D3DXMatrixInverse( &inverseView, 0,  &(view*projection) );

    world                = origin * rotationX * rotationY * world;
    worldViewProjection    = world * view * projection;

Also, when you say point sampling, you mean this, right?


dev->SetSamplerState(0, D3DSAMP_MINFILTER,D3DTEXF_POINT);
dev->SetSamplerState(0, D3DSAMP_MAGFILTER,D3DTEXF_POINT);
dev->SetSamplerState(0, D3DSAMP_MIPFILTER,D3DTEXF_POINT);
dev->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_CLAMP);
dev->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_CLAMP);
dev->SetRenderState( D3DRS_MULTISAMPLEANTIALIAS , FALSE );

Everything in the g-buffer looks fine except the normal buffer. I know for a fact this is where the problem is. So I too am assuming the issue is with the way that I store the normal data. I read somewhere that you could store the data in view space to further reduce the artifacts. I tried this and failed. Would you know how to do it, and if so, care to elaborate?

What are the surface formats for your g-buffers? I'm assuming color and normal are four 8-bit channels, and depth is a single-channel 32-bit float?

Storing depth in view space can help if you're compressing depth to a 16-bit value. But before trying to squeeze as much into your g-buffers as possible, you should try to get what you have working.

Storing a world space normal should work just fine in your RGBA render target (assuming you map from (-1, 1) to (0, 1) and back, as you do).
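
The view-space variant you mention is only a small change: transform the normal by the world-view matrix in the geometry pass, and feed the lighting pass a light direction that has also been transformed into view space. A rough sketch (gWorldView and the helper names are illustrative, not from your code):

// Geometry pass: pack a view-space normal instead of a world-space one.
// gWorldView = world * view, set by the application.
float4x4 gWorldView;

float4 packViewSpaceNormal( float3 objectNormal )
{
    float3 n = normalize( mul( objectNormal, (float3x3)gWorldView ) );
    return float4( n * .5f + .5f, 0.f );
}

// Lighting pass: unpack as before; the light direction must also be supplied
// in view space (e.g. transformed with D3DXVec3TransformNormal by the view matrix).
sampler2D gNormalMap;

float3 unpackViewSpaceNormal( float2 texcoord )
{
    return normalize( tex2D( gNormalMap, texcoord ).xyz * 2.f - 1.f );
}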

Again, I'll recommend learning to use PIX (or whatever graphics debugger works for you), since you could probably diagnose this in a matter of minutes.

Also, your half-pixel offset should be added to your texture coordinates, not subtracted (you have this commented out in your lighting shader's vertex shader). I doubt that's the source of that particular artifact though, since you use the same texture coordinate to sample from all three g-buffer render targets.
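
For example, in the lighting pass's vertex shader (gHalfPixel is an assumed new constant the application sets to (0.5/width, 0.5/height)):

// Half-texel offset for D3D9's pixel/texel misalignment: added, not subtracted.
float2 gHalfPixel;   // set to ( 0.5f / screenWidth, 0.5f / screenHeight )

vsOutput vsDeferredShaderDirectionalLighting( vsInput IN )
{
    vsOutput OUT    = (vsOutput)0;
    OUT.position    = float4( IN.position, 1.f );
    OUT.texcoord    = IN.texcoord + gHalfPixel;
    return OUT;
}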

The surface formats are all four 8-bit channels:


if( FAILED(graphics->device()->CreateTexture( w, h, 1, D3DUSAGE_RENDERTARGET,
        D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &g_colorRT, 0 ) ) )
        return false;

    if( FAILED(graphics->device()->CreateTexture( w, h, 1, D3DUSAGE_RENDERTARGET,
        D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &g_normalRT, 0 ) ) )
        return false;

    if( FAILED(graphics->device()->CreateTexture( w, h, 1, D3DUSAGE_RENDERTARGET,
        D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &g_depthRT, 0 ) ) )
        return false;

    if( FAILED(graphics->device()->CreateTexture( w, h, 1, D3DUSAGE_RENDERTARGET,
        D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &g_lightRT, 0 ) ) )
        return false;

I've also tried four 16-bit channels, D3DFMT_A16B16G16R16, but that same line is still there. Here is the artifact again, carrying over to the lighting shader. The light is on one side of the model and the camera is on the opposite side, pointing back at the model, so the screen should be completely black.

kz6n.png

I've tried using PIX but only managed to get it working once. I'll have another look at it.

That means for depth you're only getting 8 bits (since you're only using the r channel). That's definitely not enough (although I doubt it's the reason for your lighting artifact).

Just use D3DFMT_R32F for the depth buffer for now - it won't cost you any more memory, since you're already using a 32-bit format (and just wasting the B, G and A channels).
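
Adapting the creation code you posted, that's just a format change (your pixel shader already writes depth into the red channel, so nothing else needs to change):

// Single-channel 32-bit float depth target; same memory cost as A8R8G8B8.
if( FAILED(graphics->device()->CreateTexture( w, h, 1, D3DUSAGE_RENDERTARGET,
        D3DFMT_R32F, D3DPOOL_DEFAULT, &g_depthRT, 0 ) ) )
        return false;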

I switched to the 32-bit depth buffer; that did not work. After further investigation I discovered that there's z-fighting (depth fighting) going on: some of the color from the other side of the cube is somehow winning the depth test. I've tried some of the more basic methods for resolving the problem, like adjusting the far and near planes. That did not work. So far only changing the perspective's field of view from D3DX_PI/4.f to D3DX_PI/3.f helps, but that alters the shape of the model a bit too much. Any solutions? I'm going to try enabling back-face culling next and adjusting the model's winding order.

Edit:

Okay, so applying back-face culling was the solution. Thanks for the help, Phil T.
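
For reference, the render state in question is just the following (assuming the default clockwise front-face winding; if the mesh disappears entirely, the winding is flipped and D3DCULL_CW is needed instead):

// Re-enable back-face culling; D3DCULL_CCW (the D3D9 default) rejects
// triangles whose vertices appear counter-clockwise on screen.
dev->SetRenderState( D3DRS_CULLMODE, D3DCULL_CCW );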

Can anyone answer the second problem, about the model not rendering when the camera sits directly above it, along the y-axis?
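
The usual cause of that one is the look-at construction rather than deferred shading: when CameraPosition lies exactly on the y-axis, the view direction passed to D3DXMatrixLookAtLH is parallel to the up vector (0,1,0), the internal cross products collapse, and the resulting view matrix is degenerate, so nothing renders until the camera is nudged off the axis. A minimal guard against the setup code shown earlier (a sketch; the threshold is arbitrary):

// Pick a different up vector when the view direction is (nearly) parallel to it.
D3DXVECTOR3 up( 0.f, 1.f, 0.f );
D3DXVECTOR3 look = D3DXVECTOR3( 0.f, 0.f, 0.f ) - CameraPosition;   // eye -> target
D3DXVec3Normalize( &look, &look );
if( fabsf( D3DXVec3Dot( &look, &up ) ) > 0.999f )
    up = D3DXVECTOR3( 0.f, 0.f, 1.f );
D3DXMatrixLookAtLH( &view, &CameraPosition, &D3DXVECTOR3( 0.f, 0.f, 0.f ), &up );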

This topic is closed to new replies.
