Shadow Mapping general questions

Hey guys, I'm implementing an omni-directional shadow mapping algorithm, but I'm getting some weird artifacts that I can't explain. This may be an annoying post, but any help would be appreciated. Here's the offending code:

How I set up my cameras:

// Up vectors for the cameras
WVector upVecs[] =
{
    { 0.0f, 0.0f, 1.0f },
    { 0.0f, 0.0f, 1.0f },
    { 0.0f, 0.0f, 1.0f },
    { 0.0f, 0.0f, 1.0f },
    { 0.0f, -1.0f, 0.0f }, // Positive Y up vec for our "world up" vector
    { 0.0f, 1.0f, 0.0f }   // Negative Y up vec for our negative "world up" vector
};
WVector forwardVecs[] =
{
    { 1.0f, 0.0f, 0.0f },  // Positive X
    { -1.0f, 0.0f, 0.0f }, // Negative X
    { 0.0f, 1.0f, 0.0f },  // Positive Y
    { 0.0f, -1.0f, 0.0f }, // Negative Y
    { 0.0f, 0.0f, 1.0f },  // Positive Z
    { 0.0f, 0.0f, -1.0f }  // Negative Z
};

mOrthographicCamera = false; // We need a perspective camera

// Go through all our faces.
for( int currentFace = 0; currentFace < numCameras; ++currentFace )
{
    // Get our camera
    WCamera* camera = gCameraManager->GetCameraByHandle( mShadowMapFaces[ currentFace ].mCameraHandle );
    ASSERT( camera );

    // Set our up vectors
    camera->SetUpVector( upVecs[ currentFace ].x, upVecs[ currentFace ].y, upVecs[ currentFace ].z );
    camera->SetDestUpVector( upVecs[ currentFace ].x, upVecs[ currentFace ].y, upVecs[ currentFace ].z );

    // Set our eye position and look-at target
    WVector eye = { mLight->GetPos()->x, mLight->GetPos()->y, mLight->GetPos()->z };
    WVector forward = forwardVecs[ currentFace ];
    WVector target = eye;
    target += forward;
    camera->SetEye( &eye );
    camera->SetDestEye( &eye );
    camera->SetTarget( &target );
    camera->SetDestTarget( &target );

    // Set our viewport size
    camera->SetViewport( 0, 0, mMapSize, mMapSize );

    // Set our field of view to 90 degrees. We want to match each face edge perfectly.
    camera->SetFov( 90.0f );
    camera->SetTargetFov( 90.0f );

    // Hard-coded near/far planes right now. We need a better solution someday.
    camera->SetNearFar( 1.0f, mLight->GetAttenuation()->w );

    // We're not orthographic, and we aren't trying to animate. Though we may want to in the future...
    camera->SetOrthographicView( mOrthographicCamera );
    camera->Pause();
}

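For reference, here's a sketch of the per-face orientations that Direct3D's cube-map convention (faces in +X, -X, +Y, -Y, +Z, -Z order, left-handed) normally expects when rendering each face, which I've been comparing my setup against. I'm assuming the standard D3D convention here, so it may not match our engine's handedness exactly; treat it as something to diff against, not a drop-in fix.

// Reference sketch only: the conventional D3D cube-map face orientations,
// assuming a left-handed setup. Reuses the WVector type from above.
WVector d3dUpVecs[] =
{
    { 0.0f,  1.0f,  0.0f }, // +X face
    { 0.0f,  1.0f,  0.0f }, // -X face
    { 0.0f,  0.0f, -1.0f }, // +Y face
    { 0.0f,  0.0f,  1.0f }, // -Y face
    { 0.0f,  1.0f,  0.0f }, // +Z face
    { 0.0f,  1.0f,  0.0f }  // -Z face
};
WVector d3dForwardVecs[] =
{
    {  1.0f,  0.0f,  0.0f }, // +X face
    { -1.0f,  0.0f,  0.0f }, // -X face
    {  0.0f,  1.0f,  0.0f }, // +Y face
    {  0.0f, -1.0f,  0.0f }, // -Y face
    {  0.0f,  0.0f,  1.0f }, // +Z face
    {  0.0f,  0.0f, -1.0f }  // -Z face
};

If the faces are rendered with up/forward pairs that texCUBE doesn't expect, each face samples from the wrong spot, which is one possible source of per-face artifacts.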

Here's the code I use to output my shadow depth:

VS_OUT VertexShaderModel( VS_IN In )
{
    VS_OUT Out;
    Out.Position = mul( matWVP, In.Pos ); // Transform the vertex position into clip space
    Out.WorldPos = mul( matW, In.Pos );   // Keep the world-space position for the depth metric
    return Out;
}

float4 PixelShaderModel( VS_OUT In ) : COLOR
{
    // Store the pixel's distance from the light, normalized by the light's attenuation range
    return float4( saturate( length( In.WorldPos - PointLtPos[0].xyz ) / PointLtAttenuation[0].w ), 0, 0, 1 );
}

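Since the lookup pass has to use exactly the same metric as this write pass, here's a minimal sketch of how the distance metric could be factored into one helper shared by both shaders. LightDistance is just a hypothetical name; PointLtPos and PointLtAttenuation are the same constants as above.

// Hypothetical shared helper: one place that defines the depth metric,
// so the write pass and the shadow test can't drift apart.
float LightDistance( float3 worldPos, float3 lightPos, float attenuationRange )
{
    // Distance from the light, normalized by the light's attenuation range.
    return saturate( length( worldPos - lightPos ) / attenuationRange );
}

float4 PixelShaderModel( VS_OUT In ) : COLOR
{
    // Same value as before, written to the red channel.
    return float4( LightDistance( In.WorldPos.xyz, PointLtPos[0].xyz, PointLtAttenuation[0].w ), 0, 0, 1 );
}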

And here's the code I use to test whether each pixel is in shadow:

float3 pixelToLight = ShadowLtPos[ shadowMapIndex ].xyz - worldPos; // worldPos is the pixel's position in world space
float pixelDepth = saturate( length( pixelToLight ) / ShadowLtPos[ shadowMapIndex ].w ); // depth of the current pixel, normalized by the light's attenuation range
pixelToLight = normalize( pixelToLight ); // direction from the pixel to the light

// Sample the cubic shadow map using the inverse of the light direction, adding a small shadow bias
float shadowMapDepth = saturate( texCUBE( textureSampler3D, -pixelToLight ).x + 0.01f );

// In shadow when the pixel is farther from the light than the stored depth
return pixelDepth >= shadowMapDepth;

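For reference, here's the same test wrapped in a helper so the bias is explicit. InShadow and its bias parameter are hypothetical names; textureSampler3D and the light position/range packing are the same as above, and 0.01f is just the value I'm currently using, not a recommendation.

// Hypothetical wrapper around the exact comparison above.
float InShadow( float3 worldPos, float4 lightPosAndRange, float bias )
{
    float3 pixelToLight = lightPosAndRange.xyz - worldPos;
    // Normalized distance from the light, matching what the depth pass wrote.
    float pixelDepth = saturate( length( pixelToLight ) / lightPosAndRange.w );
    // Sample the cube map along the direction from the light to the pixel.
    float storedDepth = texCUBE( textureSampler3D, -normalize( pixelToLight ) ).x;
    // 1 when the pixel is farther from the light than the stored depth plus bias (in shadow).
    return pixelDepth >= saturate( storedDepth + bias );
}

Calling InShadow( worldPos, ShadowLtPos[ shadowMapIndex ], 0.01f ) reproduces the expression above.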

I get some interesting artifacts with this; I'll attach images. Do you see any obvious mistakes, or are there things I should consider?
The little triangles are my biggest worry. PIX shows that they are sampling properly; they just seem to consistently get the wrong values. Also, my view frustums do cover those areas and generate proper depth maps for them, so I'm stumped.
Perception is when one imagination clashes with another
