Variance shadow map and z metric

Hi community, I have the following pipeline: deferred shading using linear z, so I want to reuse the linear z texture for the shadow comparison, but it's not working. Here is what I do:

// LINEAR Z TEXTURE
Store the z component in view space, pass it to the pixel shader, and divide by far_plane.

// VARIANCE SHADOW MAP
Render from the light's point of view and store, in a G32R32F target, d = z/far_plane in the red channel and d*d in the green channel.

// Do the usual VSM comparison to obtain the occlusion factor (as in the NVIDIA paper) with the HLSL function below.

Here Texture4 is a samplerCUBE where I render the 6 faces for a point light. Ldir is a normalized vector which points from the view-space fragment position to the view-space light position. I don't know if the view-space coordinates are the cause of all the trouble: when I walk through the scene or rotate the camera, the shadow seems to move! I don't know if it's the metric (z/far_plane, where z is calculated in view space) or the view-space light vector. Here is the function in HLSL:

[c++]
float VSMGetShadowTerm(float z, float3 Ldir)
{
    // Fetch the two moments (E[x] and E[x^2]) stored in the cube map.
    float2 avg = texCUBE(Texture4, Ldir).xy;

    if (z <= avg.x)
        return 1;
    else
    {
        // Chebyshev's inequality gives an upper bound on the probability
        // that a surface at depth z is lit.
        float E_x2 = avg.y;
        float Ex_2 = avg.x * avg.x;
        float variance = E_x2 - Ex_2;
        float mD = z - avg.x; // delta between receiver depth and mean occluder depth
        float mD2 = mD * mD;
        float p = variance / (variance + mD2);
        return p;
    }
}
[/c++]

Thanks in advance for your help.
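For context, the moment-rendering pass described above (writing d = z/far_plane and d*d into a G32R32F target for each cube face) would look something like the following. This is only a sketch of what the post describes; matLightWorldView, matLightProjection, and fFarPlane are assumed names, not from the original code.

[c++]
float4x4 matLightWorldView;   // world * light view for the current cube face (assumed)
float4x4 matLightProjection;  // light projection matrix (assumed)
float    fFarPlane;           // same far-plane value used to normalize z (assumed)

// Vertex shader: transform into the light's view space and pass linear z along.
float4 VS(float4 vPositionOS : POSITION, out float fDepthVS : TEXCOORD0) : POSITION
{
    float4 vPositionLVS = mul(vPositionOS, matLightWorldView); // light view space
    fDepthVS = vPositionLVS.z;                                 // linear z for the light
    return mul(vPositionLVS, matLightProjection);
}

// Pixel shader: store the two moments d and d*d in red and green.
float4 PS(float fDepthVS : TEXCOORD0) : COLOR
{
    float d = fDepthVS / fFarPlane; // normalized linear depth
    return float4(d, d * d, 1.0f, 1.0f);
}
[/c++]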
Deferred rendering + VSM? Sounds a lot like my last renderer. [smile]

For point lights, you don't want to use z/far_plane as your depth metric. The reason is that reconstructing this value requires the view transform that was used when rendering your shadow map, which you won't have during your lighting pass (since you have six of them, one per cube face!). Instead, just store the distance from the light source to the pixel divided by the maximum range of the light. Something like this is fine:

[c++]
// in vertex shader
float3 vPositionVS = mul(vPositionOS, matWorldView).xyz; // position in the light's view space

// in pixel shader
float fDepth = length(vPositionVS) / fLightRange; // distance from light, normalized by range
vOutput = float4(fDepth, fDepth * fDepth, 1.0f, 1.0f);
[/c++]



Then in your lighting shader, all you need to do is make sure you have everything aligned with your cube map faces and it's a piece of cake. Probably the easiest way is to make sure the cube map is aligned in world space, so that the top face points in positive Y, right in positive X, etc. Then querying your shadow map is like this:

[c++]
// pixel shader
float3 vPositionWS = CalcPosFromZ(fViewSpaceZ);
float3 vLightToPosWS = vPositionWS - vLightPosWS;
float fLightDepth = length(vLightToPosWS) / fLightRange; // same metric as the shadow map
vLightToPosWS = normalize(vLightToPosWS);
float fOcclusion = VSMGetShadowTerm(fLightDepth, vLightToPosWS);
[/c++]
Thanks for your help, MJP.

Quote: Original post by MJP
Deferred rendering + VSM? Sounds a lot like my last renderer. [smile]

// --> YEP I AGREE WITH THIS

For point lights, you don't want to use z/far_plane as your depth metric. The reason is that reconstructing this value requires the view transform that was used when rendering your shadow map, which you won't have during your lighting pass (since you have six of them, one per cube face!)



Then in your lighting shader, all you need to do is make sure you have everything aligned with your cube map faces and it's a piece of cake. Probably the easiest way is to make sure the cube map is aligned in world space, so that the top face points in positive Y, right in positive X, etc.

// --> How do you ensure your cubemap is aligned? (Texture matrices?)
// Can you help me with that... thanks again
Quote: Original post by nini
// --> How do you ensure your cubemap is aligned? (Texture matrices?)
// Can you help me with that... thanks again


That's easy: when you construct the view matrix you use for rendering the shadow map to each of the cubemap faces, make that view matrix look in the direction indicated by that cubemap index. Each cubemap face corresponds to a major axis, using the D3DCUBEMAP_FACES enum type. So basically you want to do this for each of your faces:

[c++]
D3DXMATRIX MakeCubemapViewMatrix(D3DCUBEMAP_FACES eCubemapFace, D3DXVECTOR3 vLightPos)
{
    D3DXVECTOR3 vLookAt(0, 0, 0);
    D3DXVECTOR3 vUp(0, 0, 0);

    switch (eCubemapFace)
    {
        case D3DCUBEMAP_FACE_POSITIVE_X:
            vLookAt.x = 1.0f;
            vUp.y = 1.0f;
            break;
        case D3DCUBEMAP_FACE_NEGATIVE_X:
            vLookAt.x = -1.0f;
            vUp.y = 1.0f;
            break;
        case D3DCUBEMAP_FACE_POSITIVE_Y:
            vLookAt.y = 1.0f;
            vUp.z = -1.0f;
            break;
        case D3DCUBEMAP_FACE_NEGATIVE_Y:
            vLookAt.y = -1.0f;
            vUp.z = 1.0f;
            break;
        case D3DCUBEMAP_FACE_POSITIVE_Z:
            vLookAt.z = 1.0f;
            vUp.y = 1.0f;
            break;
        case D3DCUBEMAP_FACE_NEGATIVE_Z:
            vLookAt.z = -1.0f;
            vUp.y = 1.0f;
            break;
    }

    vLookAt += vLightPos;

    D3DXMATRIX matView;
    D3DXMatrixLookAtLH(&matView, &vLightPos, &vLookAt, &vUp);
    return matView;
}
[/c++]


OK, your help is very valuable to me, but it still doesn't work; I was already calculating the cubemap views like you do...

I think my problem is in the reconstruction part, because I reconstruct the view-space position, not the world-space position, from the distance buffer...

In fact I store linear z (i.e. the view-space z component divided by far_plane) in the distance buffer.
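That storage step presumably amounts to something like this in the G-buffer pixel shader (a sketch; vPositionVS is an assumed name for the interpolated view-space position):

[c++]
// G-buffer pass sketch: store normalized linear depth.
float fLinearZ = vPositionVS.z / far_plane; // linear z in [0, 1]
[/c++]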

And when doing the lighting pass, in the VS:

[c++]
Out.ScreenDir = mul(float4(Out.Pos.xy * far_plane, far_plane, far_plane),
                    matProjectionInverse);
[/c++]

where Out.Pos.xy are the clip-space vertex coordinates in (-1, 1).

In the PS: fetch linear z from the texture, then ViewSpacePos = ScreenDir * z.
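Put together, that pixel-shader reconstruction would read roughly like this (a sketch of what the post describes; DepthSampler and the interpolant names are assumptions, not from the original code):

[c++]
sampler DepthSampler; // linear-z texture (assumed name)

// Scale the interpolated far-plane ray by the stored normalized linear
// depth to recover the view-space position.
float fLinearZ = tex2D(DepthSampler, In.TexCoord).r; // z_view / far_plane
float3 vPositionVS = In.ScreenDir.xyz * fLinearZ;
[/c++]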

My question is:

in order to get world space, do I multiply ScreenDir by matViewInverse and compute
WorldSpacePos = ScreenDir * z + vViewPosition ?

Thanks anyway for your help...
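For what it's worth, the step being asked about is consistent as long as the ray is first rotated into world space: transforming the reconstructed view-space position by the full inverse view matrix is the same thing, because the translation row of matViewInverse is the camera's world-space position. A minimal sketch, with matViewInverse and vCameraPosWS as assumed names:

[c++]
float4x4 matViewInverse; // inverse of the camera view matrix (assumed name)
float3   vCameraPosWS;   // camera position in world space (assumed name)

// Full inverse-view transform of the reconstructed view-space position...
float3 vPositionWS = mul(float4(vPositionVS, 1.0f), matViewInverse).xyz;

// ...which is equivalent to rotating the scaled ray into world space and
// then adding the camera's world-space position:
float3 vPositionWS2 = mul(vPositionVS, (float3x3)matViewInverse) + vCameraPosWS;
[/c++]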

