
D24S8 depth sampling wrong


wswqwps    122
Hi, I wrote a simple sample. It goes like this:

1. Create a render-target texture and a D24S8 depth/stencil texture, get a surface from each of them, and set them as the render target and depth/stencil buffer.

2. Write a depth value of 0.7 onto the depth surface with a shader. The pixel shader is:
void ps_main( 
float4 incolor : COLOR0,
out float4 outcolor : COLOR0,
out float outdepth: DEPTH  )
{
outcolor = incolor;
outdepth = 0.7f;
}
It works fine.

3. Sample from the two textures and render to another surface. The pixel shader is:
texture ColorTex;
sampler sam0 = sampler_state
{
    Texture = <ColorTex>;
};

texture DepthTex;
sampler sam1 = sampler_state
{
    Texture = <DepthTex>;
};

void ps_main( 
float2 intex0 : TEXCOORD0,
float2 intex1 : TEXCOORD1,
out float4 outcolor : COLOR0,
out float outdepth: DEPTH  )
{
outcolor = tex2D( sam0, intex0 );
outdepth = tex2D( sam1, intex1 ).r; // this should read back 0.7
}

In PIX I checked the two textures and they look right, but when I debug the HLSL code I find that tex2D() on the depth texture returns (0, 0, 0, 0). The same code works fine on Xbox 360. Is this caused by the D24S8 format? Would D24FS8 work? I would like to know how PC hardware uses tex2D() to sample and convert the depth value.

Namethatnobodyelsetook
D3D9 doesn't support sampling the depth values of regular depth surface formats, although nVidia has a hack for shadowing, which has worked since GeForce3. When sampling a depth texture, it compares the depth value in the texture against the depth value in a projected texture read, and returns a PCF filtered result of whether you're in shadow or not.

I believe that both nVidia and AMD/ATI have custom FOURCC formats available which work as depth formats, and which can later be read for their depth values. I've never tried them though.

There is some discussion around about the nVidia RAWZ/INTZ formats and the AMD/ATI DF16/DF24 formats.
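As a rough illustration of the shader side of this (the texture and sampler names here are made up, and it assumes the depth texture was created on the application side with the 'INTZ' FOURCC, i.e. (D3DFORMAT)MAKEFOURCC('I','N','T','Z') with D3DUSAGE_DEPTHSTENCIL, after checking support with CheckDeviceFormat, and later bound with SetTexture like any normal texture):

texture SceneDepthTex;          // created with the INTZ FOURCC on the application side
sampler DepthSampler = sampler_state
{
    Texture = <SceneDepthTex>;
};

float4 ps_main( float2 intex : TEXCOORD0 ) : COLOR0
{
    // INTZ returns the normalized 0..1 depth in the red channel when sampled
    // with an ordinary tex2D, instead of performing a shadow compare.
    float depth = tex2D( DepthSampler, intex ).r;
    return float4( depth, depth, depth, 1.0f );
}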

jpventoso    178
Quote:
Original post by Namethatnobodyelsetook
nVidia has a hack for shadowing, which has worked since GeForce3. When sampling a depth texture, it compares the depth value in the texture against the depth value in a projected texture read, and returns a PCF filtered result of whether you're in shadow or not


I've just found out that ATI supports it too, at least on a Radeon 2400+...

wswqwps    122
Quote:
Original post by Namethatnobodyelsetook
When sampling a depth texture, it compares the depth value in the texture against the depth value in a projected texture read, and returns a PCF filtered result of whether you're in shadow or not.


Do you mean the projected texture is the one sampled with tex2Dproj()? And what does it compare against, the value in the current depth buffer? I'd like to know more details, thanks a lot.

Namethatnobodyelsetook
You create a texture of a depth format, typically with no mips.
You render your scene from the light's point of view to that texture, saving the view/proj matrix you're using for later.

When rendering your scene normally, you perform a lookup into the depth texture you created above. To sample it, you take your position transformed by the world matrix and by a modified version of your light's view/proj matrix, and send the result to your pixel shader as a 4D texcoord (use COUNT4|PROJECTED in the fixed pipeline).

Sampling the texture won't actually return the value stored in the depth texture; it returns a "shadow value" based on comparing the texcoord's z and/or w with what's stored in the texture. If the values are equal, it returns 1, since you're rendering the pixel that was front-most from the light's view. If the value in z/w is greater than what's in the texture, this pixel is behind another one from the light's point of view, so it returns 0. Typically you'll get 0 or 1, but it does have some filtering and will return some in-between gray values at shadow edges.

When sampling the texture, you'll typically need some sort of bias built into the light's projection matrix to avoid artifacts. You'll also want to multiply your light's view/proj matrix by a matrix that scales XY from the usual (-1, 1) range to (0, 1)... and I think the Y gets inverted.
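For concreteness, here's a minimal HLSL sketch of that lookup. The names (ShadowMap, ShadowSampler, LightWorldViewProjTex, the entry points) are made up for illustration, and it assumes the scale/bias and Y flip described above have already been folded into LightWorldViewProjTex on the application side; it only behaves this way on hardware that supports the depth-compare sampling hack.

float4x4 WorldViewProj;         // regular camera transform
float4x4 LightWorldViewProjTex; // world * light view/proj * scale-bias matrix

texture ShadowMap;              // depth texture rendered from the light's point of view
sampler ShadowSampler = sampler_state
{
    Texture = <ShadowMap>;
};

void vs_main( float4 pos : POSITION,
              out float4 outpos      : POSITION,
              out float4 shadowcoord : TEXCOORD0 )
{
    outpos = mul( pos, WorldViewProj );
    // 4D texcoord: xy/w give the shadow map uv, z/w is the depth to compare against.
    shadowcoord = mul( pos, LightWorldViewProjTex );
}

void ps_main( float4 shadowcoord : TEXCOORD0,
              out float4 outcolor : COLOR0 )
{
    // With the depth-compare hack this does NOT return the stored depth value;
    // it returns a PCF-filtered 0..1 "lit or in shadow" factor from comparing
    // shadowcoord.z/w against the depth stored in the texture.
    float shadow = tex2Dproj( ShadowSampler, shadowcoord ).r;
    outcolor = shadow.xxxx;
}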

