maxest

DX clears depthstencil buffer behind my back in Debug-mode


That's really strange. Look at this code:
CRenderer::setRenderTarget(&sceneRenderTarget);
CRenderer::setDepthStencilSurface(&sceneDepthStencilSurface); // scene depth-stencil

CRenderer::clear(true, true, false, CVector3(0.0f, 0.0f, 0.0f));

CRenderer::setDepthStencilSurface(&someDifferentDepthStencilSurface);
CRenderer::setDepthStencilSurface(&sceneDepthStencilSurface); // again scene depth-stencil
After the last two lines above, I simply get a black screen once anything is rendered. But when I do:
...

CRenderer::setDepthStencilSurface(&someDifferentDepthStencilSurface);
CRenderer::setDepthStencilSurface(&sceneDepthStencilSurface); // again scene depth-stencil
CRenderer::clear(true, true, false, CVector3(0.0f, 0.0f, 0.0f));
It's all okay. So it looks like sceneDepthStencilSurface ends up with rubbish data. That's annoying because I need to "switch for a moment" to another depth-stencil surface (and render target) to do some rendering and then get back to the previous depth buffer. Moreover, if I use the default back buffer's depth-stencil instead of the additional sceneDepthStencilSurface, it's all fine. And this only happens in Debug mode, not Retail. Is this some driver bug or what? Can I do this and be sure that it will work on any machine with the retail DX runtime?
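For clarity, here's roughly what those wrapper calls boil down to in raw D3D9 (just a sketch, assuming CRenderer simply forwards to the device; all surface pointers assumed valid, error checking omitted):

#include <d3d9.h>

// Rough raw-D3D9 equivalent of the wrapper calls above.
void SetupSceneTargets(IDirect3DDevice9 *device,
                       IDirect3DSurface9 *sceneRT,
                       IDirect3DSurface9 *sceneDS,
                       IDirect3DSurface9 *otherDS)
{
	device->SetRenderTarget(0, sceneRT);
	device->SetDepthStencilSurface(sceneDS);   // scene depth-stencil

	device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
	              D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

	device->SetDepthStencilSurface(otherDS);   // switch away for a moment
	device->SetDepthStencilSurface(sceneDS);   // back to the scene depth-stencil
	// ...after this point everything renders black in Debug mode
}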

I imagine this is similar to how the debug runtimes clear the backbuffer when you use SWAPEFFECT_DISCARD. This is to help point out that expecting the old data to remain is an error.

When you create a depth buffer, there is a flag that says "I don't need the data after switching to another surface". This is the "Discard" flag passed to CreateDepthStencilSurface. The device creation flag D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL allows you to specify the same thing for the default Z surface.

It sounds like you've told D3D that the surface may be discarded, and the debug runtimes make sure you're behaving correctly by forcing a clear. In retail mode you may get lucky and the data is fine, but you can't really rely on that, as it may change with an update to D3D, or if another D3D app is running on the same PC (which I suppose could include any Aero Glass desktop).
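For example, a minimal sketch of both knobs (the format and sizes below are just placeholders):

#include <d3d9.h>

// The sixth parameter of CreateDepthStencilSurface is the Discard flag.
// FALSE tells D3D the contents must survive switching to another
// depth-stencil surface; TRUE allows the runtime/driver to throw them away.
IDirect3DSurface9 *CreatePersistentDepthSurface(IDirect3DDevice9 *device,
                                                UINT width, UINT height)
{
	IDirect3DSurface9 *surface = NULL;
	device->CreateDepthStencilSurface(width, height, D3DFMT_D24S8,
	                                  D3DMULTISAMPLE_NONE, 0,
	                                  FALSE,   // Discard = FALSE: keep the data
	                                  &surface, NULL);
	return surface;
}

// For the default Z surface the equivalent is the present-parameters flag:
//   d3dpp.Flags &= ~D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;   // keep depth data
//   d3dpp.Flags |=  D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;   // allow discarding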

I'd like to point out additionally that this is Debug DX doing exactly what it's supposed to -- making it incredibly clear that you are doing something wrong.

Thanks for this thorough explanation, both of you :) It's okay now.
However, there's still an issue with that. I create my depth-stencil surfaces in such a way that I first create a texture with one mip-map and then use GetSurfaceLevel to access it and use it as the depth-stencil buffer. The reason I want this texture object is because I use hardware shadow mapping. So now my code looks like this:

if (format == dssfShadowMap)
{
	// Depth texture for hardware shadow mapping: one mip level,
	// D3DUSAGE_DEPTHSTENCIL, then grab its top-level surface.
	CRenderer::D3DDevice->CreateTexture(width, height, 1, D3DUSAGE_DEPTHSTENCIL, D3DFMT_D24X8, D3DPOOL_DEFAULT, &texture, NULL);
	texture->GetSurfaceLevel(0, &surface);
}
else
{
	// Plain depth-stencil surface; the sixth parameter is the Discard flag.
	CRenderer::D3DDevice->CreateDepthStencilSurface(width, height, D3DFORMAT(format), D3DMULTISAMPLE_NONE, 0, FALSE, &surface, NULL);
}

But what if I want the shadow map not to be discarded? I don't know where to specify this "discard" flag in that case.


This may actually be a separate issue. Creating a depth texture is actually not legal in D3D; nVidia supports it to enable its shadowing technique. I'm not sure if ATI cards handle it too these days, but they didn't for the longest time. It's an nVidia-specific extension as far as I know. ATI has their own vendor-specific way of rendering to a depth texture, using custom FOURCC format codes.

The nVidia technique has been plagued by a bug for a long time (7 or 8 years). When using the debug runtimes, if your depth texture is larger than the original device depth surface, lookups will fail randomly. Whether it fails seems to depend on something at surface creation time, as a Reset may change its behaviour. It always seems to work fine with the retail runtimes.

nVidia has never acknowledged the issue, as far as I've seen. Microsoft has refused to look into it for me unless I could prove it's not a vendor driver bug. They suggested I show them a repro case on an ATI card, which is, um, difficult for an nVidia-specific extension. The fact that the bug depends on a specific Microsoft runtime doesn't seem to sway their interest in looking into it. Perhaps MS is still a bit annoyed with how the original Xbox chipset deal turned out.

Anyway, perhaps THIS is your issue. If so, remember, you're using an nVidia specific technique. You'll need a second code path to handle things the ATI way.
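If it helps, you can at least ask the runtime up front which path a given card supports. Something like this sketch (it assumes an X8R8G8B8 adapter format and DF24 as the ATI FOURCC depth format):

#include <d3d9.h>

// Query whether the driver exposes nVidia-style depth textures (D24X8 with
// D3DUSAGE_DEPTHSTENCIL on a texture) or ATI's FOURCC depth format (DF24).
bool SupportsNvDepthTexture(IDirect3D9 *d3d)
{
	return SUCCEEDED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
	                                        D3DFMT_X8R8G8B8,
	                                        D3DUSAGE_DEPTHSTENCIL,
	                                        D3DRTYPE_TEXTURE,
	                                        D3DFMT_D24X8));
}

bool SupportsAtiDF24(IDirect3D9 *d3d)
{
	const D3DFORMAT DF24 = (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4');
	return SUCCEEDED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
	                                        D3DFMT_X8R8G8B8,
	                                        D3DUSAGE_DEPTHSTENCIL,
	                                        D3DRTYPE_TEXTURE,
	                                        DF24));
}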

So now I know why my shadow mapping works randomly in Debug-mode. I do something like:

render to shadow map
render something else with a different depth-stencil
render something with shadow map

And this worked well on the four GeForces I've tested (GF6150M, GF6600, GF6800, GF8400M), so I suppose it's going to work. I'll also need to check some ATIs, and if I find a Shader Model 3.0 GPU where this hardware shadow mapping doesn't work, I'll implement the additional path (probably using Fetch4).
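In raw D3D9 terms the sequence is roughly this (just a sketch of what my wrapper does; the dummy colour target and all other resources are assumed to exist already, error checking omitted):

#include <d3d9.h>

// Pass order described above.
void RenderFrame(IDirect3DDevice9 *device,
                 IDirect3DSurface9 *shadowMapSurface,   // GetSurfaceLevel(0) of the depth texture
                 IDirect3DTexture9 *shadowMapTexture,
                 IDirect3DSurface9 *dummyColorRT,        // colour RT matching the shadow map size
                 IDirect3DSurface9 *sceneRT,
                 IDirect3DSurface9 *sceneDS,
                 IDirect3DSurface9 *otherDS)
{
	// 1. render to the shadow map (depth only)
	device->SetRenderTarget(0, dummyColorRT);
	device->SetDepthStencilSurface(shadowMapSurface);
	device->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
	// ...draw shadow casters...

	// 2. render something else with a different depth-stencil
	device->SetRenderTarget(0, sceneRT);
	device->SetDepthStencilSurface(otherDS);
	// ...draw...

	// 3. render the scene, sampling the shadow map as a texture
	device->SetDepthStencilSurface(sceneDS);
	device->SetTexture(0, shadowMapTexture);   // hardware PCF lookup on GeForce
	// ...draw shadow receivers...
}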
Thanks again

I've checked an ATI Radeon HD 4670, and both OpenGL and Direct3D9 correctly do the shadow mapping along with bilinear percentage-closer filtering. I'm really curious from which ATI GPU onwards this feature is available. Anyone?

