Depth shadow mapping problem

25 comments, last by DaJudge 20 years, 6 months ago
Hi all, I am still working on my depth shadow mapping test app and I think I'm almost there. I have the scene rendered from the perspective of the light source onto a texture, and when I do projective texture mapping with the backbuffer (the render target of the light pass) it looks pretty much the way it's supposed to: all objects get projected onto other objects the way you'd expect their shadows to fall.

Now my simple problem is that when I want to use the z-buffer instead of the backbuffer (as done in the Hardware Shadowmapping demo that comes with the NVSDK), it seems like all texture lookups into the z-buffer texture return black as the color value. The result is that I end up with a completely black scene being drawn. Am I missing something? Maybe one of you can help me with that...

Thanks, Alex
Alexander Stockinger
Programmer
What API are you using? In any case, make sure to use a depth texture as a render target, not an RGB one. If you try to directly visualize the contents of a depth texture as RGB, then the result can very well be all black. That depends on the z value range in the texture, which gets remapped from 24 bits to 8 bits - quite a dynamic range mismatch. 65536 depth values will actually all map to a single colour: black.

But depth textures aren't meant to be directly rendered anyway. Just add the depth compare operation, and check the results of the shadowing.
Hi,

quote:
What API are you using ?


DirectX 8

quote:
In any case, make sure to use a depth-texture as a render target, not an RGB one.


Hm, I can't see how I'd get that working. To use a texture as a render target you have to specify D3DUSAGE_RENDERTARGET, but for a depth buffer you have to specify D3DUSAGE_DEPTHSTENCIL. As far as I can tell, specifying both flags always results in an error saying "Invalid format specified for texture". So does the attempt to create a texture with only D3DUSAGE_RENDERTARGET and a depth format.

I just don't get this right. I DO have a completely working sample in the NVSDK. However, contrary to what you said, it DOES set an RGB texture as render target and a depth texture as z-buffer. It then selects the depth texture into texture stage one and does projective texture mapping with that, plus a little fragment program for combining it with lighting information from the vertex shader.

As far as I can see I am doing exactly the same (with a few code-structural differences, of course) but only get a black scene. So there HAS to be some difference I just can't find.


Thanks in advance,
Alex
Alexander Stockinger
Programmer
Oh well, I can't help you very much with D3D. AFAIK, D3D8 was very limited when it came to advanced effects such as shadow maps. You might want to switch over to D3D9. But I could also be wrong - I've never used D3D.

Fact is, from a hardware point of view, any more or less modern GPU (GF3+) can select pretty much any combination of colour/depth/stencil planes as a render target. The API has to support that functionality, though. In OpenGL, you can simply select the planes you need, and you will get the appropriate pbuffer. For shadowmapping, you don't need an RGB part, nor do you need a stencil part (although some hardware might enforce one, or add 8-bit padding). A z-texture is totally sufficient; everything else is wasted memory.

Now, if D3D forces you to allocate an RGB part in addition to the depth component, then a colour readback will obviously return the RGB part of that texture. Basically, the GPU needs to know which component layer it has to bind to a texture sampler. For shadowmapping, it is the z-layer. In OpenGL, that's simply a GL_DEPTH_COMPONENT16/24/32 internal format. In D3D, I don't know, sorry.
It HAS to work with DX8. The working sample is in DX8 as well...

Back on topic: let me just explain the render target / z-buffer mechanism in DX for clarity.

When you want to render to a texture you create a texture with a D3DUSAGE_RENDERTARGET flag. If you additionally want a Depth buffer you create another texture with a D3DUSAGE_DEPTHSTENCIL flag. Then you select both by basically calling

device->SetRenderTarget(rgb_surface, zbuffer_surface); 


For verification I then select the rgb_texture into texture stage 0 and render it using projective texture mapping. That works perfectly well. All geometry gets projected as desired, so the transformation / texture coordinate generation part apparently works well. When I then select the depth texture into texture stage 0 all I get is a black image.

Any ideas?

Thanks,
Alex
Alexander Stockinger
Programmer
quote:Original post by DaJudge
For verification I then select the rgb_texture into texture stage 0 and render it using projective texture mapping. That works perfectly well. All geometry gets projected as desired, so the transformation / texture coordinate generation part apparently works well. When I then select the depth texture into texture stage 0 all I get is a black image.

That's perfectly normal from the point of view of the GPU, as I explained in my post above. A depth component texture has no colour, it has a depth value. If the GPU binds a depth texture to a texture unit, then there is no colour defined. Most GPUs will perform a rough range remapping, for convenience, but you're likely to get black over a large range of depth values.

The shadowmapping algorithm requires you to do a compare between the depth component of the texture and the third texture coordinate. The result of that comparison defines a colour, but not the depth component on its own.

Now, there are two possibilities to your problem. Either D3D requires you to manually activate the comparison operation, as in OpenGL. In that case, do so, otherwise you'll get a black image.

Or, D3D automatically enables the comparison as soon as a depth texture is bound. In that case, there is most probably an error in the way the third texture coordinate is computed, i.e. the shadow comparison fails.

Consult your D3D reference.

quote:
device->SetRenderTarget(rgb_surface, zbuffer_surface);

You mean, D3D8 requires you to define an RGB buffer, even if not needed? Ouch.

Can't you simply say: device->SetRenderTarget(NULL, zbuffer_surface) ? The shadowmapping algorithm doesn't need an RGB surface, so basically, you're wasting a lot of memory there.


[edited by - Yann L on October 14, 2003 11:55:16 AM]
No, I don't think you can set a null render target. I believe in DX9 one can set the render target and z-buffer independently, but I still don't think the render target can be null. That's not really such an evil thing, though, since you can disable color writes and reuse that texture somewhere else (I intend to do fullscreen post-render effects anyway...).
So I will just look around in the sample source code for a single line setting some render state, but I don't think I'll find anything.

Thanks for the moment,
Alex
Alexander Stockinger
Programmer
OK, when you create your shadow map texture, you use CreateTexture() for your depth surface (as well as your RGB surface of course). At this point you will have an RGBA texture and a Depth texture. Do your rendering into the shadow map as normal. When you want to use the shadow map, set the DEPTH texture to the texture stage, not the RGB texture.

//Setup: in D3D8, CreateTexture takes the usage flag before the format and
//returns the texture through its last parameter (w/h = shadow map size):
m_pDev->CreateTexture(w, h, 1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &m_pRGBA);
m_pDev->CreateTexture(w, h, 1, D3DUSAGE_DEPTHSTENCIL, D3DFMT_D24X8, D3DPOOL_DEFAULT, &m_pDepth);

//SetRenderTarget takes surfaces, so grab level 0 of each texture:
m_pRGBA->GetSurfaceLevel(0, &m_pRGBASurf);
m_pDepth->GetSurfaceLevel(0, &m_pDepthSurf);

//Render ShadowMap:
m_pDev->SetRenderTarget(m_pRGBASurf, m_pDepthSurf);
//...Render Here...

//Use ShadowMap:
m_pDev->SetTexture(dwStage, m_pDepth);

Then in the shader, assuming you have your texture matrix set up correctly, the comparison is done automatically, giving you a value from 0 to 1 in all channels.

Hope this clears it up a bit.


[edited by - blue_knight on October 14, 2003 12:55:41 PM]
That is exactly what I am doing. For verification I tried to render the depth texture as a fullscreen quad which should give me, as far as I understand it, a greyscale image showing the depth values of the pixels in the scene.
Well, it doesn't. The z-buffer DOES work during the rendering process (all intersecting / non-z-ordered geometry ends up correct), but it doesn't seem to me that the z-buffer is correctly used as a texture. Things start with a B/W image that appears to be unwritten memory (more or less random rectangular noise areas etc.) and after a couple of seconds (completely random) it ends up as a white image.

I just can't figure out what I am doing wrong.


Thanks for your help so far, and I hope someone can give me a hint how to get this thing right.

Thanks,
Alex
Alexander Stockinger
Programmer
Hmm, well, my opinion on this might not really be relevant, as this is all very D3D specific. So I can only speak from a hardware point of view. Static pixel noise generally means that either the memory was not included in the primary pbuffer target surface, or that the buffer you bound was never assigned to a valid render target before. The former is not probable, as you say your faces are correctly sorted when rendering the RGB portion. So it's probably the latter.

quote:
For verification I tried to render the depth texture as a fullscreen quad which should give me, as far as I understand it, a greyscale image showing the depth values of the pixels in the scene.

Under OpenGL, yes. But not under Direct3D. I just checked, and D3D will automatically enable the depth compare as soon as a depth target is bound to the texture unit. Therefore, you'll never get a greyscale image, but the comparison result between the depth texture and the third texture coordinate. When rendering a fullscreen quad, the result will be pretty much meaningless.

There surely is a way in D3D to copy the depth component of a depth texture into the RGB components, so that you could easily check it. It would also help if you posted your code. People more used to D3D might be able to quickly spot an error.

This topic is closed to new replies.
