Shadow Mapping (DepthStencilSurface/RenderTarget)

Started by
12 comments, last by B_old 18 years, 1 month ago
Hello, I want to implement shadow mapping and stumbled over some general questions I don't understand. I took a look at the demo in the DirectX SDK. They create a D3DFMT_R32F texture and a depth stencil surface (with CreateDepthStencilSurface()) of the same size. Obviously the demo works.

1. Instead of CreateDepthStencilSurface() I tried CreateTexture(D3DUSAGE_DEPTHSTENCIL) and then GetSurfaceLevel(). That works too. Is it the _same_?

2. (and more important) I have not understood why I need both a color surface and a depth stencil surface. Somewhere I read about a program that uses D3DFMT_D24S8 if supported by the hardware, or D3DFMT_R32F otherwise. Why does the SDK demo use both?

I hope I could explain my problem. It would be cool if someone could explain this to me. Thanks for reading anyway.
I forget the details, but I'm pretty sure that only Nvidia hardware supports DSTs (Depth Stencil Textures), and they use them as a form of shadow-mapping optimization (not sure, but it might be related to PCF). I'm pretty sure ATI doesn't/can't have this due to Nvidia holding the patent on it.

This would explain why the SDK does it both ways - use Nvidia's proprietary extension if available, but have a "lowest common denominator" path for older or non-NV chipsets.

Quote:Original post by B_old
1. Instead of CreateDepthStencilSurface() I tried CreateTexture(D3DUSAGE_DEPTHSTENCIL) and then GetSurfaceLevel(). That works too.
Is it the _same_ ?
It should be (refer to my previous comment), but only if IDirect3D9::CheckDeviceFormat() states it's okay.

Quote:Original post by B_old
2. (and more important)
I have not understood why I need both a colorsurface and a depthstencilsurface?
After you've constructed the shadow map you need to feed it in as an input to the next stage. That is, you need the depth information in a form that a pixel shader can sample. Thus you need the R32F texture. However, in constructing the original shadow map you still need depth comparison/culling, so you need a regular DSS to make sure only the correct (nearest) depths get written to the texture.

hth
Jack

<hr align="left" width="25%" />
Jack Hoxley <small>[</small><small> Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]</small>

OK.

I got the impression that I could use a DST instead of a R32F, or did I misunderstand you?


EDIT:
Well, I cannot create a render target with the D3DFMT_D24S8 format.
Guess I haven't understood it yet.
Quote:Original post by B_old
I got the impression that I could use a DST instead of a R32F, or did I misunderstand you?
If the hardware supports it, then yes you probably can. It's a subtle difference - but a DST is a texture, whereas normal depth buffers are just surfaces. In shadow mapping this is important - you can't feed a surface into a pixel shader.

Jack

If you use D3DFMT_R32F as the shadow map you always need an additional depth stencil surface. The SDK only shows this "default" way of shadow mapping.

With nVidia hardware (since GeForce 3) you can create a texture with a depth stencil format and use a surface of this texture as the depth stencil buffer. In this case you don't need an additional texture with D3DFMT_R32F. You render only to the depth stencil buffer, and afterwards you can use the texture as the shadow map. But it works a little bit differently; you should take a look at the nVidia programming guide.

Newer ATI hardware supports another way of shadow mapping. There are two FourCC formats that can be used as both a texture and a depth stencil surface. But you have to do the depth compare yourself in the shader. Some chips support a feature called Fetch4, which lets you read four values at once. You can find details about this in the ATI SDK.
Yes I think I understand the texture/surface issue.
Now I only need to find out, how to create a DST.

Thanks for the info so far!

EDIT:
Demirug: I did not see your post. Sounds very interesting, I should give it a try.
What exactly is written to the color buffer anyway? Suppose the rendertarget is RGBA. I don't see any flag that gets set that says encode depth in the color buffer.
Quote:Original post by Anonymous Poster
What exactly is written to the color buffer anyway? Suppose the rendertarget is RGBA. I don't see any flag that gets set that says encode depth in the color buffer.


You will normally use a one-channel format (like R32F) and write the depth into this channel. To do this you have to forward the Z value from the vertex shader to the pixel shader.

It is possible to encode the depth value in a normal RGBA 8-bit texture, but this requires some additional work in the pixel shader. I have seen this in Splinter Cell 3 in the 1.1 shader path.


It's possible to use a shadow map with an XRGBA format; I've seen it in Paul's Projects demo, which uses DX8. I can copy the code and it works, but I have no clue what is in that texture. It's a render target, so you can't save the texture to disk; it'll return D3DERR_INVALIDCALL.
I have seen Paul's Projects demo too.
Doesn't he say on his page that the demo requires render-to-depth-texture support? But the demo uses two textures anyway.

"Note that you must create a corresponding color surface to go along with your
depth surface since Direct3D requires you to set a color surface / z surface pair
when doing a SetRenderTarget()."

This quote is from an nVidia paper.
So how is it possible to use only a depth stencil texture?
I haven't found anything about that yet.

