Cubic shadow mapping mess - Direct3D9

7 comments, last by Woodchuck 18 years, 9 months ago
Hello. I'm trying to implement cubic shadow mapping using the standard method, but I'm having trouble figuring out how to do some basic things. It would be nice if someone could help me with this and answer my questions. I want cubic shadow mapping (e.g. using a 6 x 512 x 512 texture), so I need to do the following:

1. Create one 6 x 512 x 512 cube map render target of format R32F and store depth values in it. But please tell me why most of the demos allocate A8R8G8B8 textures for depth storage instead? They usually store depth as float4(depth, depth, depth, 1) from within the pixel shader. It looks like a huge waste to me. Is there some good reason to do so?

2. Create one 512 x 512 A8R8G8B8 color buffer as a render target. This step confuses me a lot. I've seen a few demos on the internet and they always allocate this buffer, but some of them never render to it - the explanation I've found is that Direct3D requires a color buffer to be bound while rendering (i.e. during the depth pass). Is this true?

IN LOOP:

3. Render the scene from the light's perspective:
- set the 512 x 512 color buffer (needed by Direct3D??)
- disable color writing
- for each cube side:
  - set the light's world-view matrix
  - set the 512 x 512 depth/stencil buffer to the appropriate surface of the previously allocated cube map
  - render the geometry

Is this more or less the correct way?

4. Render the scene normally, with shadows. Here comes the big question: I've heard that cube shadow maps are not supported by any hardware, so do I need to perform the depth comparison myself in the pixel shader?

    if (shadow_texel_depth > pixel_depth)
        draw the pixel with lighting
    else
        don't draw the pixel (possibly kill the fragment using pixel shader 3.0?)

I imagine that when using a single depth texture (no cubic shadow mapping) the depth comparison would be done automatically? And please explain one more thing: if I didn't use cubic shadow mapping - e.g. a spot light can be done with a single depth texture - which Direct3D API call tells Direct3D to automatically test depth for each processed pixel?

Thanks very much to whoever can help!
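To make steps 1-3 concrete, here's roughly what I have in mind (an untested sketch, and error checking is omitted; SetLightViewProjForFace and RenderSceneDepthOnly are just placeholders for my own matrix setup and draw code). I'm rendering the depth into the cube face as the colour target, which may be where I've got it wrong:

#include <d3d9.h>

// Step 1: the R32F cube map the depth gets rendered into.
IDirect3DCubeTexture9* CreateShadowCube(IDirect3DDevice9* device)
{
    IDirect3DCubeTexture9* cube = NULL;
    device->CreateCubeTexture(512, 1, D3DUSAGE_RENDERTARGET, D3DFMT_R32F,
                              D3DPOOL_DEFAULT, &cube, NULL);
    return cube;
}

// Placeholders for my own code.
void SetLightViewProjForFace(IDirect3DDevice9* device, DWORD face);
void RenderSceneDepthOnly(IDirect3DDevice9* device);

// Step 3: render the light-space depth into each face of the cube map.
void RenderShadowCube(IDirect3DDevice9* device,
                      IDirect3DCubeTexture9* shadowCube,
                      IDirect3DSurface9* sharedDepthStencil)   // 512 x 512, reused for all faces
{
    for (DWORD face = 0; face < 6; ++face)
    {
        IDirect3DSurface9* faceSurface = NULL;
        shadowCube->GetCubeMapSurface((D3DCUBEMAP_FACES)face, 0, &faceSurface);

        // The cube face itself is the colour target, so the pixel shader writes
        // depth into it; the depth/stencil surface is only for z-testing.
        device->SetRenderTarget(0, faceSurface);
        device->SetDepthStencilSurface(sharedDepthStencil);
        device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                      D3DCOLOR_RGBA(255, 255, 255, 255), 1.0f, 0);

        SetLightViewProjForFace(device, face);   // 90-degree FOV view for this face
        RenderSceneDepthOnly(device);

        faceSurface->Release();
    }
}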
Ummm...No one has experience with cubic shadow mapping?
I've been doing a lot of searching and googling, but it seems there are in fact not many resources on this...

If you know of any paper that covers the technical details of cubic shadow mapping, please let me know.

Thanks guys.
Here is a demo I found (with code) that uses an A8R8G8B8 cube map with the depth stored in alpha, since there isn't hardware support for cube depth maps (i.e. D24S8, etc.):
http://www.sjbrown.co.uk/cubicshadows.html

1) I suspect the demos you've seen that use the A8R8G8B8 format were written when that was the simplest format to target for writing out depth values, i.e. when there wasn't much support for depth or floating-point shadow maps.

2) D3D requires a bound color render buffer. You can use the same one for all your shadow maps as long as the dimensions (and bit depth?) match. Set COLORWRITEENABLE to false (and/or set the pixel shader to NULL).

4) I only know of NVidia hardware doing automatic depth comparisons with D24*8 textures (although ATI may do something similar now, I dunno :). If you use the R32F format or similar, I don't think you'll get the automatic comparison, and you'll have to do the test yourself.
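The manual test is just something along these lines (a rough HLSL sketch - ShadowMap, DepthBias and the projection convention are whatever your own setup uses):

sampler2D ShadowMap;                    // the R32F map written in the depth pass
static const float DepthBias = 0.001f;  // tune to your scene

float ShadowTerm(float4 lightSpacePos)  // pixel position in the light's clip space
{
    // Project into [0,1] texture space (standard D3D convention assumed).
    float2 uv = lightSpacePos.xy / lightSpacePos.w * float2(0.5f, -0.5f) + 0.5f;
    float  pixelDepth  = lightSpacePos.z / lightSpacePos.w;
    float  storedDepth = tex2D(ShadowMap, uv).r;

    // This is the comparison the depth-texture hardware path would do for you.
    return (pixelDepth - DepthBias <= storedDepth) ? 1.0f : 0.0f;
}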

Have a gander on the NVidia website for the GPU Programming Guide. It's got a good section on shadow mapping on NV cards and which formats you should use to get their optimisations (auto comparisons, hardware PCF, etc.). I'm sure there'll be similar material at ATI.com.

T
No HW supports native cube-map shadow maps, so you are right that you need to store depth yourself.

The easiest way to kill the fragment conditionally that works on all hw is to set alpha to zero if the object is in shadow, otherwise set the alpha to some other value.

Then use
D3DRS_ALPHATESTENABLE = true
D3DRS_ALPHAREF = 0
D3DRS_ALPHAFUNC = D3DCMP_GREATER

Alternately, you can render the pass-fail shadow test into dest alpha, and then light later, blending with this value. This method is somewhat slower, but allows you to factor out your shadow shaders from your light shaders.
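In render-state terms that's just something like this (a sketch, assuming an IDirect3DDevice9* called device; the lighting pixel shader writes alpha = 0 for shadowed pixels and something non-zero otherwise):

#include <d3d9.h>

// Kill shadowed pixels via alpha test: only pixels with alpha > 0 survive.
void EnableShadowAlphaKill(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
    device->SetRenderState(D3DRS_ALPHAREF, 0x00);
    device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATER);
}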
Quote:Original post by SimmerD
The easiest way to kill the fragment conditionally that works on all hw is to set alpha to zero if the object is in shadow, otherwise set the alpha to some other value.

Then use
D3DRS_ALPHATESTENABLE = true
D3DRS_ALPHAREF = 0
D3DRS_ALPHAFUNC = D3DCMP_GREATER

Alternately, you can render the pass-fail shadow test into dest alpha, and then light later, blending with this value. This method is somewhat slower, but allows you to factor out your shadow shaders from your light shaders.


Umm...wouldn't that result in ugly single-sample shadow maps, i.e. you can't do PCF with that method?


Regardless, you do have one option for using native HW support (it sounds slow, but as far as I can tell Carmack is using this method for idNext because he's using 2048x2048 maps and the memory usage goes through the roof if you store 6 of those):

- Render a normal shadow map with a 90-degree FOV
- In the main scene, stencil out the area that the shadow map will affect (probably using a method similar to shadow volumes, with a 90-degree frustum from the light)
- Render that part of the light
- Repeat for the other five faces

Personally, I don't like that, as it requires 12 SetRenderTarget calls PER LIGHT, which are very slow.

What I do right now is render a full 512x512x6 cubemap to an R32F target, like you do, and then in the pixel shader do multiple texCUBE lookups into the shadow cubemap (working in HLSL). The sampler is obvious, and the 3D lookup vector is just the light-to-point vector. It works well and allows for PCF, and the code can compile to SM2.0 (up to 4 samples; above that I need 2.x - the code is still pretty inefficient, too). The result is that I can do 7 SetRenderTargets, and in fact, after I start using a shadow map atlas (SimmerD's idea, you can check his posts for it) I can reduce that to 2 SRT calls...for EIGHT lights.
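The lookup is roughly along these lines (a trimmed sketch rather than my exact code - ShadowCube, DepthBias and the offset size are whatever fits your scene, and the shadow pass is assumed to have stored the light-to-pixel distance in the red channel):

samplerCUBE ShadowCube;                 // R32F cube map holding light-to-pixel distance
static const float DepthBias = 0.05f;   // tune to your scene scale

float ShadowSample(float3 dir, float pixelDist)
{
    float storedDist = texCUBE(ShadowCube, dir).r;
    return (pixelDist - DepthBias <= storedDist) ? 1.0f : 0.0f;
}

float ShadowTermPCF(float3 lightToPixel)   // worldPos - lightPos
{
    float pixelDist = length(lightToPixel);
    const float o = 0.01f;   // offset size in cube-lookup space

    // Four slightly offset lookups averaged together - the PCF part.
    float shadow = 0.0f;
    shadow += ShadowSample(lightToPixel + float3( o,  o, 0.0f), pixelDist);
    shadow += ShadowSample(lightToPixel + float3(-o,  o, 0.0f), pixelDist);
    shadow += ShadowSample(lightToPixel + float3( o, -o, 0.0f), pixelDist);
    shadow += ShadowSample(lightToPixel + float3(-o, -o, 0.0f), pixelDist);
    return shadow * 0.25f;   // 0 = fully shadowed, 1 = fully lit
}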
Re PCF: yes, that's true.

That's another reason to prefer rendering the shadow to dest alpha, or using alpha blending in addition to alpha test.
PCF?
Percentage-closer filtering. You offset your shadow lookup vector slightly, sampling the shadow map multiple times. Each sample that passes the depth test adds one to your shadow term, and at the end you divide by the number of samples you took. The end result is soft, antialiased shadows, which greatly improves shadow quality.
thank you.

