Shadow mapping approaches and graphics card independence

Hello, I've been tinkering with shadow mapping in D3D9. There seem to be several approaches to the technique, for example:

1. Create a depth stencil texture; the hardware interprets this texture format when the texture is projected, so it does the shadow comparison for you.
2. Create a float texture and use a pixel shader to handle it.
3. Create a regular texture and a separate depth stencil surface.

Approach 1, as far as I can tell, does not work on Radeon cards. Whenever I try to create the texture, the debug runtime tells me that this usage (D3DUSAGE_DEPTHSTENCIL) is not supported for textures, yet many of the shadow mapping examples I've found use this technique.

I'm attempting approach 3, but I'm not sure whether it's possible without a pixel shader. I don't particularly mind using one, but I was hoping to stick to the fixed-function pipeline for simplicity, so I favour approaches that can use it.

Anyway, I can create my texture, get its surface for setting the render target, and create a depth stencil buffer to use when rendering from the light's point of view. The texture projects correctly onto the scene when it is drawn from the camera's point of view, but I cannot figure out how to get D3D to use the depth stencil buffer I've created (my rough setup code is below). Does anyone have any ideas, or am I forced to use a pixel shader if I want shadow mapping to use the same technique on GeForce and Radeon cards?

Thanks for your time,
- Pete
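For reference, the light-pass setup I have at the moment looks roughly like this (a minimal sketch: device is my IDirect3DDevice9 pointer, the 512x512 size and the A8R8G8B8 / D24S8 formats are just what I picked, and the SetDepthStencilSurface call near the end is my current guess at how the separate depth buffer is supposed to be bound):

    // Colour target the light pass renders into.
    IDirect3DTexture9* shadowTex = NULL;
    device->CreateTexture(512, 512, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &shadowTex, NULL);

    // Matching depth stencil surface, only used while rendering from the light.
    IDirect3DSurface9* shadowDS = NULL;
    device->CreateDepthStencilSurface(512, 512, D3DFMT_D24S8,
                                      D3DMULTISAMPLE_NONE, 0, TRUE, &shadowDS, NULL);

    // Remember the normal targets so they can be restored for the camera pass.
    IDirect3DSurface9* oldRT = NULL;
    IDirect3DSurface9* oldDS = NULL;
    device->GetRenderTarget(0, &oldRT);
    device->GetDepthStencilSurface(&oldDS);

    // Bind both for the light pass.
    IDirect3DSurface9* shadowRT = NULL;
    shadowTex->GetSurfaceLevel(0, &shadowRT);
    device->SetRenderTarget(0, shadowRT);
    device->SetDepthStencilSurface(shadowDS);
    device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFFFFFFFF, 1.0f, 0);

    // ... draw the scene from the light's point of view ...

    // Restore the original targets afterwards.
    device->SetRenderTarget(0, oldRT);
    device->SetDepthStencilSurface(oldDS);
    oldRT->Release();
    oldDS->Release();
    shadowRT->Release();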

Why don't you use a surface instead of a texture, as in the Radeon SDK shadow map sample?
In that sample a vertex shader writes the z data from the light's point of view into a render target (surface); then you grab the z buffer with GetDepthStencilSurface() and compare the z data. I think this means you need a vertex shader to output the z values and a pixel shader to compare the z values between the two buffers for each pixel.
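Something like this for the light-pass vertex shader (just a rough vs_1_1 sketch, not the actual ATI code; putting the light's matrix in c0-c3 and the depth in the diffuse colour is only one way to lay it out):

    #include <d3dx9.h>

    // Transform by the light's world*view*projection and write the
    // normalised depth (z/w) into the diffuse colour.
    static const char g_lightPassVS[] =
        "vs_1_1\n"
        "dcl_position v0\n"
        "m4x4 r0, v0, c0\n"    // c0-c3 = light WVP matrix, set by the app
        "mov oPos, r0\n"
        "rcp r1.w, r0.w\n"
        "mul r2, r0.z, r1.w\n" // z/w, replicated into every component
        "mov oD0, r2\n";       // depth ends up in all colour channels

    IDirect3DVertexShader9* CreateLightPassVS(IDirect3DDevice9* device)
    {
        LPD3DXBUFFER code = NULL;
        IDirect3DVertexShader9* shader = NULL;
        if (SUCCEEDED(D3DXAssembleShader(g_lightPassVS, sizeof(g_lightPassVS) - 1,
                                         NULL, NULL, 0, &code, NULL)))
        {
            device->CreateVertexShader((const DWORD*)code->GetBufferPointer(), &shader);
            code->Release();
        }
        return shader;
    }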

Thanks for your reply.

You're right, I think I have to use a pixel shader when using a separate depth stencil buffer.

I've not had time to try this yet (or to try it on my GF3), but hopefully I will tonight or tomorrow.

The ATI sample doesn't work with my GF4 at work (latest drivers and DirectX SDK), but of course it does work at home on my Radeon. I'd really like to use the same method for Radeon and GeForce cards, but maybe that's not possible. As far as I know I'm limited to Pixel Shader 1.1 if I want it to work on my GF3.
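To pick a path per card I'll probably just look at the caps at startup, something like this (a rough sketch; d3d and device are my IDirect3D9 / IDirect3DDevice9 pointers, and the CheckDeviceFormat call is how I understand depth-texture support for approach 1 is meant to be queried):

    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    bool hasVS11 = caps.VertexShaderVersion >= D3DVS_VERSION(1, 1);
    bool hasPS11 = caps.PixelShaderVersion  >= D3DPS_VERSION(1, 1);

    // Can a D24S8 texture be created with D3DUSAGE_DEPTHSTENCIL (approach 1)?
    // X8R8G8B8 is assumed to be the display format here.
    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8, D3DUSAGE_DEPTHSTENCIL,
                                        D3DRTYPE_TEXTURE, D3DFMT_D24S8);
    bool hasDepthTextures = SUCCEEDED(hr);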

Thanks again.

- Pete


In case anyone is following this thread out of interest, this is the approach I'm thinking of taking at the moment:

From the light's point of view:

A separate texture (render target) and depth surface for rendering from the light's point of view.

Vertex Shader 1.0 for packing the depth into the vertex colours. I may end up packing it into a texture coordinate instead, I'm not sure.

From the camera's point of view:

Pixel Shader 1.1 that uses the light texture's colour as depth (roughly as sketched at the end of this post).

I've not gotten very far yet, just writing the first vertex shader. If anyone could forewarn me that I'm going down a dead end, that would be great!
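The camera-pass comparison I have in mind is roughly the following ps_1_1 (only a sketch: t0 is the projected shadow map whose colour holds the light-pass depth, v0 is the pixel's light-space depth, which the camera-pass vertex shader would also have to write into the diffuse colour, and c1/c2 are shadow and lit factors set from the app). With only 8 bits of depth in the colour I expect some banding and acne, so I may have to split the depth across channels later:

    // Compare the depth stored in the shadow map's colour with this pixel's
    // light-space depth, then pick a shadowed or lit factor.
    static const char g_shadowComparePS[] =
        "ps_1_1\n"
        "def c0, 0.5, 0.5, 0.5, 0.5\n"
        "tex t0\n"                // projected shadow-map sample
        "sub r0, v0, t0\n"        // > 0 where the pixel is farther from the light
        "add r0, r0, c0\n"        // shift so 'in shadow' ends up above cnd's 0.5 threshold
        "cnd r0, r0.a, c1, c2\n"; // c1 = shadow factor, c2 = lit factor

    // Assembled with D3DXAssembleShader and created with
    // IDirect3DDevice9::CreatePixelShader, same as for the vertex shader.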

- Pete

