ElFrogo

Bypassing auto hardware shadow maps


I'm trying to let my texture alpha values punch 'holes' in my shadow map (i.e., so that tree textures can be shadow mapped). It seems that NVIDIA hardware (and newer ATI parts) have hardware shadow mapping built into the shading language, so that when a tex2Dproj is performed, the shadow-map comparisons are computed for you automatically. I've tried stepping around it (i.e., enabling alpha testing before rendering the objects), but for some reason I can't seem to alter the shadow map image.

Has anyone else run into this problem, or can anyone at least offer some advice?

Thanks,
FROG!

This has nothing to do with tex2Dproj, nothing. It's all about how you generate the shadow map. Enabling alpha test when rendering to the shadow map should do the trick.

Sunray, simply enabling alpha testing won't work if you are using floating-point textures.

ElFrogo, what you'll need to do is use an ARGB texture, store the floating-point depth value in the RGB channels, and put the texture's alpha in the alpha channel.

Take a look at this thread here where I explain how to pack a float into 24bits:

http://www.gamedev.net/community/forums/topic.asp?topic_id=322318
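The linked thread has the actual implementation; purely for illustration, one common way to write such a helper (the name PackDepth24 matches the shader below, but treat this as a sketch rather than the thread's exact code) is:

    // Pack a depth value in [0, 1) into three 8-bit channels.
    float3 PackDepth24(float depth)
    {
        float3 packed = frac(depth * float3(1.0, 256.0, 65536.0));
        // Subtract the bits that already carried into the next channel
        packed -= packed.yzz * float3(1.0 / 256.0, 1.0 / 256.0, 0.0);
        return packed;
    }

    // Recover the depth when sampling the shadow map.
    float UnpackDepth24(float3 packed)
    {
        return dot(packed, float3(1.0, 1.0 / 256.0, 1.0 / 65536.0));
    }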

Your shadow mapping pixel shader then becomes:


PS_OUTPUT ShadowMapPS(VS_OUTPUT vs)
{
    PS_OUTPUT o;

    // Sample the texture
    float4 textureSample = tex2D(TextureSampler, vs.textureCoords);

    // Pack the depth value into 24 bits (RGB) and place the texture alpha into (A)
    o.colour = float4(PackDepth24(vs.depth), textureSample.a);

    return o;
}




Then enable alpha testing and you should be good to go. This will work with any ps 2.0 card, not just cards with hardware shadow maps.
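If you're using the D3D9 effect framework, the alpha-test states can be set right in the technique. A rough sketch (ShadowMapVS is a hypothetical vertex shader name; the states themselves are standard D3D9 effect states):

    technique ShadowMap
    {
        pass P0
        {
            VertexShader = compile vs_2_0 ShadowMapVS();
            PixelShader  = compile ps_2_0 ShadowMapPS();

            // Reject shadow-map texels whose alpha is below ~0.2
            AlphaTestEnable = true;
            AlphaRef        = 51;           // reference value is 0-255; 51 ~= 0.2 * 255
            AlphaFunc       = GreaterEqual; // keep pixels with alpha >= AlphaRef
        }
    }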

Hey buttza, thanks. But won't that limit my precision to 24 bits?


Also, won't a simple clip(x) work? Or is texkill still too performance-heavy on SM 3.0 cards?

EDIT: Let me rephrase that: when rendering an object into my R32 texture, I called clip(-1) on objects with an alpha value < 0.2f; however, all the pixels were drawn. I modified the algorithm to call clip(-1) on ALL pixels drawn, and still they all came through in the shadow map. I'm starting to think my understanding of how exactly the pixel data gets into the 'shadow map' is wrong. If we're doing a texkill on every pixel, shouldn't nothing be rendered!?

FROG!

[Edited by - ElFrogo on June 1, 2005 9:59:14 AM]

ElFrogo, yes, it will limit you to 24 bits, but in my experience that's enough. 16-bit shadow maps should be enough in small areas or for small lights.

Thanks for the suggestion on using clip; I'd never thought of using that. I tried it, and it is certainly fast enough, with no apparent difference between my method and using clip. I'll use clip in my next game, though.

I simply used:

clip(-0.4f + textureSample.a);

And it worked fine; I'm not sure why you're having problems. I'm using SM 2.0, by the way.
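Put together with the shader from earlier in the thread, the clip-based version might look like this (a sketch of the same ShadowMapPS, with clip replacing the alpha test):

    PS_OUTPUT ShadowMapPS(VS_OUTPUT vs)
    {
        PS_OUTPUT o;

        float4 textureSample = tex2D(TextureSampler, vs.textureCoords);

        // clip discards the pixel when its argument is negative,
        // i.e. when textureSample.a < 0.4, leaving a hole in the shadow map.
        clip(textureSample.a - 0.4f);

        o.colour = float4(PackDepth24(vs.depth), textureSample.a);

        return o;
    }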

Hey buttza, thanks for the help. Turns out it was a shader pipeline issue.

Clip works very well, I agree.

Thanks for the input

FROG!
