Precision in Shadow Mapping in DX10

I'm currently trying to render shadows with the most basic shadow mapping technique. I've encountered several problems. Can anyone post any thoughts on these? Thanks.

1: If I use position.z/position.w to calculate the depth value for each pixel and store it in a gray-scale texture, the depth precision is limited to 8 bits.

2: If I try to create a shader resource view for the 24-bit depth map that DirectX generated automatically, DirectX says the DXGI_FORMAT_D32_FLOAT format can't be bound to a shader input.

To work around the first problem, I've attempted to use all four channels (RGBA) to store the depth value. But when it comes to encoding a 32-bit float into four 8-bit color values, G, B, and A all come out as zero with the following algorithm (debugged using PIX):

PS_OUTPUT Output;
float fDepth;
fDepth = In.Depth;                    // In.Depth is calculated as z/w
Output.RGBColor.r = fDepth;
fDepth -= Output.RGBColor.r;          // isolate the precision lost when r is stored in an 8-bit channel,
                                      // e.g. fDepth = 0.123456, r stores 0.123, so fDepth becomes 0.000456
Output.RGBColor.g = fDepth * 256.0f;  // amplify the remainder and store it in g
fDepth = fDepth * 256.0f - Output.RGBColor.g;
Output.RGBColor.b = fDepth * 256.0f;  // amplify the next remainder and store it in b
fDepth = fDepth * 256.0f - Output.RGBColor.b;
Output.RGBColor.a = fDepth * 256.0f;  // last remainder goes in a
Is there something wrong with the algorithm?
Also, is there a way to bind the depth-stencil buffer as a shader resource?
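For reference, the split the shader is aiming for can be modeled on the CPU by quantizing the depth (assumed to lie in [0, 1)) to 32-bit fixed point up front and slicing it into four bytes. This is a sketch under that assumption; the helper names pack_depth and unpack_depth are mine, not from the post. It also highlights a likely cause of the zeros: inside the pixel shader, reading Output.RGBColor.r back returns the full-precision register value (the 8-bit rounding only happens when the render target is actually written), so fDepth -= Output.RGBColor.r leaves exactly zero, and every later channel is zero too.

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical CPU-side model of the intended encoding: quantize depth
   (assumed in [0, 1)) to 32-bit fixed point, then slice it into four
   8-bit channels, most significant byte first. */
static void pack_depth(double depth, uint8_t rgba[4])
{
    uint32_t q = (uint32_t)(depth * 4294967296.0); /* depth * 2^32 */
    rgba[0] = (uint8_t)(q >> 24); /* r: coarse depth    */
    rgba[1] = (uint8_t)(q >> 16); /* g: first residual  */
    rgba[2] = (uint8_t)(q >> 8);  /* b: second residual */
    rgba[3] = (uint8_t)(q);       /* a: third residual  */
}

/* Reassemble: r/256 + g/256^2 + b/256^3 + a/256^4. */
static double unpack_depth(const uint8_t rgba[4])
{
    return rgba[0] / 256.0
         + rgba[1] / 65536.0
         + rgba[2] / 16777216.0
         + rgba[3] / 4294967296.0;
}

int main(void)
{
    uint8_t rgba[4];
    double depth = 0.123456; /* the example value from the shader comment */
    pack_depth(depth, rgba);
    double back = unpack_depth(rgba);
    /* The round trip is exact to within one 32-bit quantization step. */
    assert(fabs(back - depth) < 1.0 / 4294967296.0 + 1e-12);
    printf("%.9f -> %.9f\n", depth, back);
    return 0;
}
```

In a shader you would get the same effect by explicitly quantizing each channel (e.g. with frac and floor) rather than relying on the render-target write to do it for you.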
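On the second question, the usual D3D10 approach is to create the depth texture yourself with a TYPELESS format, so the same resource can back both a depth-stencil view and a shader resource view with concrete formats. A hedged sketch, assuming an existing ID3D10Device* pDevice and a 1024x1024 shadow map; error checking omitted:

```cpp
// Create the depth texture with a typeless format, bindable both ways.
D3D10_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = 1024;                     // shadow-map size (assumed)
texDesc.Height           = 1024;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R32_TYPELESS; // typeless, not D32_FLOAT
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D10_USAGE_DEFAULT;
texDesc.BindFlags        = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;

ID3D10Texture2D* pDepthTex = NULL;
pDevice->CreateTexture2D(&texDesc, NULL, &pDepthTex);

// The depth-stencil view interprets the data as D32_FLOAT...
D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
ID3D10DepthStencilView* pDSV = NULL;
pDevice->CreateDepthStencilView(pDepthTex, &dsvDesc, &pDSV);

// ...while the shader resource view interprets it as R32_FLOAT.
D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D10ShaderResourceView* pSRV = NULL;
pDevice->CreateShaderResourceView(pDepthTex, &srvDesc, &pSRV);
```

For a 24-bit depth-stencil buffer the same pattern applies with DXGI_FORMAT_R24G8_TYPELESS for the texture, DXGI_FORMAT_D24_UNORM_S8_UINT for the DSV, and DXGI_FORMAT_R24_UNORM_X8_TYPELESS for the SRV. Note that a depth buffer created implicitly (e.g. by a framework) with a fully typed depth format cannot be re-viewed this way; you have to create the resource typeless yourself.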