Questions about RenderToTexture with D3DFMT_R32F

Started by diegor82
2 comments, last by neneboricua19 18 years, 10 months ago
Hi all, I am new to DirectX. I need to store the depth buffer in a texture, and I have some questions about that. With OpenGL I would use the GL_SGIX_depth_texture extension. With DirectX, the fastest way I can see is to set a D3DFMT_R32F texture as the render target and draw the depth as a monochromatic color. The alternative is to lock the surface... but that is expensive, I guess.

1. Is this the best way?

When I draw onto the D3DFMT_R32F texture, I use a pixel shader:

```hlsl
float4 PixelShader(...) : COLOR
{
    float4 result;
    result.x   = 1.5; // this is the only channel that goes to the render target
    result.yzw = 0.0; // not important
    return result;
}
```

2. Is it right that only the x component will be written to the render target?

Suppose that the alpha test is enabled; what should my pixel shader look like to take advantage of it?

```hlsl
float4 PixelShader(...) : COLOR
{
    float4 result;
    result.x = 1.5; // this is the only channel that goes to the render target
    result.a = 0.2; // alpha value
    return result;
}
```

(Suppose the alpha test kills all fragments with alpha < 0.5.) I have noticed that this pixel shader ignores the alpha test, so the pixel is written anyway.

3. Is it right that when:
- the render target is D3DFMT_R32F,
- the alpha test is enabled,
- and the pixel's alpha value is assigned by an HLSL pixel shader,

the alpha test is ignored?

Thank you,
Diego
You are correct: if you need to store the depth buffer in a texture, the best way to do it is to use a D3DFMT_R32F texture. As the format indicates, the data is stored in the red channel, so when you output the result from the pixel shader, only the red/x component will be kept.
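As a sketch of the technique being discussed (not code from this thread): one common way to render depth into an R32F target is to pass eye-space depth from the vertex shader and write it to the red channel. The names below (`DepthVS`, `DepthPS`, `matWorldView`, `matWorldViewProj`) are illustrative, not from the original post.

```hlsl
float4x4 matWorldView;      // world * view transform (assumed set by the app)
float4x4 matWorldViewProj;  // world * view * projection transform

struct VS_OUT
{
    float4 pos   : POSITION;
    float  depth : TEXCOORD0;
};

VS_OUT DepthVS(float4 pos : POSITION)
{
    VS_OUT o;
    o.pos   = mul(pos, matWorldViewProj);
    o.depth = mul(pos, matWorldView).z; // eye-space depth
    return o;
}

float4 DepthPS(VS_OUT i) : COLOR
{
    // Only the red/x channel survives in a D3DFMT_R32F render target.
    return float4(i.depth, 0.0f, 0.0f, 1.0f);
}
```

You could also write post-projection z/w instead of eye-space z; the choice only affects how you reconstruct depth when sampling the texture later.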

As far as alpha blending goes, it depends on the particular video card you have. Not all video cards support alpha blending on floating-point render targets; in fact, I believe the only cards that do at this time are the NVIDIA GeForce 6xxx series. You'll need to check your card's caps to see whether alpha blending on floating-point render targets is supported.
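That caps check can be sketched as follows (a minimal example, not from the thread; it assumes an already-created `IDirect3D9*`, the default adapter, and `D3DFMT_X8R8G8B8` as the display format, and requires the DirectX 9 SDK headers to build):

```cpp
#include <d3d9.h>

// Returns true if the device supports post-pixel-shader blending
// (i.e. alpha blending/testing after the shader runs) on an R32F
// render-target texture.
bool SupportsBlendingOnR32F(IDirect3D9* pD3D)
{
    HRESULT hr = pD3D->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,                          // display format (assumed)
        D3DUSAGE_RENDERTARGET |
        D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,  // blending after the shader
        D3DRTYPE_TEXTURE,
        D3DFMT_R32F);
    return SUCCEEDED(hr);
}
```

If this query fails, alpha blending and alpha testing on the R32F target are not guaranteed to work, which would explain the behavior Diego is seeing.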

neneboricua
Thank you,
So it would seem to be a hardware problem, but then why does it work in OpenGL?

Diego
Quote:Original post by diegor82
Thank you,
So it would seem to be a hardware problem, but then why does it work in OpenGL?

Diego

It might not necessarily be a hardware problem. It could be that the hardware physically supports it to a certain degree, but not in the particular way that the DirectX standard wants it to. That may be the reason why the functionality isn't exposed via DirectX.

In OpenGL, all of this is done through vendor-specific extensions, which don't have to adhere to any standard at all. Each vendor is free to make its extension behave in any way it wants. Extensions from different vendors that conceptually do the same thing won't necessarily go about it the same way and may produce slightly different results.

neneboricua

