XMMatrixPerspectiveFovLH: does z in clip space go from [0, 1] or [-1, 1]?

hi all,

very simple question, because I need to know whether I have to rescale the z range to [0, 1] or not, and I couldn't find any explanation in the XNAMath docs.

If it follows the old DirectX convention, z should be in [0, 1] in clip space (the OpenGL convention is different: there, clip-space z is in [-1, 1]).

The function I'm using to project my scene is XMMatrixPerspectiveFovLH.

thanks in advance ;)
I haven't used that function directly, but I'm 99.9% sure it maps z to [0, 1]. I have never heard of any of the D3D functions using a [-1, 1] range.

In any case, it should be fairly easy to try out: pass the position.z value to your pixel shader and color the output based on whether the z value is positive or negative. Then you will know for sure what you are dealing with.
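Something like this minimal HLSL sketch would do it (assuming your vertex shader passes the clip-space z through in the hypothetical CLIPZ semantic):

// Debug pixel shader: green for negative incoming z, red for positive.
// With a D3D-style [0, 1] projection you should only ever see red;
// with a GL-style [-1, 1] projection the near half of the view volume
// would come out green.
float4 DebugZSignPS(float4 pos : SV_Position, float clipZ : CLIPZ) : SV_Target
{
    if (clipZ < 0.0f)
        return float4(0.0f, 1.0f, 0.0f, 1.0f); // negative -> green
    return float4(1.0f, 0.0f, 0.0f, 1.0f);     // positive -> red
}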
Yeah, the D3D viewport transform works with a [0, 1] range for Z in clip space, and the DirectXMath perspective functions are designed to work with that.
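For reference, the matrix that XMMatrixPerspectiveFovLH(fovY, aspect, zn, zf) builds (per the DirectXMath docs, row-vector convention) is

$$M = \begin{pmatrix} w & 0 & 0 & 0 \\ 0 & h & 0 & 0 \\ 0 & 0 & \frac{z_f}{z_f - z_n} & 1 \\ 0 & 0 & \frac{-z_n z_f}{z_f - z_n} & 0 \end{pmatrix}, \qquad h = \cot(\mathrm{fovY}/2), \quad w = h/\mathrm{aspect},$$

so for a view-space depth $z$, $z_{clip}/w_{clip} = \frac{z_f}{z_f - z_n} \cdot \frac{z - z_n}{z}$, which comes out to $0$ at $z = z_n$ and $1$ at $z = z_f$.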

thanks for the info.

One side-track question:

What are the conditions for outputting depth only?

I tried setting the pixel shader to NULL and the render targets to NULL (RT count of 0, with only the depth-stencil bound), but I can't see any depth in the PIX output. Only if I specify a render target along with its depth stencil am I able to see depth.

The depth-stencil format is D32_FLOAT and the render target's is R32_FLOAT; the underlying texture is R32_TYPELESS.
For now I output z/w from the pixel shader, but I'll remove the pixel shader if I can get the depth-only output to work.
Then I'll probably go for linear z...

I'd like to use NVIDIA Nsight, but I have an Intel integrated GPU, which is not compatible with Nsight. Just waiting to get a new NVIDIA card...
You shouldn't need to have a render target bound, you can just render to a depth buffer only. To get around the PIX issue with 32-bit depth buffers, you can try just using a full screen shader that samples the depth buffer and outputs it to the screen for visualization. Just make sure you rescale or linearize the depth in your pixel shader, otherwise it will look like everything is white.
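Something along these lines should work for the visualization pass (a rough sketch; DepthMap, NearClip, and FarClip are made-up names, and the math assumes a standard LH perspective projection):

Texture2D<float> DepthMap : register(t0); // R32_FLOAT SRV over the D32 depth buffer

cbuffer CameraConstants : register(b0)
{
    float NearClip; // projection near plane
    float FarClip;  // projection far plane
};

// Fullscreen pass: load the stored z/w and remap it to a visible gradient.
// Without the remap, perspective depth bunches up near 1.0 and the whole
// image looks white.
float4 VisualizeDepthPS(float4 pos : SV_Position) : SV_Target
{
    float zw = DepthMap[uint2(pos.xy)];

    // Invert the projection's depth mapping to recover view-space z...
    float viewZ = (NearClip * FarClip) / (FarClip - zw * (FarClip - NearClip));

    // ...then normalize to [0, 1] by the far plane for display.
    return float4((viewZ / FarClip).xxx, 1.0f);
}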

I've read that you suggested this trick to get linear depth on the fly:


float getLinearDepth(in float zw)
{
    // lightProjector is the projection matrix that was used when
    // rendering the depth; _33 and _43 are its third-column entries.
    return lightProjector._43 / (zw - lightProjector._33);
}


Is zw == z/w, or z*w?

I currently output z/w to the render target (which is also what would end up in the depth buffer)...


I've verified it to be z/w; in fact, it works perfectly.

thanks
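For completeness, a minimal sketch of how the pieces fit together (resource names like ShadowDepthMap and LightProj are made up for the example; the depth buffer stores z/w, which is what getLinearDepth expects):

Texture2D<float> ShadowDepthMap : register(t0); // R32_FLOAT view of the depth buffer
SamplerState PointSampler : register(s0);

cbuffer LightConstants : register(b0)
{
    float4x4 LightProj; // projection matrix used when rendering the depth
};

float getLinearDepth(in float zw)
{
    return LightProj._43 / (zw - LightProj._33);
}

float4 SampleLinearDepthPS(float4 pos : SV_Position,
                           float2 uv : TEXCOORD0) : SV_Target
{
    float zw = ShadowDepthMap.Sample(PointSampler, uv); // stored z/w
    float viewZ = getLinearDepth(zw);                   // view-space depth
    return float4(viewZ.xxx, 1.0f); // divide by the far plane if displaying
}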

