How to read depth buffer value at a pixel to CPU and get its world coordinate

Started by
7 comments, last by MJP 10 years, 5 months ago
Hi, everybody. I want to write a simple function using SharpDX that gets the mouse's world position while the mouse is moving and displays the coordinate on screen. So I first need to get the world position corresponding to the mouse. When I developed with VC++ and OpenGL, I could do it like this:
  glReadPixels((int)winX, (int)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
  gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);
But with SharpDX, I don't know how to read pixel (x, y)'s depth value from the depth buffer.
Any help is appreciated, thank you!

You're probably better off doing it some other way than reading back memory from the card. Aside from totally botching your frame rate, the z-buffer values are not directly usable; AFAIK the z value is not stored linearly.


For this kind of work I'm using a compute shader to read pixel data from textures (render targets).


Projects a 3D vector from screen space into object space.


XMVECTOR XMVector3Unproject(
  [in]  XMVECTOR V,
  [in]  float ViewportX,
  [in]  float ViewportY,
  [in]  float ViewportWidth,
  [in]  float ViewportHeight,
  [in]  float ViewportMinZ,
  [in]  float ViewportMaxZ,
  [in]  XMMATRIX Projection,
  [in]  XMMATRIX View,
  [in]  XMMATRIX World
);

Is this all you need?
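For illustration, the transform XMVector3Unproject applies can be sketched by hand in plain C++ (no DirectXMath). This is a minimal sketch, not the real DirectXMath code: World and View are assumed identity, the projection is a standard left-handed D3D perspective mapping depth to [0, 1], and all type and function names here are made up for the example.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Parameters of a standard D3D left-handed perspective projection.
struct Persp {
    float xScale, yScale;  // cot(fovY/2)/aspect, cot(fovY/2)
    float a, b;            // a = zf/(zf-zn), b = -zn*zf/(zf-zn)
};

Persp makePersp(float fovY, float aspect, float zn, float zf) {
    float ys = 1.0f / std::tan(fovY * 0.5f);
    return { ys / aspect, ys, zf / (zf - zn), -zn * zf / (zf - zn) };
}

// Forward path: view-space point -> viewport coordinates plus depth in [0,1].
Vec3 project(Vec3 v, const Persp& p, float vpW, float vpH) {
    float ndcX = p.xScale * v.x / v.z;      // perspective divide by view z
    float ndcY = p.yScale * v.y / v.z;
    float ndcZ = p.a + p.b / v.z;           // nonlinear stored depth
    return { (ndcX + 1.0f) * 0.5f * vpW,    // viewport transform
             (1.0f - ndcY) * 0.5f * vpH,    // y is flipped in D3D viewports
             ndcZ };
}

// Inverse path (what XMVector3Unproject does, with identity World/View):
// screen x, y plus the sampled depth value -> view-space point.
Vec3 unproject(Vec3 s, const Persp& p, float vpW, float vpH) {
    float ndcX = s.x / vpW * 2.0f - 1.0f;
    float ndcY = 1.0f - s.y / vpH * 2.0f;
    float viewZ = p.b / (s.z - p.a);        // invert the depth mapping
    return { ndcX * viewZ / p.xScale, ndcY * viewZ / p.yScale, viewZ };
}
```

Round-tripping a point through project and then unproject should return the original view-space position, which is exactly the property the original poster needs: screen x, y plus the depth-buffer sample recovers the 3D position.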

Like Endurian mentioned, doing this will kill CPU/GPU parallelism so you probably don't want to do it unless you really don't care about performance. You can mostly avoid performance issues if you wait at least a frame or two before reading back the results.

To read back data from a texture, you need to create a "duplicate" texture with the same size and format that uses STAGING usage, and has CPU read access. Then you can use CopyResource to copy the contents from the depth buffer to your staging texture, and then call Map to get a pointer to the raw texel data. You can then pass the depth value to XMVector3Unproject like oler117 suggested to calculate the world space position. Just make sure you're using a FLOAT format for your depth buffer, otherwise you will have to manually convert from 24-bit integer to floating point.
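As a sketch of the manual conversion step: a D24_UNORM_S8_UINT texel is a 32-bit value with depth in the low 24 bits and stencil in the high 8, and the rows of a mapped staging texture are spaced RowPitch bytes apart (which may be larger than width * 4). Assuming that layout, extracting one depth value from the pointer Map returns looks like this (the helper name is made up for the example):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// Read one D24_UNORM_S8_UINT depth sample out of the raw memory returned by
// Map on a staging texture. Assumes depth in bits 0-23, stencil in bits 24-31.
float depthAt(const uint8_t* mappedData, uint32_t rowPitch,
              uint32_t x, uint32_t y) {
    const uint8_t* row = mappedData + static_cast<size_t>(y) * rowPitch;
    uint32_t texel;
    std::memcpy(&texel, row + static_cast<size_t>(x) * 4, sizeof(texel));
    uint32_t depth24 = texel & 0x00FFFFFFu;  // drop the stencil byte
    return depth24 / 16777215.0f;            // UNORM: divide by 2^24 - 1
}
```

With a FLOAT depth format you would instead read the texel directly as a float and skip the masking and division.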


AFAIK the z value is not stored linearly.

It's not, but reverse-projecting with the projection matrix will account for that.
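To make the nonlinearity concrete, here is a small sketch (assuming a standard D3D left-handed perspective projection mapping view-space z in [zn, zf] to stored depth in [0, 1]; the function names are made up for the example). The stored value can be inverted in closed form, which is the one-dimensional core of what the full unprojection does:

```cpp
#include <cassert>
#include <cmath>

// Stored (nonlinear) depth for a standard D3D perspective projection:
// d = zf/(zf-zn) - zn*zf/((zf-zn)*z), with view-space z in [zn, zf].
float storedDepth(float z, float zn, float zf) {
    return zf / (zf - zn) - (zn * zf) / ((zf - zn) * z);
}

// Closed-form inverse: recover view-space z from the sampled depth value.
float linearizeDepth(float d, float zn, float zf) {
    return (zn * zf) / (zf - d * (zf - zn));
}
```

For example, with zn = 1 and zf = 101, a stored depth of 0.5 maps back to z of roughly 1.98: half of the depth-buffer range is spent on about the first one percent of the view distance, which is why the raw value cannot be treated as a linear distance.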

A possible enhancement of the method detailed by MJP would be to create a staging texture the size of one pixel (or of the area you want to sample) and simply use DeviceContext.CopySubresourceRegion. Also, to avoid stalls when reading pixels back to the CPU, you should consider using a pool of staging textures for the previous frames (at least 2 frames, but depending on your FPS it might need to go higher).

A possible enhancement of the method detailed by MJP would be to create a staging texture of the size of one pixel (or the area you want to sample) and simply use DeviceContext.CopySubresourceRegion.

From MSDN:

Note If you use CopySubresourceRegion with a depth-stencil buffer or a multisampled resource, you must copy the whole subresource. In this situation, you must pass 0 to the DstX, DstY, and DstZ parameters and NULL to the pSrcBox parameter.


Good catch! Though it should still be possible: ;)

  • Create a texture with the same size as the depth buffer, not bindable to any stage (but not staging either), with format R24G8_Typeless (assuming the depth buffer is, for example, D24_UNorm_S8_UInt)
  • Create a 1x1 staging texture, again with format R24G8_Typeless.
  • Copy the whole depth buffer to the first texture with DeviceContext.CopyResource()
  • Copy from the first texture to the staging texture over a 1x1 region with DeviceContext.CopySubresourceRegion()

I tested this scenario and it seems to work with a D3D11 device.

You can also just create a 1x1 render target texture, and use a simple pixel shader or compute shader that samples the appropriate texel from your depth buffer and writes the depth value to the 1x1 render target. If you do it this way you can have the GPU automatically handle the conversion from D24->F32, which is an added bonus.

