LaneLane

OpenGL How to read depth buffer value at a pixel to CPU and get its world coordinate



Hi everybody. I want to implement a simple feature using SharpDX: get the mouse's corresponding world position while the mouse is moving, and display the coordinates on the screen. So first I need to read the depth under the mouse. When I develop with VC++ and OpenGL, I can do it like this:

  glReadPixels((int)winX, (int)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
  gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

but with SharpDX I don't know how to read pixel (x, y)'s depth value from the depth buffer.
Any help is appreciated, thank you!


You're probably better off doing it some other way than reading back memory from the card. Aside from totally botching your frame rate, the z-buffer values are not directly usable: AFAIK the z value is not stored linearly.

Edited by Endurion


Projects a 3D vector from screen space into object space.

XMVECTOR XMVector3Unproject(
  [in]  XMVECTOR V,
  [in]  float ViewportX,
  [in]  float ViewportY,
  [in]  float ViewportWidth,
  [in]  float ViewportHeight,
  [in]  float ViewportMinZ,
  [in]  float ViewportMaxZ,
  [in]  XMMATRIX Projection,
  [in]  XMMATRIX View,
  [in]  XMMATRIX World
);

Is this all you need?


Like Endurion mentioned, doing this will kill CPU/GPU parallelism, so you probably don't want to do it unless you really don't care about performance. You can mostly avoid the performance issues if you wait at least a frame or two before reading back the results.

To read back data from a texture, you need to create a "duplicate" texture with the same size and format that uses STAGING usage, and has CPU read access. Then you can use CopyResource to copy the contents from the depth buffer to your staging texture, and then call Map to get a pointer to the raw texel data. You can then pass the depth value to XMVector3Unproject like oler117 suggested to calculate the world space position. Just make sure you're using a FLOAT format for your depth buffer, otherwise you will have to manually convert from 24-bit integer to floating point.
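In C++ terms (the SharpDX calls map one-to-one), a minimal sketch of that readback might look like this, assuming a D32_FLOAT depth texture and omitting all error handling:

```cpp
#include <d3d11.h>

// Read the depth value at (x, y). Note: Map stalls the CPU until the
// GPU has finished the copy, hence the advice to wait a frame or two.
float ReadDepthAt(ID3D11Device* device, ID3D11DeviceContext* context,
                  ID3D11Texture2D* depthTexture, UINT x, UINT y)
{
    // Describe a CPU-readable duplicate of the depth texture.
    D3D11_TEXTURE2D_DESC desc;
    depthTexture->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;

    ID3D11Texture2D* staging = nullptr;
    device->CreateTexture2D(&desc, nullptr, &staging);

    // Depth-stencil resources must be copied whole, so use CopyResource.
    context->CopyResource(staging, depthTexture);

    // Map exposes the raw texel data; rows are RowPitch bytes apart.
    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
    const float* row = reinterpret_cast<const float*>(
        static_cast<const BYTE*>(mapped.pData) + y * mapped.RowPitch);
    float depth = row[x];
    context->Unmap(staging, 0);
    staging->Release();
    return depth;
}
```

In practice you would create the staging texture once up front rather than every read.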

 


AFAIK the z value is not stored linearly.

 

It's not, but reverse-projecting with the projection matrix will account for that.
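The relationship can be written out explicitly. For a standard D3D perspective projection with near plane n and far plane f (assumed here), the stored depth d in [0, 1] and the view-space depth z are related by

```latex
d = \frac{f}{f-n} - \frac{n f}{(f-n)\, z}
\qquad\Longleftrightarrow\qquad
z = \frac{n f}{f - d\,(f-n)}
```

so small changes in d near the far plane correspond to large changes in z, which is why the stored value cannot be used as a linear distance directly; the inverse mapping on the right is exactly what the reverse projection performs.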

Edited by MJP


A possible enhancement of the method detailed by MJP would be to create a staging texture the size of one pixel (or of the area you want to sample) and simply use DeviceContext.CopySubresourceRegion. Also, in order to avoid stalls when reading pixels back to the CPU, you should consider using a pool of staging textures covering the previous frames (at least 2 frames, but depending on your FPS it might need to be more).
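A minimal sketch of such a pool, assuming C++/D3D11, a full-resolution copy via CopyResource, and a made-up latency of two frames:

```cpp
#include <d3d11.h>

const UINT kLatency = 2;                      // frames between copy and read
ID3D11Texture2D* stagingPool[kLatency] = {};  // created up front: STAGING
                                              // usage, CPU read access
UINT frameIndex = 0;

void ReadbackDepth(ID3D11DeviceContext* context, ID3D11Texture2D* depthTexture)
{
    ID3D11Texture2D* slot = stagingPool[frameIndex % kLatency];

    // This slot was last written kLatency frames ago; the GPU has long
    // since finished that copy, so Map should return without stalling.
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(context->Map(slot, 0, D3D11_MAP_READ,
                               D3D11_MAP_FLAG_DO_NOT_WAIT, &mapped)))
    {
        // ... read depth values for frame (frameIndex - kLatency) here ...
        context->Unmap(slot, 0);
    }

    // Queue this frame's copy into the same slot for reading later.
    context->CopyResource(slot, depthTexture);
    ++frameIndex;
}
```

The data you read back is kLatency frames old, which is usually fine for picking under the mouse cursor.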

Edited by xoofx


A possible enhancement of the method detailed by MJP would be to create a staging texture of the size of one pixel (or the area you want to sample) and simply use DeviceContext.CopySubresourceRegion.

 

From MSDN:

Note  If you use CopySubresourceRegion with a depth-stencil buffer or a multisampled resource, you must copy the whole subresource. In this situation, you must pass 0 to the DstX, DstY, and DstZ parameters and NULL to the pSrcBox parameter.


 

Good catch! Though it should still be possible: ;)

  • Create a texture with the same size as the depth buffer, not bindable to any stage (but not staging), with format R24G8_Typeless (assuming the depth buffer is, for example, D24_UNorm_S8_UInt)
  • Create a 1x1 staging texture with the same R24G8_Typeless format.
  • Copy the whole depth buffer to the first texture with DeviceContext.CopyResource()
  • Copy a 1x1 region from the first texture to the staging texture with DeviceContext.CopySubresourceRegion()

I tested this scenario and it seems to work with a D3D11 device.
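A sketch of those four steps in C++/D3D11 (the SharpDX calls map one-to-one); resource creation and error handling are trimmed, and the function and parameter names are made up for illustration:

```cpp
#include <d3d11.h>

// intermediate: full size, R24G8_TYPELESS, DEFAULT usage, no bind flags.
// staging1x1:   1x1, R24G8_TYPELESS, STAGING usage, CPU read access.
void CopyOnePixel(ID3D11DeviceContext* context,
                  ID3D11Texture2D* depthBuffer,   // D24_UNORM_S8_UINT
                  ID3D11Texture2D* intermediate,
                  ID3D11Texture2D* staging1x1,
                  UINT x, UINT y)
{
    // Depth-stencil resources must be copied whole...
    context->CopyResource(intermediate, depthBuffer);

    // ...but the intermediate is an ordinary texture, so a 1x1 region
    // copy out of it is allowed.
    D3D11_BOX box = { x, y, 0, x + 1, y + 1, 1 };  // left, top, front,
                                                   // right, bottom, back
    context->CopySubresourceRegion(staging1x1, 0, 0, 0, 0,
                                   intermediate, 0, &box);

    // Map staging1x1 afterwards; the low 24 bits of the texel hold the
    // UNORM depth, so divide by 16777215.0f to get a value in [0, 1].
}
```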

Edited by xoofx


You can also just create a 1x1 render target texture, and use a simple pixel shader or compute shader that samples the appropriate texel from your depth buffer and writes the depth value to the 1x1 render target. If you do it this way you can have the GPU automatically handle the conversion from D24->F32, which is an added bonus.
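A sketch of the shader side, assuming the depth buffer is bound as an R24_UNORM_X8_TYPELESS shader resource view and the 1x1 target is R32_FLOAT; the constant-buffer layout and names are made up for illustration:

```cpp
// HLSL pixel shader: load one texel of the depth buffer and output it.
// The hardware converts the D24 value to float when sampling the SRV.
const char* kCopyDepthPS = R"(
Texture2D<float> DepthTexture : register(t0);
cbuffer Params : register(b0) { uint2 PixelCoord; }

float main() : SV_Target
{
    // Load the exact texel: no filtering, no manual conversion needed.
    return DepthTexture.Load(int3(PixelCoord, 0));
}
)";
// Draw a fullscreen triangle into the 1x1 R32_FLOAT target with this
// shader bound, then CopyResource the target to a 1x1 R32_FLOAT staging
// texture and Map it to read the float depth back on the CPU.
```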
