# Picking a 3D cursor position by sampling the depth buffer


## Recommended Posts

I am working on a method to pick in the 3D world and get the coordinates of where the mouse clicked on an object. Using the mouse x and y in viewport space along with sampling the depth buffer, I should be able to figure this out, but I am a bit lost here. I can sample my depth buffer (R32F format) by locking the surface and reading the surface bytes, but I am not sure how to interpret that data.



```cpp
//Copy the depth render target into a readable offscreen surface
if(_d3dDevice->GetRenderTargetData(_views[view].gBuffer._depthBufferSurf, _views[view].gBuffer._offscreenR32FSurface) != D3D_OK)
    ErrorMessenger::ReportMessage("Failed to GetRenderTargetData!", __FILE__, __LINE__);

//Lock surface data
D3DLOCKED_RECT surfaceData;
if(_views[view].gBuffer._offscreenR32FSurface->LockRect(&surfaceData, NULL, D3DLOCK_READONLY) != D3D_OK)
    ErrorMessenger::ReportMessage("Failed to LockRect!", __FILE__, __LINE__);

BYTE* bytePointer = (BYTE*)surfaceData.pBits;

//Get the byte offset of the texel under the mouse (4 bytes per R32F texel)
DWORD index = (point.x * 4) + (point.y * surfaceData.Pitch);

//Don't know what the values I am retrieving represent. Depth buffer is from 0.0f to 1.0f but I am getting values of 107, 114 etc...
BYTE depth = bytePointer[index];
```


##### Share on other sites

This is really weird, I just answered almost the exact same question...

Anyway, just compute the address of the texel and reinterpret the four bytes there as a float:

```cpp
float depth = *reinterpret_cast<float*>(bytePointer + index);
```

The values like 107 and 114 you were seeing are individual raw bytes of the 32-bit IEEE float, read one byte at a time.
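As an aside, reading through a pointer cast like this technically violates C++ strict-aliasing rules; copying the bytes with `memcpy` is the well-defined equivalent. A minimal sketch in plain C++, where `pBits` and `pitch` stand in for the `D3DLOCKED_RECT` fields:

```cpp
#include <cstdint>
#include <cstring>

// Read one R32F texel from a locked surface's byte buffer.
// pitch is the byte width of a row, which may be larger than width * 4.
float readDepthTexel(const std::uint8_t* pBits, int pitch, int x, int y)
{
    float value;
    // memcpy the four texel bytes into a float; well-defined,
    // and compilers optimize it to a plain load
    std::memcpy(&value, pBits + y * pitch + x * 4, sizeof(float));
    return value;
}
```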


##### Share on other sites

OK, that seems to be giving me the correct value. So the next question is turning my x, y, and now depth value into a 3D position. My lacking math skills are holding me back, and the only resources I have found online say to perform an inverse transformation on my x, y, and z. But I am unsure of the inverse of which matrices, and my experiments have thus far failed.

##### Share on other sites

Use the inverse of your view * projection matrix. This will take you from projection space, back through view space, to world space.

Also, your X and Y values need to be normalized so that they're in the range [-1, 1], where (-1, -1) is the bottom left of the screen and (1, 1) is the top right.
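To make the math concrete, here is a self-contained sketch of that unprojection in plain C++ (no D3DX; the helper names are illustrative stand-ins, though the code follows the same row-vector, left-handed conventions D3DX uses): take the mouse position in NDC, pair it with the sampled depth, multiply by the inverse of view * projection, and divide by w.

```cpp
#include <array>
#include <cmath>
#include <utility>

using Mat4 = std::array<std::array<double, 4>, 4>; // row-major, row-vector convention (v * M)

struct Vec3 { double x, y, z; };

// Multiply two 4x4 matrices.
Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Invert a 4x4 matrix by Gauss-Jordan elimination with partial pivoting.
Mat4 inverse(Mat4 m)
{
    Mat4 inv{};
    for (int i = 0; i < 4; ++i) inv[i][i] = 1.0;
    for (int col = 0; col < 4; ++col) {
        int piv = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(m[r][col]) > std::fabs(m[piv][col])) piv = r;
        std::swap(m[col], m[piv]);
        std::swap(inv[col], inv[piv]);
        double d = m[col][col];
        for (int j = 0; j < 4; ++j) { m[col][j] /= d; inv[col][j] /= d; }
        for (int r = 0; r < 4; ++r) {
            if (r == col) continue;
            double f = m[r][col];
            for (int j = 0; j < 4; ++j) { m[r][j] -= f * m[col][j]; inv[r][j] -= f * inv[col][j]; }
        }
    }
    return inv;
}

// Left-handed perspective projection, same layout as D3DXMatrixPerspectiveFovLH.
Mat4 perspectiveFovLH(double fovY, double aspect, double zn, double zf)
{
    double ys = 1.0 / std::tan(fovY / 2.0), xs = ys / aspect;
    Mat4 p{};
    p[0][0] = xs;
    p[1][1] = ys;
    p[2][2] = zf / (zf - zn);  p[2][3] = 1.0;
    p[3][2] = -zn * zf / (zf - zn);
    return p;
}

// Transform the row vector (x, y, z, 1) by m and divide by w,
// like D3DXVec3TransformCoord.
Vec3 transformCoord(const Vec3& v, const Mat4& m)
{
    double x = v.x * m[0][0] + v.y * m[1][0] + v.z * m[2][0] + m[3][0];
    double y = v.x * m[0][1] + v.y * m[1][1] + v.z * m[2][1] + m[3][1];
    double z = v.x * m[0][2] + v.y * m[1][2] + v.z * m[2][2] + m[3][2];
    double w = v.x * m[0][3] + v.y * m[1][3] + v.z * m[2][3] + m[3][3];
    return { x / w, y / w, z / w };
}

// NDC x,y in [-1,1] plus depth-buffer z in [0,1] back to world space.
Vec3 unproject(double ndcX, double ndcY, double depth, const Mat4& invViewProj)
{
    return transformCoord({ ndcX, ndcY, depth }, invViewProj);
}
```

A quick sanity check is the round trip: project a known world point with view * projection, feed the resulting NDC x/y and depth into `unproject` with the inverse matrix, and verify the original point comes back.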

##### Share on other sites

I'm sorry to keep bugging you on this subject, but I am getting some wonky results. Here is the code:

```cpp
POINT point = _inputReader->GetCursorAbs();

//Adjust mouse position to client space
point.x -= _views[view].location.left;
point.y -= _views[view].location.top;

if(point.x < 0 || point.y < 0)
    return nullptr;

//Normalize mouse coordinates to [-1,1]
D3DXVECTOR3 v;
v.x =  ( ( ( 2.0f * point.x ) / _appWidth  ) - 1 );
v.y = -( ( ( 2.0f * point.y ) / _appHeight ) - 1 );

//Get depth data into offscreen surface for reading
if(_d3dDevice->GetRenderTargetData(_views[view].gBuffer._depthBufferSurf, _views[view].gBuffer._offscreenR32FSurface) != D3D_OK)
    ErrorMessenger::ReportMessage("Failed to GetRenderTargetData!", __FILE__, __LINE__);

//Lock surface data
D3DLOCKED_RECT surfaceData;
if(_views[view].gBuffer._offscreenR32FSurface->LockRect(&surfaceData, NULL, D3DLOCK_READONLY) != D3D_OK)
    ErrorMessenger::ReportMessage("Failed to LockRect!", __FILE__, __LINE__);

//Get the byte offset of the texel under the mouse (4 bytes per R32F texel)
DWORD index = (point.x * 4) + (point.y * surfaceData.Pitch);

//Read the depth value as a float
BYTE* bytePointer = (BYTE*)surfaceData.pBits;
float depth = *reinterpret_cast<float*>(bytePointer + index);

//Unlock
_views[view].gBuffer._offscreenR32FSurface->UnlockRect();

//Solve mouse world position
D3DXMATRIX invViewProj;
D3DXMatrixMultiply(&invViewProj, &_camera->GetViewViewMatrix(0), &_camera->GetViewProjectionMatrix(0));
D3DXMatrixInverse(&invViewProj, 0, &invViewProj);

//D3DXVec3Transform needs an lvalue, not the address of a temporary
D3DXVECTOR3 ndc(v.x, v.y, depth);
D3DXVECTOR4 out;
D3DXVec3Transform(&out, &ndc, &invViewProj);

//Perspective divide to get the world position
*pos = LWXVector3(out.x / out.w, out.y / out.w, out.z / out.w);
```



This is really bugging me. If I can get this down it will open up tons of functionality that I need. Thank you again

##### Share on other sites

What is it that you are seeing that is wonky?  Are the results totally different than expected?  Do they come back infinite?  Try a test value that doesn't rely on the depth buffer and uses a known camera view position.  It is usually a good thing to try it with the camera at the origin and not rotated in world space first to ensure that everything else is working.  Just pass in a known value that you can determine what should be coming out of your conversion method.

Then try rotating the camera by 90 degrees on the Y axis, and reverify that it works.  Continue doing that until you are confident that your method works correctly.
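That camera-at-origin test can be sketched without touching D3D at all. Assuming the row-vector, left-handed conventions D3DX uses (the helpers below are illustrative stand-ins, not anyone's engine code), a camera rotated 90 degrees about Y looks down +X, so a world point on the +X axis should project to the center of the screen:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>; // row-major, row-vector convention (v * M)

struct Vec3 { double x, y, z; };

// Rotation about the Y axis, same layout as D3DXMatrixRotationY.
Mat4 rotationY(double a)
{
    Mat4 m{};
    m[0][0] = std::cos(a);  m[0][2] = -std::sin(a);
    m[1][1] = 1.0;
    m[2][0] = std::sin(a);  m[2][2] =  std::cos(a);
    m[3][3] = 1.0;
    return m;
}

// Left-handed perspective projection, same layout as D3DXMatrixPerspectiveFovLH.
Mat4 perspectiveFovLH(double fovY, double aspect, double zn, double zf)
{
    double ys = 1.0 / std::tan(fovY / 2.0), xs = ys / aspect;
    Mat4 p{};
    p[0][0] = xs;
    p[1][1] = ys;
    p[2][2] = zf / (zf - zn);  p[2][3] = 1.0;
    p[3][2] = -zn * zf / (zf - zn);
    return p;
}

// Transform the row vector (x, y, z, 1) by m and divide by w.
Vec3 transformCoord(const Vec3& v, const Mat4& m)
{
    double x = v.x * m[0][0] + v.y * m[1][0] + v.z * m[2][0] + m[3][0];
    double y = v.x * m[0][1] + v.y * m[1][1] + v.z * m[2][1] + m[3][1];
    double z = v.x * m[0][2] + v.y * m[1][2] + v.z * m[2][2] + m[3][2];
    double w = v.x * m[0][3] + v.y * m[1][3] + v.z * m[2][3] + m[3][3];
    return { x / w, y / w, z / w };
}
```

Remember that the view matrix is the inverse of the camera's world transform, so a camera rotated by +90 degrees has `rotationY(-pi/2)` as its view matrix.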

##### Share on other sites

Ok guys. Thank you for all your help. I have realized what is wrong and have fixed the problem.

##### Share on other sites

I might need something similar soon, so maybe you can post what the problem was and the solution to it?

Thanks
