Pick 3D cursor position by sampling the depth buffer

I am working on a method to pick in the 3D world and get the coordinates of where the mouse clicked on an object. Using the mouse x and y in viewport space along with a sample from the depth buffer, I should be able to figure this out, but I am a bit lost. I can sample my depth buffer (R32F format) by locking the surface and reading the surface bytes, but I am not sure how to interpret that data.

if(_d3dDevice->GetRenderTargetData(_views[view].gBuffer._depthBufferSurf, _views[view].gBuffer._offscreenR32FSurface) != D3D_OK)
    ErrorMessenger::ReportMessage("Failed to GetRenderTargetData!", __FILE__, __LINE__);

//Lock surface data
D3DLOCKED_RECT surfaceData;
_views[view].gBuffer._offscreenR32FSurface->LockRect(&surfaceData, 0, D3DLOCK_READONLY);

BYTE* bytePointer = (BYTE*)surfaceData.pBits;

//Get the index where the mouse is located
DWORD index = (point.x * 4 + (point.y * surfaceData.Pitch));

//Don't know what the values I am retrieving represent. The depth buffer is 0.0f to 1.0f, but I am getting values of 107, 114, etc.
//Read surface data
BYTE depth = bytePointer[index];

This is really weird, I just answered almost the exact same question...

Anyway, just compute the address of the texel, cast it to a float pointer, and dereference it:

float depth = *reinterpret_cast<float*>(bytePointer + index);
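For what it's worth, here is that idea as a small helper (the function name is mine, not from the engine above), assuming the offscreen surface really is D3DFMT_R32F, so each texel is a 4-byte float and Pitch is the row size in bytes:

//Sketch of reading one depth texel from a locked R32F surface.
//Byte offset of texel (x, y) = y * Pitch + x * sizeof(float).
float ReadDepthTexel(const D3DLOCKED_RECT& surfaceData, int x, int y)
{
    const BYTE* bytePointer = static_cast<const BYTE*>(surfaceData.pBits);
    const BYTE* texel = bytePointer + y * surfaceData.Pitch + x * sizeof(float);
    return *reinterpret_cast<const float*>(texel);
}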

OK, that seems to be giving me the correct value. So the next question is turning my x, y, and now depth value into a 3D position. My lacking math skills are holding me back; the only resources I have found online say to perform an inverse transformation on x, y, and z, but I am unsure which matrices to invert, and my experiments have so far failed.

Thank you for any help you can provide

Use the inverse of your view * projection matrix. This will take you from projection space, through view space, back to world space.

Also your X and Y values need to be normalized so that they're of the range [-1, 1], where (-1, -1) is the bottom left of the screen and (1, 1) is the top right.
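In other words, something along these lines (just a sketch with names of my own choosing; it assumes the value you read from the R32F buffer is the post-projection z/w depth, which in D3D is in the [0, 1] range):

D3DXVECTOR3 UnprojectCursor(float mouseX, float mouseY, float depth,
                            float width, float height,
                            const D3DXMATRIX& view, const D3DXMATRIX& proj)
{
    //Normalize to projection space: x,y in [-1, 1] (y flipped), z stays in [0, 1]
    D3DXVECTOR3 ndc;
    ndc.x =  ((2.0f * mouseX) / width)  - 1.0f;
    ndc.y = -(((2.0f * mouseY) / height) - 1.0f);
    ndc.z =  depth;

    //Invert view * projection, transform, then divide by w
    D3DXMATRIX viewProj, invViewProj;
    D3DXMatrixMultiply(&viewProj, &view, &proj);
    D3DXMatrixInverse(&invViewProj, NULL, &viewProj);

    D3DXVECTOR4 world;
    D3DXVec3Transform(&world, &ndc, &invViewProj);
    return D3DXVECTOR3(world.x / world.w, world.y / world.w, world.z / world.w);
}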

I'm sorry to keep bugging you about this, but I am getting some wonky results. Here is the code:

POINT point = _inputReader->GetCursorAbs();

    //Adjust mouse position to client space
    point.x -= _views[view].location.left;
    point.y -= _views[view].location.top;

    if(point.x < 0 || point.y < 0)
        return nullptr;

    //Normalize mouse coordinates to [-1,1]
    D3DXVECTOR3 v;
    v.x =  ( ( ( 2.0f * point.x ) / _appWidth  ) - 1 );
    v.y = -( ( ( 2.0f * point.y ) / _appHeight ) - 1 );
    v.z =  1.0f;
    
    //Get depth data into offscreen surface for reading
    if(_d3dDevice->GetRenderTargetData(_views[view].gBuffer._depthBufferSurf, _views[view].gBuffer._offscreenR32FSurface) != D3D_OK)
        ErrorMessenger::ReportMessage("Failed to GetRenderTargetData!", __FILE__, __LINE__);

    //Lock surface data
    D3DLOCKED_RECT surfaceData;
    _views[view].gBuffer._offscreenR32FSurface->LockRect(&surfaceData, 0, D3DLOCK_READONLY);
    
    //Get the index where the mouse is located
    DWORD index = (point.x * 4 + (point.y  * (surfaceData.Pitch)));

    //Find depth
    static float* depth = new float;
    BYTE* bytePointer = (BYTE*)surfaceData.pBits;
    depth = reinterpret_cast<float*>(bytePointer + index);

    //Unlock
    _views[view].gBuffer._offscreenR32FSurface->UnlockRect();

    if(depth)
    {
        //Solve mouse world position
        D3DXMATRIX invViewProj;
        D3DXMatrixMultiply(&invViewProj, &_camera->GetViewViewMatrix(0), &_camera->GetViewProjectionMatrix(0));
        D3DXMatrixInverse(&invViewProj, 0, &invViewProj);

        D3DXVECTOR4 out;
        D3DXVec3Transform(&out, &D3DXVECTOR3(v.x, v.y, *depth), &invViewProj);
                
        *pos = LWXVector3(out.x / out.w, out.y / out.w, out.z / out.w);
    }
 

This is really bugging me. If I can get this down, it will open up tons of functionality that I need. Thank you again.

What is it that you are seeing that is wonky? Are the results totally different from what you expect? Do they come back infinite? Try a test value that doesn't rely on the depth buffer, with a known camera position. It usually helps to start with the camera at the origin and not rotated in world space, to make sure everything else is working. Just pass in a known value for which you can work out what should come out of your conversion method.

Then try rotating the camera by 90 degrees on the Y axis, and reverify that it works. Continue doing that until you are confident that your method works correctly.
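For example, a quick hand-rolled check with made-up numbers (reusing the UnprojectCursor sketch from earlier): an identity view, a standard left-handed perspective projection, and the centre of the screen at maximum depth should come back as a point on the far plane straight ahead of the camera:

D3DXMATRIX view, proj;
D3DXMatrixIdentity(&view);
D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4.0f, 800.0f / 600.0f, 1.0f, 1000.0f);

//Centre of an 800x600 viewport at depth 1.0 -> expect roughly (0, 0, 1000)
D3DXVECTOR3 worldPos = UnprojectCursor(400.0f, 300.0f, 1.0f, 800.0f, 600.0f, view, proj);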

Ok guys. Thank you for all your help. I have realized what is wrong and have fixed the problem.

I might need something similar soon, so maybe you can post what the problem was and the solution to it?

Thanks

