These_Violent_delights

Querying the depth buffer

Posted (edited)

#include <d3d9.h>
#include <iostream>
// PLUGIN_EXPORT and the filledBackBuffer struct come from the game's plugin SDK headers.

PLUGIN_EXPORT void WINAPI OnFilledBackBuffer(filledBackBuffer *p, DWORD paramSize) {

  std::cout << "Locking now" << std::endl;
  D3DLOCKED_RECT lockedRect; // LockRect fills this struct in, so pass its address

  if (SUCCEEDED(p->depthBuffer->LockRect(&lockedRect, NULL, D3DLOCK_READONLY)))
  {
      std::cout << "Lock successful" << std::endl;
      // lockedRect.pBits / lockedRect.Pitch would be read here before unlocking
      p->depthBuffer->UnlockRect();
      std::cout << "Unlocked" << std::endl;
  }
  else
  {
      std::cout << "Failed lock attempt" << std::endl;
  }

}

Hello everyone.
 
I work for a company that is using an old computer game for visualisations. The game has an open plugin architecture, but we don't have access to the source code or the ability to inject our own shaders, etc.

Essentially, what we want to do is query the depth buffer to get an approximate per-pixel depth value for the scene on screen (approximate is fine - we can deduce the rest to a degree). We want this in order to simulate a ladar/lidar scan of the image, much like a drone could potentially produce.

Through one of the engine callbacks we are given a struct containing two IDirect3DSurface9 pointers, one for the depth buffer and one for the render target. This callback is called once the back buffer has been filled, and is aptly named 'OnFilledBackBuffer'.

I am currently attempting to lock the depth buffer via the code above; however, the lock always fails and then causes a crash to desktop (CTD).

I suspect this is because the depthBuffer is not lockable... How else should I be doing this?

Thanks

Guy

Edited by These_Violent_delights

Posted (edited)

I am currently attempting to lock the depth buffer via the code above; however, the lock always fails and then causes a crash to desktop (CTD).

Yep: "The only lockable format for a depth-stencil surface is D3DFMT_D16_LOCKABLE."
That shouldn't CTD, though; it should just return a failure code...
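If you want to confirm that, you can query the surface description and check the format. A minimal sketch, assuming p->depthBuffer is the IDirect3DSurface9* from your callback:

    D3DSURFACE_DESC desc;
    if (SUCCEEDED(p->depthBuffer->GetDesc(&desc)))
    {
        // Per the docs quote above, only D3DFMT_D16_LOCKABLE depth surfaces can be locked.
        std::cout << "Depth format enum: " << static_cast<int>(desc.Format)
                  << (desc.Format == D3DFMT_D16_LOCKABLE ? " (lockable)" : " (not lockable)")
                  << std::endl;
    }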

Ideally you would make sure that the game creates an INTZ-format depth buffer, which is readable by the GPU; then, at the end of the frame, issue a draw call that has the GPU read the depth buffer and copy it into another (mappable) texture, and finally use GetRenderTargetData to copy it to the CPU side.
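Very roughly, the read-back part of that could look like the sketch below. It's untested, and 'device', 'depthCopyRT', 'width', 'height' and 'format' are all stand-ins for whatever you end up with after resolving the depth into an ordinary colour render target:

    // Copy a (colour) render target containing the resolved depth back to the CPU.
    IDirect3DSurface9* sysmemSurf = NULL;
    if (SUCCEEDED(device->CreateOffscreenPlainSurface(width, height, format,
                                                      D3DPOOL_SYSTEMMEM, &sysmemSurf, NULL)))
    {
        // GPU -> CPU copy; both surfaces must have the same dimensions and format.
        if (SUCCEEDED(device->GetRenderTargetData(depthCopyRT, sysmemSurf)))
        {
            D3DLOCKED_RECT lr;
            if (SUCCEEDED(sysmemSurf->LockRect(&lr, NULL, D3DLOCK_READONLY)))
            {
                // lr.pBits now points at the pixel data, lr.Pitch bytes per row.
                sysmemSurf->UnlockRect();
            }
        }
        sysmemSurf->Release();
    }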

You may have to go down the route of hooking all the D3D9 functions so that you can insert your own code at different points during the game's rendering process... That lets you use your own shaders and control any other part of their rendering. That is, you can make a fake d3d9.dll, which you fully control; the game will then send all of its D3D9 calls to you, and you can forward them on to the real d3d9.dll after making any adjustments you need.
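The skeleton of such a proxy DLL is pretty small. Something along these lines (only the Direct3DCreate9 forwarder is shown; a real wrapper would also wrap the returned IDirect3D9/IDirect3DDevice9 interfaces so you can intercept individual calls, and the export itself normally goes through a .def file so the name isn't decorated):

    #include <windows.h>
    #include <d3d9.h>

    typedef IDirect3D9* (WINAPI *Direct3DCreate9_t)(UINT);

    extern "C" IDirect3D9* WINAPI Direct3DCreate9(UINT SDKVersion)
    {
        // Load the real d3d9.dll from the system directory rather than the game folder.
        char realPath[MAX_PATH];
        GetSystemDirectoryA(realPath, MAX_PATH);
        lstrcatA(realPath, "\\d3d9.dll");
        HMODULE realDll = LoadLibraryA(realPath);
        if (!realDll)
            return NULL;

        Direct3DCreate9_t realCreate =
            (Direct3DCreate9_t)GetProcAddress(realDll, "Direct3DCreate9");

        // This is where you'd wrap the returned interface in your own class
        // before handing it back to the game.
        return realCreate ? realCreate(SDKVersion) : NULL;
    }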

Another option would be to bind your own colour-target and their depth-target, and then render a LOT of full-screen quads at varying depth values, with different colours, and use the depth-test to basically quantize and copy ranges of depth values into your colour target.

e.g. render a quad that's at NDC z=254/255, with depth test set less_equal. Use a pixel shader that writes 254/255 to the colour target. Now render a quad that's at z=253/255 and use a pixel shader that writes 253/255 to the colour target, etc... 200ish fullscreen draw calls later and you've got an 8bit version of the depth buffer in a regular render target that you can then copy over to the CPU.
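In fixed-function D3D9 terms that loop might look something like the sketch below. It's untested, uses the vertex diffuse colour instead of a pixel shader to keep it short, and draws the slices near-to-far so that the last slice which still passes the depth test leaves the final value. It assumes 'device' is the game's IDirect3DDevice9*, that your own colour target plus the game's depth surface are already bound via SetRenderTarget / SetDepthStencilSurface, and that 'screenW'/'screenH' are the viewport size:

    struct QuadVert { float x, y, z, rhw; DWORD colour; };
    const DWORD fvf = D3DFVF_XYZRHW | D3DFVF_DIFFUSE;

    device->SetRenderState(D3DRS_ZENABLE, TRUE);
    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);   // test against their depth, don't modify it
    device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
    device->SetRenderState(D3DRS_LIGHTING, FALSE);
    device->SetFVF(fvf);

    for (int i = 1; i <= 255; ++i)
    {
        float z = i / 255.0f;                 // XYZRHW z maps straight to the depth value
        DWORD c = D3DCOLOR_XRGB(i, i, i);     // write the slice index as a grey level
        QuadVert quad[4] = {
            { -0.5f,          -0.5f,          z, 1.0f, c },
            { screenW - 0.5f, -0.5f,          z, 1.0f, c },
            { -0.5f,          screenH - 0.5f, z, 1.0f, c },
            { screenW - 0.5f, screenH - 0.5f, z, 1.0f, c },
        };
        device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(QuadVert));
    }
    // The bound colour target now holds an 8-bit quantised copy of the depth buffer,
    // which can be read back to the CPU with GetRenderTargetData as in the earlier sketch.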

Edited by Hodgman

Thanks for the reply, Hodgman.

Sadly the d3d9.dll is hash-checked by the engine... you don't happen to know of Kegetys, do you? :D

I will look into copying it into another buffer and then copying it back to the CPU side - thanks!

Guy

