Apollo13

D3D9 - How to linearize a depth buffer obtained from a closed-source application


Hey guys, I'm hoping for help with the following problem - hopefully an easy one for you, but I am an absolute beginner...

I want to simulate a Lidar in a closed-source D3D9 game (ArmA 2-based, I think), meaning I want to get the pixel positions in view space. For that I intercept the D3D9 calls (via a proxy DLL) and retrieve the handle to the depth buffer texture (R32F), which I have identified. I've attached an example of the contents of this depth buffer (just StretchRect'ed to the back buffer) and the corresponding rendered image.
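For reference, this is roughly how I read the buffer back on the CPU (a minimal sketch with my own function and variable names, error handling omitted; it assumes the R32F texture is a non-multisampled render target):

#include <d3d9.h>
#include <vector>

// Copy the R32F depth texture into a system-memory surface and read the
// raw float depth values on the CPU.
std::vector<float> ReadDepthR32F(IDirect3DDevice9* dev, IDirect3DTexture9* depthTex)
{
    IDirect3DSurface9* src = NULL;
    depthTex->GetSurfaceLevel(0, &src);

    D3DSURFACE_DESC desc;
    src->GetDesc(&desc);  // I expect desc.Format == D3DFMT_R32F here

    IDirect3DSurface9* sys = NULL;
    dev->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format,
                                     D3DPOOL_SYSTEMMEM, &sys, NULL);
    dev->GetRenderTargetData(src, sys);  // GPU -> CPU copy

    D3DLOCKED_RECT lr;
    sys->LockRect(&lr, NULL, D3DLOCK_READONLY);

    std::vector<float> depth(desc.Width * desc.Height);
    for (UINT y = 0; y < desc.Height; ++y)
    {
        const float* row = (const float*)((const BYTE*)lr.pBits + y * lr.Pitch);
        for (UINT x = 0; x < desc.Width; ++x)
            depth[y * desc.Width + x] = row[x];  // raw depth, apparently in [0,1]
    }

    sys->UnlockRect();
    sys->Release();
    src->Release();
    return depth;
}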

Now I am trying to transform the depth buffer values into view space, but I keep failing at this. I am following the approach described in this stackoverflow post. The only change I made, after seeing that it did not work as-is, was to drop the first line 'z = depth*2.0 - 1.0' and use the depth value directly: as far as I know, the D3D9 projection matrix (from MSDN: https://msdn.microsoft.com/de-de/library/windows/desktop/bb147302(v=vs.85).aspx, with Q = Zf/(Zf-Zn)) is already set up to output depth in [0,1], so its inverse should be directly usable for the transformation from clip space back to view space. A sketch of my current code follows.
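Concretely, this is what I am doing per pixel (D3DX, row-vector convention, so clip = view * proj; all names are mine):

#include <d3dx9.h>

// Unproject one depth sample: pixel (px, py) in a (w x h) buffer, depth
// value d as read from the R32F texture, 'proj' being the projection matrix.
D3DXVECTOR3 PixelToViewSpace(const D3DXMATRIX& proj,
                             float d, float px, float py, float w, float h)
{
    // Pixel center -> normalized device coordinates: x,y in [-1,1], y flipped.
    // No 'z = d*2 - 1' remap, since D3D9 depth is already in [0,1].
    D3DXVECTOR3 ndc;
    ndc.x =  ((px + 0.5f) / w) * 2.0f - 1.0f;
    ndc.y = -(((py + 0.5f) / h) * 2.0f - 1.0f);
    ndc.z =  d;

    D3DXMATRIX invProj;
    D3DXMatrixInverse(&invProj, NULL, &proj);

    // Transform (ndc, 1) by the inverse projection. The result is a
    // homogeneous point, hence the division by w afterwards (my question 2).
    D3DXVECTOR4 hv;
    D3DXVec3Transform(&hv, &ndc, &invProj);
    return D3DXVECTOR3(hv.x / hv.w, hv.y / hv.w, hv.z / hv.w);
}

(As far as I can tell, D3DXVec3TransformCoord would do the transform plus the divide in one call.)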

I should mention that I can query the game API for the view frustum values (left and right angle, top and bottom angle, near and far plane) - how I rebuild the projection matrix from them is sketched below. But as you can see, a large part of the depth buffer is black.
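This is how I rebuild the projection matrix from those values (a sketch; I am assuming the reported angles are half-angles in radians, measured from the view axis):

#include <d3dx9.h>
#include <math.h>

// Rebuild the projection from the frustum values the game API reports.
D3DXMATRIX BuildProjFromFrustum(float leftAngle, float rightAngle,
                                float bottomAngle, float topAngle,
                                float zn, float zf)
{
    // Convert the half-angles into frustum extents on the near plane.
    float l = -zn * tanf(leftAngle);
    float r =  zn * tanf(rightAngle);
    float b = -zn * tanf(bottomAngle);
    float t =  zn * tanf(topAngle);

    // This produces the MSDN-style matrix; Q = zf/(zf-zn) ends up in _33.
    D3DXMATRIX proj;
    D3DXMatrixPerspectiveOffCenterLH(&proj, l, r, b, t, zn, zf);
    return proj;
}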

So here are my questions:

1) What I don't quite understand is that the whole depth buffer value range appears to be inverted, meaning far objects are dark (while a color of (1,1,1,1) is white). Are there any other projection matrices commonly used with D3D9 that have this behavior and that I could try? My guess at one (near and far swapped) is in the sketch after these questions.

2) Is the approach shown in the mentioned stackoverflow post a valid one? In particular, I'd really like to know whether the division by w after applying the inverse projection matrix is correct - I thought this division was needed to get from clip space to normalized device coordinates, so why is it necessary here, in the opposite direction? I've added a small round-trip check below.

3) Is there another approach I could use to get the pixel positions in view space? One idea I had is also in the sketch below.
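To make 1) and 3) concrete, here is what I derived myself from the MSDN matrix for linearizing the depth value directly, plus the variant with near and far swapped (a "reversed" projection, which would at least explain why far objects come out dark) - this is my own derivation, so please double-check it:

// Standard MSDN projection: d = Q - Q*zn/zView with Q = zf/(zf-zn).
// Solving for zView gives:
float LinearizeStandard(float d, float zn, float zf)
{
    return zn * zf / (zf - d * (zf - zn));  // d=0 -> zn, d=1 -> zf
}

// Same derivation with near and far swapped ("reversed" depth, d=1 at the
// near plane, d=0 at the far plane - matching what I see in my buffer?):
float LinearizeReversed(float d, float zn, float zf)
{
    return zn * zf / (zn + d * (zf - zn));  // d=1 -> zn, d=0 -> zf
}

With zView known, x and y would then follow without any matrix inverse - assuming a symmetric frustum (proj._31 == proj._32 == 0): xView = ndc.x * zView / proj._11 and yView = ndc.y * zView / proj._22. Is that a valid alternative for 3)?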
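And for 2), here is the small round-trip check I did to convince myself about the division by w. If I did the math right, the inverse projection applied to an NDC point returns a homogeneous point whose w is generally not 1 (it comes out as 1/zView), because a projective transform is only defined up to scale; the original view-space point only comes back after the divide:

#include <d3dx9.h>
#include <stdio.h>

int main()
{
    D3DXMATRIX proj, invProj;
    D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4, 16.0f / 9.0f, 1.0f, 1000.0f);
    D3DXMatrixInverse(&invProj, NULL, &proj);

    D3DXVECTOR3 view(2.0f, 1.0f, 50.0f);       // arbitrary view-space point

    D3DXVECTOR4 clip;
    D3DXVec3Transform(&clip, &view, &proj);    // view -> clip
    D3DXVECTOR3 ndc(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w);

    D3DXVECTOR4 back;
    D3DXVec3Transform(&back, &ndc, &invProj);  // NDC -> homogeneous view space
    printf("w = %f\n", back.w);                // 1/50 = 0.02 here, not 1
    printf("view = (%f, %f, %f)\n",            // (2, 1, 50) only after the divide
           back.x / back.w, back.y / back.w, back.z / back.w);
    return 0;
}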

Thanks in advance!

Color2.png

Depth2.png

Edited by Apollo13
