Are you familiar with the various spaces in the rendering pipeline? For example, your transforms typically go from object (model) space to world space to view space, and these are all simple affine transformations that just change the orientation and position of the origin relative to the previous space.
The projection matrix is different, though: together with the perspective divide, it warps the geometry of the scene so that a frustum-shaped chunk of the scene fits into a cube. This non-linear behavior is what I suspect is causing your issue.
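To see the non-linearity concretely, here is a small sketch of how a perspective projection maps view-space distance to NDC depth. It assumes OpenGL-style conventions (NDC depth in [-1, 1]) and made-up near/far planes of 0.1 and 100; the function name `ndc_depth` is just illustrative:

```python
def ndc_depth(d, n=0.1, f=100.0):
    """NDC depth for a point at view-space distance d in front of the camera.

    clip.z = ((f + n) * d - 2 * f * n) / (f - n), clip.w = d,
    and NDC depth is clip.z / clip.w (the perspective divide).
    """
    return ((f + n) * d - 2.0 * f * n) / ((f - n) * d)

for d in (0.1, 1.0, 10.0, 50.0, 100.0):
    print(f"distance {d:6.1f} -> NDC depth {ndc_depth(d):+.4f}")
```

Note how the first few meters eat almost the entire [-1, 1] depth range: halfway through the frustum in view space is nowhere near depth 0 in NDC. This is exactly why applying fixed offsets directly in depth-buffer space gives radii that vary wildly with distance.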
So the steps you need to implement to find out if this is the case are all in your shader:
- For the pixel currently being calculated, find its view-space position. You will need to instrument your shader for this: either pass the view-space position through your vertex attributes, or pass an inverse projection matrix in your constant buffers and reconstruct the position from the depth buffer.
- When you apply the offsets for your depth samples, they are now applied to that view-space position. They will also be in your regular world units (meters, or whatever unit you use), so it is much easier to reason about how large the sample radius actually is.
- However, to look up where that offset 3D view-space location lands in your depth buffer, you need to re-project the point and find its location in the depth buffer. This can either use the projection matrix, or you can do the simple closed-form math on the xy coordinates (since those are what you need to find the depth buffer location).
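The steps above can be sketched end-to-end in plain Python. This is an assumed setup, not your engine's code: OpenGL-style clip space (view space looks down -Z, NDC in [-1, 1]), made-up near/far/FOV values, and illustrative names `unproject` / `reproject`. In a real shader you would do the same math with your actual projection matrix (or its inverse):

```python
import math

N, F = 0.1, 100.0                       # near/far planes (assumed values)
FOV, ASPECT = math.radians(60.0), 16.0 / 9.0
TAN_HALF = math.tan(FOV / 2.0)

def unproject(uv, depth):
    """Depth-buffer sample (uv and depth in [0,1]) -> view-space position."""
    x_n, y_n = uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0
    z_n = depth * 2.0 - 1.0
    # Closed-form inverse of the perspective transform for z:
    z_v = -2.0 * F * N / ((F + N) - z_n * (F - N))
    # x/y scale linearly with distance from the camera (-z_v):
    x_v = x_n * ASPECT * TAN_HALF * -z_v
    y_v = y_n * TAN_HALF * -z_v
    return (x_v, y_v, z_v)

def reproject(p):
    """View-space position -> (uv, depth) location in the depth buffer."""
    x_v, y_v, z_v = p
    w = -z_v                            # clip.w for a perspective projection
    x_n = x_v / (ASPECT * TAN_HALF) / w
    y_n = y_v / TAN_HALF / w
    z_n = ((F + N) * -z_v - 2.0 * F * N) / ((F - N) * w)
    return ((x_n * 0.5 + 0.5, y_n * 0.5 + 0.5), z_n * 0.5 + 0.5)

# The three steps: sample -> view space -> offset in meters -> back to buffer.
p = unproject((0.5, 0.5), 0.9)
offset = (p[0] + 0.5, p[1], p[2])       # half-meter sideways offset
uv, depth = reproject(offset)
```

A useful sanity check while debugging: unprojecting a sample and reprojecting it with no offset must return exactly the uv/depth you started with. If it doesn't, the two transforms disagree and every offset sample will land in the wrong place.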
Have you tried to implement any of these steps yet? If so, which ones are you getting hung up on?