I'm officially stumped. I've tried every way there is, even the bad ones.
I have a special setup where I write the ACTUAL view-space depth from my shaders to a texture, so what I end up with is (R, G, B, DEPTH). The texture is 16-bit float, so precision shouldn't be an issue, especially considering I'm using this value in many other places.
Anyways,
following the article here: http://www.opengl.org/wiki/Compute_eye_space_from_window_space
or just about any other article on the subject, everything I try almost works. Yes, it works, but the edges are... imprecise. Basically, the further from the center of the screen, the worse it gets. But not by much.
The eye_direction vector is set up like this, in the fullscreen vertex shader:
eye_direction = vec4(gl_Position.xy * nearPlaneHalfSize, -1.0, 1.0);
or, in other words
eye_direction = vec4((texCoord.xy * 2.0 - 1.0) * nearPlaneHalfSize, -1.0, 1.0);
if you will.
nearPlaneHalfSize is as per the article:
this->FOV = config.get("frustum.fov", 61.0f);
...
float halfTan = tan(this->FOV * pio180 / 2.0); // pio180 = pi / 180
nearPlaneHalfSize = vec2(halfTan * wnd.SA, halfTan); // wnd.SA = aspect ratio (width / height)
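For concreteness, here's that same computation stripped of my engine types (plain C++ sketch; the 61-degree FOV and the struct/function names are just for illustration):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Half-size of the near plane at distance 1 from the eye.
// fovDegrees is the vertical field of view; aspect = width / height.
Vec2 nearPlaneHalfSize(float fovDegrees, float aspect)
{
    const float pio180 = 3.14159265358979f / 180.0f;
    float halfTan = std::tan(fovDegrees * pio180 / 2.0f);
    return { halfTan * aspect, halfTan };
}
```

Multiplying a point at z = -1 by these half-sizes sweeps out the whole near plane as the NDC coordinates go from -1 to +1, which is exactly what the ray setup above relies on.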
And in the fragment shader, reconstructing coordinates as seen from camera:
// reconstruct eye coordinates
vec4 cofs = eye_direction * matview; // transpose mult
cofs.xyz *= depth * ZFAR;
Now all we have to do is add the camera position back, and that's it.
Except.. it's ALMOST there.
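To convince myself the scale math itself is sound, here's a CPU round-trip of the reconstruction (plain C++ sketch; ZFAR, the sample point, and the function names are made up, and I stay in view space, skipping the matview rotation):

```cpp
#include <cmath>

const float ZFAR = 1000.0f;

// Store depth the "flat" way: view-space -z (camera looks down -z),
// scaled into [0,1] by the far plane.
float storePlanarDepth(float pz)
{
    return -pz / ZFAR;
}

// Reconstruct: the ray through the pixel has z = -1, so for a view-space
// point P it is (P.x / -P.z, P.y / -P.z, -1). Scaling it by depth * ZFAR
// should land exactly back on P.
void reconstruct(float rayX, float rayY, float depth,
                 float* outX, float* outY, float* outZ)
{
    float scale = depth * ZFAR;
    *outX = rayX * scale;
    *outY = rayY * scale;
    *outZ = -1.0f * scale;
}
```

With flat depth this round-trips exactly for any point, center or edge, which is why I suspect the stored depth value rather than the ray math.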
Looking at other people's reconstructions, I see that this approach works beautifully for them.
So, considering that my depth value is in view-space, is this the problem?
Because I believe depth-texture values are usually flat against the camera plane (the same value everywhere on a plane parallel to the near plane, i.e. -z), while my depth value is.. non-flat (the distance from the camera position).
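To illustrate what I mean (plain C++ sketch; ZFAR and the sample point are made up): if the texture holds the radial distance length(P) instead of -P.z, then scaling the z = -1 ray by depth * ZFAR overshoots, and the overshoot grows with the angle from the view axis. Normalizing the ray first would compensate:

```cpp
#include <cmath>

const float ZFAR = 1000.0f;

// Reconstructed z assuming "flat" depth (-z / ZFAR): scale the z = -1 ray.
// If the stored depth is actually radial (length(P) / ZFAR), this is wrong
// everywhere except the exact screen center, where the two conventions agree.
float flatReconstructZ(float radialDepth)
{
    return -1.0f * radialDepth * ZFAR;
}

// Correct reconstruction for radial depth: walk along the *normalized* ray.
float radialReconstructZ(float rayX, float rayY, float radialDepth)
{
    float len = std::sqrt(rayX * rayX + rayY * rayY + 1.0f);
    return (-1.0f / len) * radialDepth * ZFAR;
}
```

At the screen center the ray is (0, 0, -1), its length is 1, and both versions agree; toward the edges the ray length grows, so the flat version overestimates the distance by exactly that factor, which matches the edge error I'm seeing.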