Boulotaur2024

Screen-space to camera-space conversion without any matrix at hand


OK, I know it sounds really stupid, but... do you know if conversion from screen-space to camera-space coordinates is possible without using any matrix transformation at all?

 

I'll explain why I'm asking. Basically I'm trying to mod a certain DX9 game by hooking it and applying various effects on top of it. I've got a few SSAO implementations working nicely already, but I noticed that some of these SSAO shaders require camera-space positions (specifically, I'm trying to port Scalable Ambient Obscurance to DX9), and they most certainly don't work properly without them.

 

Oh, and... retrieving the game's original projection matrix is not possible in my case :/

 

So all I have is the screen-space position from my very simple vertex shader:

// Simple pass-through vertex shader for a full-screen quad:
// positions and texture coordinates go through unchanged.
VSOUT FrameVS(VSIN IN)
{
	VSOUT OUT = (VSOUT)0.0f;
	OUT.vertPos = IN.vertPos;
	OUT.UVCoord = IN.UVCoord;
	return OUT;
}

Which is certainly insufficient to convert from screen-space to camera-space/view-space position, right?

 

Now, something that still puzzles me to this day (and you're going to laugh at me, because I copy/pasted some magic code that works, to some extent, but that I can't really understand):

const float fovy         = 40.0 * 3.14159265 / 180.0; // 40 degrees in radians (a guess at the game's FOV)
const float invFocalLenX = tan(fovy * 0.5) * width / height;
const float invFocalLenY = tan(fovy * 0.5);

vec3 uv_to_eye(vec2 uv, float eye_z)
{
   uv = (uv * vec2(2.0, -2.0) - vec2(1.0, -1.0)); // remap [0,1] texture coords to [-1,1] NDC, flipping y
   return vec3(uv * vec2(invFocalLenX, invFocalLenY) * eye_z, eye_z); // unproject to eye-space position
}

vec3 fetch_eye_pos(vec2 uv)
{
   float z = texture2D(tex1, uv).a; // linear eye-space depth stored in the depth/normal buffer's alpha
   return uv_to_eye(uv, z);
}

Correct me if I'm wrong, but from screen space this code should get me back to an eye-space position (eye space is the same as camera space, right?).

 

... And it doesn't use any matrix transformation at all...

... And it does work, at least for my HBAO shader. So I was mostly happy copy/pasting this without having to understand how the magic happens... but now that it doesn't work at all with SAO (Scalable Ambient Obscurance), I realize I'm mostly clueless about how all these things work, and about what the missing piece of the puzzle is.
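For what it's worth, the round trip can be checked outside the shader. Here's a minimal Python sketch (the 40-degree FOV matches the constant above; the 1280x720 resolution is a made-up example) that projects a known eye-space point forward and then applies the same math as uv_to_eye to the result:

```python
import math

fovy = math.radians(40.0)
width, height = 1280.0, 720.0
inv_focal_x = math.tan(fovy * 0.5) * width / height
inv_focal_y = math.tan(fovy * 0.5)

def eye_to_uv(p):
    # Forward perspective projection to [0,1] texture coords (y pointing down).
    x_ndc = p[0] / (p[2] * inv_focal_x)
    y_ndc = p[1] / (p[2] * inv_focal_y)
    return ((x_ndc + 1.0) * 0.5, (1.0 - y_ndc) * 0.5)

def uv_to_eye(uv, eye_z):
    # Same arithmetic as the GLSL snippet above.
    x = (uv[0] * 2.0 - 1.0) * inv_focal_x * eye_z
    y = (uv[1] * -2.0 + 1.0) * inv_focal_y * eye_z
    return (x, y, eye_z)

p = (0.3, -0.2, 5.0)
uv = eye_to_uv(p)
q = uv_to_eye(uv, p[2])
print(q)  # recovers (0.3, -0.2, 5.0) up to rounding
```

The key is that the inverse only works when eye_z is the *linear* eye-space depth and the FOV guess matches the game's actual projection.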

 

Sorry for sounding so ignorant. I am :)

 

vec3 uv_to_eye(vec2 uv, float eye_z)
{
   uv = (uv * vec2(2.0, -2.0) - vec2(1.0, -1.0));
   return vec3(uv * vec2(invFocalLenX, invFocalLenY) * eye_z, eye_z); // get eye position
}
is the same as
vec3 uv_to_eye(vec2 uv, float eye_z)
{
   vec4 imageCoord = vec4(uv, eye_z, 1.0);
   vec4 screenCoord = mat4( 2.0, 0.0, 0.0, 0.0,
                            0.0,-2.0, 0.0, 0.0,
                            0.0, 0.0, 1.0, 0.0,
                           -1.0, 1.0, 0.0, 1.0 ) // note that this is column major
			* imageCoord;
   vec4 eyeSpaceCoord =  mat4( invFocalLenX*eye_z,                0.0, 0.0, 0.0,
                               0.0,                invFocalLenY*eye_z, 0.0, 0.0,
                               0.0,                               0.0, 1.0, 0.0,
                               0.0,                               0.0, 0.0, 1.0 ) // note that this is column major
			* screenCoord;

   return eyeSpaceCoord.xyz; // eyeSpaceCoord.w == 1
}
which is the same as
vec3 uv_to_eye(vec2 uv, float eye_z)
{
   vec4 imageCoord = vec4(uv, eye_z, 1.0);

   vec4 eyeSpaceCoord =  mat4( 2.0 * invFocalLenX*eye_z,                       0.0, 0.0, 0.0,
                               0.0,                      -2.0 * invFocalLenY*eye_z, 0.0, 0.0,
                               0.0,                                            0.0, 1.0, 0.0,
                               -invFocalLenX*eye_z,             invFocalLenY*eye_z, 0.0, 1.0 ) // note that this is column major
			* imageCoord;

   return eyeSpaceCoord.xyz; // eyeSpaceCoord.w == 1
}
modulo any typos I might have built in.

So what that code does can be expressed as a matrix multiplication. It just happens that it doesn't use the built-in matrix type and operators.

So to circle back to the original question:

OK I know it sounds really stupid but... do you know if conversion from screen-space to camera-space coordinates is possible without using any matrix transformation at all ?

Well, that depends on the definition of "using any matrix transformation". If you mean explicitly using the mat4 * vec4 operator of GLSL, then yes, it is possible to do it without. If you mean any code or formula that could be converted into a matrix multiplication (like above), then no: the performed operation is linear (in homogeneous coordinates), and every linear operation can be expressed as a matrix multiplication.

Edited by Ohforf sake
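A quick plain-Python sanity check of that equivalence (the invFocalLenX/Y and input values are arbitrary test numbers): the direct arithmetic from the original snippet and the combined matrix, written out row by row, produce the same vector.

```python
inv_focal_x, inv_focal_y = 0.8, 0.45   # arbitrary test values
u, v, eye_z = 0.25, 0.6, 3.0

# Direct form from the original snippet.
ndc_x = u * 2.0 - 1.0
ndc_y = v * -2.0 + 1.0
direct = (ndc_x * inv_focal_x * eye_z, ndc_y * inv_focal_y * eye_z, eye_z)

# The combined matrix from above, written out as rows
# (the GLSL listing is column-major, so these are its transposed columns).
M = [
    [2.0 * inv_focal_x * eye_z, 0.0,                        0.0, -inv_focal_x * eye_z],
    [0.0,                       -2.0 * inv_focal_y * eye_z, 0.0,  inv_focal_y * eye_z],
    [0.0,                       0.0,                        1.0,  0.0],
]
coord = (u, v, eye_z, 1.0)
matrix_form = tuple(sum(m * c for m, c in zip(row, coord)) for row in M)

print(all(abs(a - b) < 1e-12 for a, b in zip(direct, matrix_form)))  # prints True
```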


screen-space-mat = world * view * projection converts world positions to x/y in (-1, 1) and a depth in (0, 1)

 

For a perspective projection (referencing Luna's "3D Game Programming"):

 

x = worldX / ( worldZ*R*tan(a/2) ), where R is the aspect ratio (width/height) and a is the vertical fov.

 

y = worldY / ( worldZ*tan(a/2) )

 

The "magic" code is inverting those equations by guessing at the fov of the perspective matrix. It's merely using the vector math that would result from matrix multiplication.

 

Ninja'd (sort of)

 

EDIT: I missed the normalization step for worldX/worldZ. However, the description above is what's happening. The principle is:

 

screen-space = world * view * projection

so, inverse-world * screen-space * inverse-projection = [ inverse-world * world ] * view * [ projection * inverse-projection ] = view,

where the bracketed expressions resolve to the identity matrix.

Edited by Buckeye


It cannot be done without the original projection matrix parameters, since the screen-space position is a function of the projection matrix used...

Remember: projection and the perspective divide get you into NDC coordinates; those are then scaled by the viewport parameters to produce your window/screen coordinates.
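In concrete terms, that last step of the chain (NDC to window coordinates) looks something like the following sketch; the 1280x720 viewport is just an example:

```python
def ndc_to_window(x_ndc, y_ndc, vp_x, vp_y, vp_w, vp_h):
    # Standard viewport transform: NDC in [-1, 1] to window pixels,
    # with y flipped because window coordinates grow downward.
    wx = vp_x + (x_ndc + 1.0) * 0.5 * vp_w
    wy = vp_y + (1.0 - y_ndc) * 0.5 * vp_h
    return wx, wy

print(ndc_to_window(0.0, 0.0, 0, 0, 1280, 720))  # (640.0, 360.0): NDC origin is the screen center
```

Going from a pixel back to NDC is just this transform run in reverse, which is the first thing the uv_to_eye snippet does.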
 


OK, I suspected the answer would be 'no', but I wanted to give it a shot anyway.

*sigh* 

 

Thanks for the detailed answers! I wish I were as comfortable as you with vector maths.

 

EDIT: if a projection matrix is absolutely needed to go from screen space to camera space, wouldn't I be able to build it myself on the CPU before passing it to my shader (with arbitrary nearZ/farZ, of course)?

Like so: http://stackoverflow.com/posts/18406650/revisions
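Building one on the CPU would look something like this sketch, assuming a D3D-style left-handed perspective matrix (same layout as D3DXMatrixPerspectiveFovLH); the FOV, aspect, and near/far values here are guesses that would have to be tuned to match the game:

```python
import math

def perspective_fov_lh(fovy, aspect, z_near, z_far):
    # Row-major layout matching D3DXMatrixPerspectiveFovLH.
    y_scale = 1.0 / math.tan(fovy * 0.5)  # cot(fovy / 2)
    x_scale = y_scale / aspect
    q = z_far / (z_far - z_near)
    return [
        [x_scale, 0.0,     0.0,          0.0],
        [0.0,     y_scale, 0.0,          0.0],
        [0.0,     0.0,     q,            1.0],
        [0.0,     0.0,     -q * z_near,  0.0],
    ]

# Guessed parameters: 40-degree vertical FOV, 16:9 aspect, arbitrary near/far planes.
proj = perspective_fov_lh(math.radians(40.0), 1280.0 / 720.0, 0.1, 1000.0)
```

Note that for unprojecting x and y, only the FOV and aspect matter; the near/far choice only affects how the depth term is interpreted.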

 

 

It cannot be done without the original projection matrix parameters, since the screen space position is a factor of the projection matrix used...
 

OK, disregard my previous question; I had missed that part.

Edited by Boulotaur2024

