Recreating eye vector from screenPos + matProj in post

Started by bandages
8 comments, last by turanszkij 5 years, 7 months ago

I'm looking to find the eye vector (the world-space normalized vector from the camera) to any given pixel on the screen, given the (-1,1) screen position and a full set of matrices (world, view, projection, and their inverses). It seemed to me that I couldn't just multiply the screen position by inverseProj, because proj isn't really reversible, and sure enough, I'm seeing some weird behavior suggesting that's the case (although I'm having a hard time figuring out how to "print debug" this in a way where I can be sure what's happening). I've done some Googling but haven't been able to find anything -- maybe it's a weird problem that nobody cares about, maybe it's obvious to everyone except me :)

This is kind of an idle question, because I know there are some other well-documented techniques (recovering the vector from a depth read, as in rebuilding world pos from a depth map) but for my purposes, recovering the eye vector without accessing depth would be preferable.

I'm working in HLSL, in DX9, in a closed source engine, but I don't imagine that matters.  I'm trying to create pseudo-geometry in post, concentric spheres centered on the camera, for playing with fogging techniques.  I want to get world position for various vectors at multiple, arbitrary depths and use those for simplex noise look-ups.  I'm just a hobbyist, and an out-of-practice one at that.  Any kind of help or pointing me to a source I missed would be appreciated.
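In case it helps clarify what I'm after, here's a rough sketch of how I'd use the ray once I had it (the shell distances and SimplexNoise are just placeholders of mine, not anything the engine provides):

// Given a normalized eye ray for a pixel, sample noise on concentric
// spheres ("shells") at arbitrary distances from the camera.
float FogFromShells(float3 cameraPos, float3 eyeVec)
{
    const float shellDist[3] = { 10.0f, 25.0f, 50.0f }; // made-up distances
    float fog = 0.0f;
    for (int i = 0; i < 3; ++i)
    {
        // world position on the i-th sphere centered on the camera
        float3 worldPos = cameraPos + eyeVec * shellDist[i];
        fog += SimplexNoise(worldPos); // placeholder noise lookup
    }
    return fog / 3.0f;
}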


So basically you want to retrieve the ray of each pixel in your pixel shader? I did it a bit differently to avoid computing heavy stuff on each pixel.

Since I'm using a deferred rendering technique, I simply calculate the ray directions of the 4 corners of the camera frustum on the CPU and pass them to my vertex shader during the light pass, letting the hardware interpolate them for the pixel shader. The light pass of a deferred renderer is basically just a big quad filling the screen, so it's easy to pull off in this case. So it would be useful to know if you are doing forward rendering or deferred.
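Roughly, the shader side looks like this (just a sketch with my own names; the cornerRay input is the per-vertex, CPU-computed corner direction):

struct VSOut
{
    float4 pos     : POSITION;
    float3 viewRay : TEXCOORD0; // interpolated across the full-screen quad
};

// Light-pass vertex shader: each of the 4 quad vertices carries the
// world-space ray to its frustum corner, computed once per frame on the CPU.
VSOut VSLightPass(float4 pos : POSITION, float3 cornerRay : TEXCOORD0)
{
    VSOut o;
    o.pos = pos;           // quad vertices are already in clip space
    o.viewRay = cornerRay; // the rasterizer interpolates this per pixel
    return o;
}

// In the pixel shader the interpolated ray only needs normalizing:
//     float3 eyeVec = normalize(input.viewRay);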

You should be able to multiply the screen-space position by the inverseViewProjection matrix, divide by .w, subtract from the camera position, and finally normalize. I haven't tested this just now, but it should look something like this:


float4 worldPos = mul(float4(screenPos.xy, 1, 1), inverseViewProjection); // unproject a point on the far plane
worldPos.xyz /= worldPos.w;                                               // perspective divide
float3 eyeVec = normalize(cameraPos - worldPos.xyz);                      // direction from that point back to the camera

I think that this is what you are looking for. But as ChuckNovice already mentioned, the frustum corners solution would be faster to compute.

I've got this in one of my HLSL utility files; you can use it to retrieve the world-space position of your pixel and then subtract it from the camera position, as turanszkij suggested:


//---------------------------------------------------------------------------------------
//		Convert from screen space to world space.
//---------------------------------------------------------------------------------------
inline float3 ScreenSpaceToWorldSpace(float depth, float2 screenCoordinate, float4x4 viewProjectionInverseMatrix) 
{
	float4 position = float4(screenCoordinate.x * 2.0f - 1.0f,
		-(screenCoordinate.y * 2.0f - 1.0f), depth, 1.0f);

	// transform to world space
	position = mul(position, viewProjectionInverseMatrix);
	position /= position.w;
	return position.xyz;
}

 

the "screenCoordinate.x * 2.0f - 1.0f" part is only to convert from 0,1 range to -1,1 as I'm actually working with values that come from texture coordinates there.

Thanks a lot! Got it working, but it required sending a 0 for scrPos.z (depth) instead of a 1. Afraid I don't understand why, if ChuckNovice's code is correct -- the normalized vector from the camera shouldn't change with depth.


	float4 worldPos = mul(float4(IN.ScrPos.xy, 0, 1), InverseViewProjMatrix);
	

Checked visually against writing eye manually for each object in the scene, no apparent difference.

Since it's a closed-source engine, I can't control what gets sent to the shader, so I'd have to recompute my frustum corners from my matrices in the shader if I wanted those. I'm doing forward rendering right now, but may adapt to somebody else's deferred renderer if my forward experiments look okay. I may check out their shader looking for recomputation of frustum corners, as last time I tried to do that, I must've screwed up my math somewhere.
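If I do end up recomputing corners in the shader, I imagine it would be something along these lines, just unprojecting each corner at the far plane with the inverse view-projection (untested, and the names are mine):

// Build a corner ray from its (-1,1) NDC position; left un-normalized here
// so the interpolated result can be normalized per pixel.
float3 CornerRay(float2 ndc, float4x4 inverseViewProj, float3 cameraPos)
{
    float4 farPos = mul(float4(ndc, 1.0f, 1.0f), inverseViewProj);
    farPos.xyz /= farPos.w;        // perspective divide
    return farPos.xyz - cameraPos; // camera-to-corner direction
}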

In case anyone's curious, here's a camera staring roughly down the positive Z, where I'm either sending z=1 or z=0.  The pimple pic is for z=1.  Positive/negative Z axis is where the difference is most apparent.

 

Edit: Oh, duh, I was only dividing worldPos.xy by w, not xyz.  There's my problem.  Thanks again!

[Attached screenshots: z1.png (the z=1 "pimple" version) and z0.png (z=0)]

25 minutes ago, bandages said:

Afraid I don't understand why, if ChuckNovice's code is correct-- normalized vector from camera shouldn't change with depth.

The code I provided is a generic function to reconstruct the world position of a pixel. You indeed don't need the depth if all you want is a ray.

8 minutes ago, ChuckNovice said:

The code I provided is a generic function to reconstruct the world position of a pixel. You indeed don't need the depth if all you want is a ray.

Sorry Chuck, see the edit.  I was getting different (normalized) vectors at different depths, but it wasn't a problem with your code, it was a letter I forgot to type :)

12 minutes ago, bandages said:

Sorry Chuck, see the edit.  I was getting different (normalized) vectors at different depths, but it wasn't a problem with your code, it was a letter I forgot to type :)

Yes, but you and turanszkij are both right in the sense that you don't need to consider the depth just to get the ray direction. You can safely always pass 1.0 instead of sampling the depth, unless you also plan to use the real pixel world position for something else in that same shader.
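Something like this, reusing the function above with a constant depth (just a sketch; cameraPosition is whatever camera constant you already have available):

// Ray-only variant: any constant depth works since we normalize anyway;
// 1.0f (the far plane) keeps the unprojected point well away from the camera.
float3 EyeRay(float2 screenCoordinate, float4x4 viewProjectionInverseMatrix, float3 cameraPosition)
{
	float3 worldPos = ScreenSpaceToWorldSpace(1.0f, screenCoordinate, viewProjectionInverseMatrix);
	return normalize(cameraPosition - worldPos); // flip the sign if you want camera-to-pixel
}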

Yes, this is usually also called "ray unprojection" from screen (projected) space to world space. I also took that piece from my "getPositionFromdDepthbuffer" helper function.

This topic is closed to new replies.
