Reconstructing pixel 3D position from depth

Quote:But I assume that if my random ray points towards (+1,0,0), the sample point will always be to the right of the original point, right?

Yeah, in *view space*.

Quote:But it still shaded itself, depending on the camera rotation.

I'll try to explain. Let's use your wall as an example, say the "blue" one in your figure, and say its normal is (0,0,1) in *world space*. If we look at the wall in direction (0,0,-1) (also in world space), the normal will be (0,0,1) in *view space* as well. If you rotate the camera, the normal will still be (0,0,1) in world space, but it will change in *view space*.

Now consider your ray. You set it to be a constant (1,0,0) in *view space*. But the view-space normals change as you rotate the camera, so the shading changes.

If you instead consider your ray to be (1,0,0) in *world space* and transform it to view space (i.e. it will change as the camera rotates), then the shading will be constant.
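In shader terms that transform is just a rotation by the view matrix. A minimal sketch, assuming a viewMatrix uniform that transforms world space to view space (the name is illustrative):

// Rotate a world-space direction into view space.
// Directions only need the rotation part of the matrix, hence the float3x3 cast.
float3 worldDirToView(float3 worldRay, float4x4 viewMatrix)
{
    return normalize( mul( (float3x3)viewMatrix, worldRay ) );
}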
Ah, of course. And that's why that test with the fixed vector was not correct. But the "error" I was talking about also happens when I render the normal way (with 16 random rays), and there the normals are correctly transformed to view space. I understand it doesn't really matter which direction the rays point in: as long as I have multiple rays in all directions, roughly 50% of them should pass the hemisphere test. The other 50% are discarded, or, as you do, flipped to the right side. However, in my case this percentage depends on the camera rotation.
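For reference, the "bring it to the other side" variant Dark_Nebula described could look something like this sketch, where ray and n are the view-space sample ray and pixel normal:

// Flip a sample ray into the hemisphere around the surface normal.
// If the ray points behind the surface (dot < 0), negate it so the
// sample still contributes instead of being discarded.
float3 flipToHemisphere(float3 ray, float3 n)
{
    return (dot(ray, n) < 0.0) ? -ray : ray;
}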

So far I have this shader:
void main( float2            iTex           : TEXCOORD0,
           float4            iViewDir       : TEXCOORD1,
           float4            iPos           : TEXCOORD2,
       out float4            oColor         : COLOR0,
   uniform sampler2D         sceneDepth,
   uniform sampler2D         noiseNormalMap,
   uniform half3             camPos         = V3_CAM_POS,
   uniform float4x4          projMatrix     = MPROJ )
{
    // Sample sphere: 16 points covering a sphere shape around the pixel
    // center coordinate. (Note: the second eight entries duplicate the
    // first eight.)
    const half  sampleCount = 16;
    const half3 sampleSpherePos[16] =
    {
        float3( 0.527837, -0.085868,  0.527837),
        float3(-0.040088,  0.536087, -0.040088),
        float3(-0.670445, -0.179949, -0.670445),
        float3(-0.419418, -0.616039, -0.419418),
        float3( 0.440453, -0.639399,  0.440453),
        float3(-0.757088,  0.349334, -0.757088),
        float3( 0.574619,  0.685879,  0.574619),
        float3( 0.038510, -0.939059,  0.038510),
        float3( 0.527837, -0.085868,  0.527837),
        float3(-0.040088,  0.536087, -0.040088),
        float3(-0.670445, -0.179949, -0.670445),
        float3(-0.419418, -0.616039, -0.419418),
        float3( 0.440453, -0.639399,  0.440453),
        float3(-0.757088,  0.349334, -0.757088),
        float3( 0.574619,  0.685879,  0.574619),
        float3( 0.038510, -0.939059,  0.038510)
    };

    // Get depth and normal
    float4 pixel  = f4tex2D( sceneDepth, iTex.xy );
    float  pDepth = pixel.r;    // depth as z/w
    float3 pNrm   = pixel.gba;  // eye-space normal

    // Reconstruct the eye/view-space position
    float3 ePos = pDepth * iViewDir.xyz;

    // Get a noise normal for the dithering later on
    half3 noise = h3tex2D( noiseNormalMap, iTex.xy * half2(4, 3) ); // scale depends on screen ratio
    noise = noise * 2.0 - 1.0; // unpack to a normal

    // Let the scale depend on the environment wideness
    half3 sampleScale = /*SSAO_params.zzw*/ half3(200, 200, 1)
        * saturate( pDepth / 5.0 )   // make the area smaller if the distance is less than 5 meters
        * (1.0 + pDepth / 8.0);      // make the area bigger with increasing distance

    float depthRangeScale = /*viewDist*/ 500 / sampleScale.z * 0.85; // (currently unused)
    sampleScale.xy *= 1.0 / pDepth;
    sampleScale.z  *= 2.0 / 500; // PS_NearFarClipDist.y

    // Loop through the samples
    half  pntCount = 0;
    float occl     = 0;

    for (half i = 0; i < sampleCount; i++)
    {
        // Generate a random new point around the center point (in eye space)
        float3 ray = normalize( reflect( sampleSpherePos[i].xyz, noise.xyz ) );
        float3 pnt = ePos + ray * sampleScale.xyz; // offset to a new eye-space point

        // Check if it's in the hemisphere. The normal must be in eye space as well!
        // Dark_Nebula suggested bringing the ray to the other side, but for testing
        // I just discard the points "behind" the normal so far.
        if ( dot( ray, normalize(pNrm) ) > 0.0 )
        {
            // Calculate the texture coordinates for this point:
            // bring the coordinate back into clip space, then remap to [0,1]
            float2 ss = ( pnt.xy / pnt.z ) * float2( projMatrix[0][0], projMatrix[1][1] );
            float2 sn = ss * 0.5 + 0.5;

            // Get the depth for this other point
            float pntDepth = f1tex2D( sceneDepth, sn );

            // The bigger the difference between the two depths, the smaller the addition
            float zDif = 50.0 * max( pDepth - pntDepth, 0.0 );
            occl += 1.0 / (1.0 + zDif * zDif);
            ++pntCount; // the average is "occl / pntCount"
        }
    }

    // Average
    occl /= pntCount;

    oColor.rgb = occl;
    oColor.a   = 1;
}
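One thing I'm not sure about is the projection step: clip space runs from -1 to 1 with y up, while D3D texture coordinates run from 0 to 1 with y down, so the lookup may need a y flip. A sketch of the mapping I mean, assuming a symmetric D3D-style perspective projection:

// Project a view-space point back to screen texture coordinates.
// Assumes projMatrix[0][0] and projMatrix[1][1] hold the x/y focal
// scales, and D3D texture conventions (origin top-left, y down),
// hence the y negation.
float2 viewPosToTexCoord(float3 viewPos, float4x4 projMatrix)
{
    float2 clip = ( viewPos.xy / viewPos.z ) * float2( projMatrix[0][0], projMatrix[1][1] );
    return float2( clip.x, -clip.y ) * 0.5 + 0.5;
}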

Sorry for the huge listing. How do you use those other code boxes with syntax highlighting?

Yet again, thanks for being patient with me :)
Rick

[Edited by - spek on December 8, 2007 4:19:30 AM]
Quote:Original post by hibread

Well, yes, "I" think you should have an issue... at least as far as my understanding goes. There is a thread here I started (the OpenGL forum maybe wasn't the most appropriate forum to post to... but) where I speak of this issue.

Basically it comes down to the fact that z doesn't necessarily have to be positive in OpenGL or negative in D3D. Since we are generally storing normals in view space but rendering pixels using projected view space, you can in a sense "look around corners" and see faces that actually point away in view space. This is amplified hugely by any normal mapping, since even a face whose normal points toward you can have pixels adjusted to face away.

I'm dumbfounded by this discovery though, really. Many sources say to use z = sqrt(1 - x^2 - y^2), yet "I believe" it just doesn't work perfectly in reality. And when I say not perfectly, I mean quite obvious lighting errors in certain circumstances.

Within the thread I linked above, Lord_Evil suggests polar coordinates (a spherical coordinate system). That's what I use, and it seems to work bloody fantastically: not only are the results accurate, but (at least in my implementation) they come at basically no expense, which is a surprise if you ask me. I'm comparing a standard 3x8-bit normal with packing/unpacking 2x16-bit variables for spherical coords using trig functions etc.


Very interesting... thank you for the heads-up. I'll have to look into this more when I actually have some time to tear apart my shaders again. I'd love to ask those Guerrilla programmers how they're getting around this issue, or why they don't consider it important. Spherical coordinates sound like they could do the trick, though.
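For reference, a spherical-coordinate pack/unpack might look something like the sketch below. The thread doesn't show the exact encoding hibread and Lord_Evil use, so this is just one common formulation (two angles remapped to [0,1], suitable for a 2x16-bit target):

// Pack a unit normal into two [0,1] values.
float2 packNormalSpherical(float3 n)
{
    const float PI = 3.14159265;
    // atan2 returns [-pi, pi], acos returns [0, pi]; remap both to [0, 1]
    return float2( atan2(n.y, n.x) / PI * 0.5 + 0.5,
                   acos(n.z) / PI );
}

// Unpack back to a unit normal.
float3 unpackNormalSpherical(float2 enc)
{
    const float PI = 3.14159265;
    float theta = (enc.x * 2.0 - 1.0) * PI;
    float phi   = enc.y * PI;
    return float3( cos(theta) * sin(phi),
                   sin(theta) * sin(phi),
                   cos(phi) );
}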

Quote:Original post by spek

Sorry for the huge listing. How do you use those other code boxes with syntax highlighting?

Use the "source" tags. :)
