Reconstructing pixel 3D position from depth

Alright, one more try with the DX dwarf:



Some awkward things:

1) ssao looks like it's calculated per face, not per pixel
2) the black edge around the mesh makes it look cel-shaded (not bad actually, but not desired here)
3) see the legs? There's a vertical white line going through the ssao output

[Edited by - FoxHunter2 on December 5, 2007 2:19:34 PM]
Quote:Original post by spek
That was "positive" thinking, it was indeed negative. I wonder if I would ever figured that out on my own. Well, thank God the depth seems to be correct now. Although I wonder what the difference is (first I multiplied the position with the modelViewProjection matrix instead of modelView).

Now the viewDirection. It's hard to tell whether it's alright or not. Like I posted before, if I output the viewDirection, I get these 4 colors on the screen:
viewDir = mul( ModelView, in.pos )       Z inverted: viewDir.z *= -1;

 green   | yellow                         aqua    | white
(0,1,-1) | (1,1,-1)                      (0,1,1)  | (1,1,1)
---------|---------                      ---------|---------
 black   | red                           blue     | purple
(0,0,-1) | (1,0,-1)                      (0,0,1)  | (1,0,1)

When I rotate the camera around a point, the "cross" tilts like this:
 \ |
  \|
   |
   | \

Sorry for this stunning ASCII art, but I'm trying to determine if the viewVector is correct. If it is, I can proceed :)

Thanks for the help hibread!
Rick


I'm getting a similar thing, except I get the aqua/white/blue/purple grid if I just try to output the viewDir. If I multiply by depth I get the same colors, but in a gradient. If I then add the camera position I get the green/yellow/black/red type grid, but it moves with the camera (it still seems to be view-aligned instead of world-aligned).

I actually got the other method working: taking the projected depth, building the projected point, and multiplying by the inverse view-projection matrix to get back to a world-space position. I wanted to avoid the extra math of multiplying by the inverse matrix every time I need the position, though (which can be often once I get into effects like depth of field and volumetrics).
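For reference, a minimal sketch of that inverse view-projection approach (names like invViewProj, screenTC, and depthMap are my placeholders, not anyone's actual code):

// Sketch: rebuild a world-space position from a depth buffer storing post-projection z/w.
float  depth    = tex2D( depthMap, screenTC ).r;
float2 ndc      = screenTC * float2( 2, -2 ) + float2( -1, 1 );  // texture -> NDC (D3D y flip)
float4 projPos  = float4( ndc, depth, 1.0 );
float4 worldPos = mul( projPos, invViewProj );                   // row-vector convention
worldPos.xyz   /= worldPos.w;                                    // undo the perspective divide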
Quote:Original post by FoxHunter2
I also wonder why the result changes when I change my FarPlane from like 1000 to 4 and vice versa (for testing purposes right now). Seems like there is a dependency between (radius,scale) and the FarPlane, or my code is just wrong.


This should happen, because you're making comparisons with normalized depths. When you're taking all those samples from your depth buffer, which is in the range [0, 1], those depths really have no meaning without being un-normalized (i.e., multiplied by the distance to the far clip plane). If you want, you could set distanceScale to the distance to your far clip plane, which would cause the depth comparison to be made in regular eye-space.
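In code, that un-normalization is just a multiply (a sketch; farClip and centerDepth are assumed names for the far plane distance and the center pixel's stored depth):

// Sketch: bring [0,1] depths back into eye-space units before comparing.
float sampleDepth = tex2D( depthMap, sampleTexCoord ).r * farClip;
float pixelDepth  = centerDepth * farClip;
float zDif        = pixelDepth - sampleDepth;   // now a real eye-space distance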

Quote:Original post by FoxHunter2

1) ssao looks like it's calculated per face, not per pixel


Indeed it does...have you checked to ensure your depth buffer looks right? It looks as though you might be rendering the same depth per face.
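As a sanity check, a depth pass that produces per-pixel (not per-face) depth could look roughly like this; this is my illustration, not FoxHunter2's actual code, and constant names like worldView, proj, and farClip are assumptions:

// Sketch: write linear eye-space depth, interpolated per pixel, to an R32F target.
float4x4 worldView;
float4x4 proj;
float    farClip;

struct VS_OUT
{
    float4 pos   : POSITION;
    float  depth : TEXCOORD0;
};

VS_OUT DepthVS( float4 pos : POSITION )
{
    VS_OUT o;
    float4 viewPos = mul( pos, worldView );  // eye-space position (row-vector convention)
    o.pos   = mul( viewPos, proj );
    o.depth = viewPos.z / farClip;           // normalized linear depth
    return o;
}

float4 DepthPS( VS_OUT i ) : COLOR
{
    return float4( i.depth, 0, 0, 1 );       // interpolated per pixel, never face-constant
}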

Quote:Original post by MJP
Indeed it does...have you checked to ensure your depth buffer looks right? It looks as though you might be rendering the same depth per face.


The depth buffer (R32F) looks good and smoothly interpolated; at least no faces are visible like in the SSAO output. I think it's the SSAO shader that's doing something wrong, although I'm just using your code.

Okay, something I found out: using too low a sampleRadius (like 0.001) causes floating-point inaccuracies, which result in bad sampleTexCoord values.
Using a higher sampleRadius fixes this, but it creates a halo around the model. Playing around with both parameters mitigates it somewhat, however.

[Edited by - FoxHunter2 on December 5, 2007 5:12:58 PM]
@MJP & Hibread
The disco viewVector is gone; now I have a steady color (by calculating the far plane in eye-space on the CPU). I don't understand why the multiplication with the modelview matrix isn't working. Come to think of it, it's not the first time I've been stuck on that matrix, but I've never dug deep into it. That's more a question for the OpenGL section, I think.

As for the positions / SSAO, I think I finally have the right points. I can create an "aura" around the pixels by rendering the surrounding depths. That is only possible if the re-calculated clip-space position (is that a position in pixel units?) is correct. So I'm happy that it seems to work. In fact, I would send a kiss if I was a girl :)

There is one thing I wonder though. I "generate" a new point around the original point like MJP did:
// Calculate texcoords for getting the depth
float4 clipPoint = mul( projMatrix, float4( newPoint, 1 ) );
float2 tx = 0.5 * clipPoint.xy / clipPoint.w + float2( 0.5, 0.5 );
tx.y = 1 - tx.y; // I'll have to flip both axes, could that be correct?
tx.x = 1 - tx.x;

The article I posted earlier seems to do it more simply:
// pnt is still in eye-space
float2 ss = ( newPoint.xy / newPoint.z ) * float2( 0.75, 1.0 );
float2 tx = ss * 0.5 + 0.5;
float depth = tex2D( depthMap, tx );

// OR ALTERNATIVELY
float3 ss = se.xyz * vec3( 0.75, 1.0, 1.0 );
float depth = tex2DProj( depthMap, ss * 0.5 + ss.z * vec3( 0.5 ) );

Neither solution is working. Well, the texture coordinates aren't completely wrong, but they seem to move faster than the original ones (as if these newly generated points are closer, or zoomed in).


The SSAO is finally working a little bit, although it looks a lot more like cartoon shading now. It works like an "outline" / edge detector shader now (like the teapot from FoxHunter2's screenshot). And distorting it with a noise texture just makes the entire screen... 'crispy'. I haven't blurred it yet, though, and the depth is still in a 16-bit texture. Maybe FoxHunter2 has found some improvements? You seem to have the same problems. Especially the outlines are a little bit weird:
float zDif = 50.0 * max( originalDepth - generatedPointDepth, 0.0 );
occlusion += 1.0 / (1.0 + zDif * zDif);
...
occlusion /= sampleCount;

The 'max' function should discard pixels "behind" the original point, right? The sky should be way behind... However, if I understand it right, a flat surface also occludes itself if you are not looking at it straight on (which makes the occlusion view dependent as well). Maybe I should also know the pixel normal and create a half-sphere of sample points on top of it, instead of just generating x points around the pixel. Half of the points are a waste of time anyway, since they are behind the surface. Right? But I didn't see the article doing that, and its results were pretty good.

Everybody thanks for helping!
Rick

[Edited by - spek on December 5, 2007 6:36:00 PM]
Quote:The article I posted earlier seems to do it more simply:

Since we are not interested in the z coordinate, it's possible to simplify the projection. The author also assumes a fov with a half-angle of pi/4 (a 90-degree vertical fov) and an aspect of 4:3.

Doing the perspective multiplication will show how:

[x', y', z', w'] = [x,y,z,1] * P = [xa, yb, zc+d, z]

where P is the projection matrix in standard D3D form:
| a 0 0 0 |
| 0 b 0 0 |
| 0 0 c 1 |
| 0 0 d 0 |

Going from homogeneous coordinates, we divide by w' => [xa/z, yb/z, (zc+d)/z]
But we're only interested in [x',y'] = [xa/z, yb/z]

So we can compute it just as ([x,y] * [a,b]) / z, which is what the author does.
He simply chose a = 0.75 and b = 1, matching the assumed fov and aspect.
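Worked through for those numbers (assuming the standard D3D perspective matrix, where b = cot(fovY/2) and a = b / aspect):

b = cot( fovY/2 ) = cot( pi/4 ) = 1
a = b / aspect = 1 / (4/3) = 0.75

So b = 1 corresponds to a 90-degree vertical fov.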

Quote:It works like an "outline" / edge detector shader now (like the teapot from FoxHunter2's screenshot). And distorting it with a noise texture just makes the entire screen... 'crispy'.

Yeah, random sampling is necessary to make it look good. Blurring is tricky, however :)

Quote:Maybe I should also know the pixel normal and create a half-sphere of sample points on top of it, instead of just generating x points around the pixel.

This is what I do. It removes self-occlusion and makes sure all samples are put to good use. I simply check if the point is in the hemisphere around the normal; if not, I just move the point radius * some constant units along the normal. It of course introduces some additional cost, so if anyone has a better idea I would love to hear it.
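In shader terms, that check-and-move could look roughly like this (my sketch of the idea above, not the poster's actual code; sampleOffsets, centerPos, normal, radius, and k are assumed names):

// Sketch: keep a sample inside the hemisphere around the surface normal.
float3 samplePos = centerPos + sampleOffsets[i] * radius;
if ( dot( sampleOffsets[i], normal ) < 0.0 )
    samplePos += normal * radius * k;   // "radius * some constant" units along the normal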

Hope this helps.
>> He just choose a=0.75 and b=1, giving the fov and aspect asumed.
Then probably these factors are wrong in my case. Like I said, the coordinates I get are not totally wrong, but the perspective seems to be slightly different which will shift the coordinates. Hard to explain what I exactly mean :) I don't see how these factors are related to the fov though... I tried a square screen (so ratio = 1) with a=1,b=1, but that didn't work.

I wonder how the article got it looking good; it's not doing any tricks to avoid "self-occluding", unless it's in a different pass. If I apply the same trick, it looks as if I'm drunk, seeing everything double. I use dithering like this:
float3 noise = 2 * tex2D( noiseMap, texcoords ) - 1;
...
// generate samplepoint
pnt = originalPnt + reflect( lookUpOffset, noise ) * scale;

The offset array is just a list of (fixed) points around the original point. Noise is a normal from a noisy normal map. The texture coordinates don't change, so I see the same noise pattern all over the screen. Probably once flat surfaces no longer occlude themselves, this problem will be gone.
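For what it's worth, a common way to vary the pattern per pixel (an assumption on my part, not something from the article) is to tile the small noise texture across the screen by scaling the texcoords; screenSize and noiseSize are assumed shader constants:

// Sketch: tile a small noise texture (e.g. 4x4) so neighboring pixels get different normals.
float2 noiseTC = texcoords * ( screenSize / noiseSize );
float3 noise   = 2 * tex2D( noiseMap, noiseTC ).xyz - 1;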


So, the first thing I should try is using the normal to create a hemisphere of sample points, like you said. But... how to do that? I have the (world) normal in my shader. I could figure out something with cos and sin stuff, but there's probably a far simpler way with vector math.

Other boys&girls here that experience this problem, or even better, fixed it?
Greetings,
Rick
Quote:I don't see how these factors are related to the fov though...

They are related to your view frustum and hence also to fov. See here for clarification.

Just use your projection matrix to set a and b. I.e. a = projection[0][0], b = projection[1][1].
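Put together, that lookup could be sketched like this (sign conventions depend on the API, since eye-space z may be negative in front of the camera):

// Sketch: eye-space point -> depth map texcoord using only the projection diagonal.
float2 ab = float2( projMatrix[0][0], projMatrix[1][1] );
float2 ss = ( pnt.xy / pnt.z ) * ab;            // simplified perspective projection
float2 tx = ss * float2( 0.5, -0.5 ) + 0.5;     // NDC -> texture space (flip y if needed)
float  sampleDepth = tex2D( depthMap, tx ).r;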

Quote:So, the first thing I should try is using the normal to create a hemisphere of sample points, like you said. But... how to do that?

Use the dot product. If the dot product between your normal and the vector you use to derive the sampling position is less than zero, the angle between the vectors is greater than 90 degrees, and hence the sampling point is not in the hemisphere.

Quote:I wonder how the article got it looking good

Yeah, it's hard to get it to look good. Lots of parameter tweaking :) I've found that you should keep the radius quite small; this effect naturally works best with local occlusion.
Of course, the dot product. I must say I created some nice artistic effects along the way, but of course that is not what I want for SSAO. What I do now is this:
const half3 sampleSpherePos[16] = {
    float3( 0.527837, -0.085868, 0.527837 ),
    float3( -0.040088, 0.536087, -0.040088 ),
    ....
    float3( 0.03851, -0.939059, 0.03851 )
};

float3 dir = normalize( reflect( sampleSpherePos[i].xyz, noise.xyz ) ); // index per sample
// Offset
float3 pnt = originalPos + sampleScale.xyz * dir;   // Generate new eye-space point

// Check if it's in the hemisphere
half inside = ( dot( dir, pixelNormal ) > 0.1 );
if (inside == 1)
{
    .. add sample
}

This kicks out ~50% of the samples, but it's not correct, most probably because the normal is in world space while the positions are in eye space (and thus the directions as well, right?). I tried multiplying the normal by the ModelView matrix, but that isn't working.
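A sketch of how that transform usually needs to look (assuming no non-uniform scaling, otherwise the inverse-transpose is required): a normal is a direction, so only the rotation part of the matrix applies, and since it is already in world space, the view matrix alone is the one that takes it to eye space. viewMatrix and worldNormal are assumed names:

// Sketch: bring a world-space normal into eye space as a direction (w = 0).
// Using the full modelview would apply the model transform a second time.
float3 eyeNormal = normalize( mul( (float3x3)viewMatrix, worldNormal ) );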

Is there also a way to create the hemisphere positions with the help of the normal? Right now I skip ~50% of them. That speeds up the shader, but it would be more efficient (if the calculation is simple) if all the positions were in that hemisphere.
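One standard alternative (my suggestion, not something from the article discussed here) is to flip any sample that points away from the normal, so all 16 samples land in the hemisphere and none are skipped:

// Sketch: mirror back-facing sample directions instead of discarding them.
float3 dir = normalize( reflect( sampleSpherePos[i].xyz, noise.xyz ) );
if ( dot( dir, pixelNormal ) < 0.0 )
    dir = -dir;                                  // flip into the hemisphere
float3 pnt = originalPos + sampleScale.xyz * dir;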


Calculating texture coordinates with the factors from the projection matrix works! Thanks for that; I guess it makes the shader somewhat faster, especially when doing it for 16 samples.

Pff, almost every step of this shader is causing problems for me. On the bright side, it's the most advanced shader I've made so far, so I'm learning a lot now :)

Greetings,
Rick

This topic is closed to new replies.
