
## World-space reconstruction

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

### #1 Kaptein (Prime Members)

Posted 25 May 2014 - 06:46 PM

I'm officially stumped. I've tried any and every way there is, even the bad ones.

I have a special setup where I write the ACTUAL view-space depth from my shaders to a texture, so what we have is (R, G, B, DEPTH). The texture is 16f, so no issues with precision, especially considering I'm using this value in many other places.

Anyways,

following the article here: http://www.opengl.org/wiki/Compute_eye_space_from_window_space

or just about any other article on the subject, everything I try works. Yes, it works, but the edges are... imprecise. Basically, the further from the center of the camera, the worse it gets. But not by much.

The eye_direction vector is set up like this, in the fullscreen vertex shader:

```glsl
eye_direction = vec4(gl_Position.xy * nearPlaneHalfSize, -1.0, 1.0);
```

or, in other words:

```glsl
eye_direction = vec4((texCoord.xy * 2.0 - 1.0) * nearPlaneHalfSize, -1.0, 1.0);
```

if you will.

nearPlaneHalfSize is as per the article:

```cpp
this->FOV = config.get("frustum.fov", 61.0f);
// ...
float halfTan = tan(this->FOV * pio180 / 2.0);
nearPlaneHalfSize = vec2(halfTan * wnd.SA, halfTan);
```
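For reference, the same setup can be sketched in plain C++ (the function and struct names are mine; `wnd.SA` in the post is assumed to be the window's aspect ratio):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Half-extents of the view frustum at unit distance from the eye,
// for a vertical field of view given in degrees.
Vec2 computeNearPlaneHalfSize(float fovDegrees, float aspect)
{
    const float pio180 = 3.14159265358979f / 180.0f;  // degrees to radians
    float halfTan = std::tan(fovDegrees * pio180 / 2.0f);
    return { halfTan * aspect, halfTan };
}
```

With a 90-degree FOV and square aspect, both half-extents come out as tan(45°) = 1, which is a quick sanity check for the formula.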


And in the fragment shader, reconstructing coordinates as seen from camera:

    // reconstruct eye coordinates
vec4 cofs = eye_direction * matview; // transpose mult
cofs.xyz *= depth * ZFAR;


Now all we have to do is remove camera position and that's it.

Except... it's ALMOST there.

Looking at other people's reconstructions, I see that this works beautifully.

So, considering that my depth value is in view-space, is this the problem?

Because I believe depth-texture values are usually flat against the camera plane, while my depth value is... non-flat.
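That distinction is easy to pin down numerically (function names are mine): planar depth is the same for every pixel on a wall facing the camera, while radial depth, the distance along the view ray, grows toward the screen edges by a factor of 1/cos of the ray angle. Mixing the two up produces exactly an error that worsens away from the screen center.

```cpp
#include <cmath>

// Distance to the camera plane: identical for all pixels at the same z.
float planarDepth(float x, float y, float z) { return -z; }

// Distance along the view ray: larger for off-centre pixels.
float radialDepth(float x, float y, float z) { return std::sqrt(x*x + y*y + z*z); }
```

For a view-space point (3, 4, -12) the planar depth is 12 but the radial depth is 13, an 8% difference at that angle, which is roughly the magnitude of "wrong, but not by much".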

Edited by Kaptein, 25 May 2014 - 06:47 PM.

### #2 Ashaman73 (Members)

Posted 25 May 2014 - 11:03 PM

First, try to increase the precision of the depth buffer. 16f is really not much (I use 16f render targets too). Either use a 32f depth buffer, or use at least two 16f channels. Use some simple encoding/decoding for the two channels, something like this:

```glsl
vec2 encode(float depth) {
    vec2 result = vec2(depth) * 2048.0;
    result.x = floor(result.x); // integer part of depth * 2048
    result.y = fract(result.y); // fractional part of depth * 2048
    return result;
}
// note: 'input' is a reserved word in GLSL, so the parameter is renamed here
float decode(vec2 enc) {
    return dot(enc, vec2(1.0 / 2048.0));
}
```

You can combine the 2 channel depth with a 2 channel, compressed normal buffer into a single 4x16f render target.
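A host-side port of that packing, for checking the round trip (a pair of floats standing in for the vec2; names are mine):

```cpp
#include <cmath>

struct Packed2 { float hi, lo; };

// Integer part of depth*2048 in one channel, fractional part in the other.
Packed2 encodeDepth(float depth)  // depth expected in [0, 1]
{
    float scaled = depth * 2048.0f;
    return { std::floor(scaled), scaled - std::floor(scaled) };
}

// Sum both channels back and divide by 2048, as dot(enc, vec2(1.0/2048.0)) does.
float decodeDepth(Packed2 p)
{
    return (p.hi + p.lo) * (1.0f / 2048.0f);
}
```

The round trip recovers the input to well within a 16f channel's precision, since each channel now only has to hold a number in a narrow range.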

Edited by Ashaman73, 25 May 2014 - 11:05 PM.

Ashaman

### #3 JohnnyCode (Members)

Posted 26 May 2014 - 09:25 AM

There are two types of depth: the z depth and the w depth. The z is non-linear after the perspective divide, while the w is linear (the non-linear z is produced by dividing by the linear w).

Try multiplying the view matrix with the projection matrix by hand and look at the 4th row of the result, the row that constructs w. You will notice it is just the third row of the view matrix. Write the view matrix with rows [a1, b1, c1, p1], [a2, b2, c2, p2], [vx, vy, vz, vp], [0, 0, 0, 1], and note that the projection matrix's 4th row is [0, 0, 1, 0]. If we consider column matrices and the multiplication order (A*B)(v) = B(A(v)), then each component of the result's 4th row is a view-matrix column dotted with that projection row:

first component of the 4th row of the result: [a1, a2, vx, 0] . [0, 0, 1, 0] = vx

second component: [b1, b2, vy, 0] . [0, 0, 1, 0] = vy

third component: [c1, c2, vz, 0] . [0, 0, 1, 0] = vz

fourth component: [p1, p2, vp, 1] . [0, 0, 1, 0] = vp

Thus the bottom 4th row of the view*projection matrix is actually the third row of the view matrix.

When a vector gets transformed by this matrix, it therefore remembers its view-space z in its final w component. That is the linear z, the w component; and with the help of device z, near, far, and view z, the final vector is perspective-divided.

### #4 Kaptein (Prime Members)

Posted 26 May 2014 - 12:53 PM

> First try to increase the precision of the depth buffer. 16f is really not much (I too, use 16f render targets). Either use a 32f depth buffer, or use atleast two 16f channels. Use some simple encoding/decoding for 2 channels, something like this:
>
>     vec2 encode(float depth) {
>         vec2 result = vec2(depth)*2048.0;
>         result.x = floor(result.x);
>         result.y = fract(result.y);
>         return result;
>     }
>     float decode(vec2 input) {
>         return dot(input,vec2(1.0/2048.0));
>     }
>
> You can combine the 2 channel depth with a 2 channel, compressed normal buffer into a single 4x16f render target.

I might just implement that, because that would leave me 2 extra channels I could put stuff in... I guess.

The thing is though, the view range isn't all that big. I think 1024 at the highest setting I've tried.

My "precision loss", as in, what I can visibly see is wrong, is coming from window-space. Basically the further the fragment (x, y) is away from center of screen, the worse it gets, again, not by much though.

It's enough to derail my super-cheap ground fog project. I don't have nearly enough GPU or CPU resources to do anything fancy in my game, so I have to use whatever I have left.

The fog is basically just reconstructing the (x, y, z), then applying a 2x simplex3D and hoping it looks nice.

The 3D nature of the thing does make for some general issues, but they are not related. I have verified the problem by scaling the (x, y, z) so that I can see colors at various angles and fragment positions, and from there verified that indeed the reconstruction is not completely right.

All this still doesn't mean my depth value is not simple precision loss. There may have been a time where I believed the implausible to be impossible. Not so much anymore.

Are renderbuffer depthbuffers still faster than depth textures? Because I gained a (slight) speed-up from not using a depth-texture in my FBOs.

Edited by Kaptein, 26 May 2014 - 01:01 PM.

### #5 JohnnyCode (Members)

Posted 26 May 2014 - 06:31 PM

Forget about your precision being too low! It is just enough (16 bit).

### #6 JohnnyCode (Members)

Posted 26 May 2014 - 06:53 PM

Oh damn, I really didn't wanna kill a man ()

You're close though:

proj*(view*(v)) = v_clip... then v_clip * 1.0/v_clip.w... those rasterized coordinates are fed to the depth test and displayed by the rasterizer.

### #7 JohnnyCode (Members)

Posted 26 May 2014 - 07:02 PM

(world*view*proj)*(objectSpaceV) = (raster)?

(raster?) * 1.0/raster.w = the depth-tested component fed to the pixel function

You may test it by reconstructing the texture coordinate from the G-buffer you rendered (right?)

### #8 Kaptein (Prime Members)

Posted 27 May 2014 - 10:00 AM

I don't read from a depth texture, JohnnyCode (if I understood you right).

But... in the end I gave up, and just used the shitty depth texture after all. Now I've lost a lot of FPS, because a depth renderbuffer is just a lot faster.

It's a lot of FPS to give up when you're rendering basically the entire scene.

Anyways,

in conclusion, without raymarching the "ground" fog, it won't look all that good.

I ended up combining cheap-ass fog-depth (intersection with fog-height and ground) + simplex3d + smoothing top and bottom.
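That "fog-depth" term can be sketched as the length of the view ray inside a height-bounded fog slab, clipped by the ground hit. A hypothetical C++ version (not the author's exact code; slab spans heights 0 to fogTop, and rayLength is the distance to the ground hit):

```cpp
#include <algorithm>
#include <cmath>

// Length of the segment of a view ray that lies inside the fog slab
// [0, fogTop], starting at camera height camY with vertical direction
// component dirY, clipped at rayLength (the ground/geometry hit).
float fogDistance(float camY, float dirY, float rayLength, float fogTop)
{
    float t0 = 0.0f, t1 = rayLength;
    if (std::fabs(dirY) > 1e-6f) {
        float tBottom = (0.0f   - camY) / dirY;  // crossing y = 0
        float tTop    = (fogTop - camY) / dirY;  // crossing y = fogTop
        t0 = std::max(t0, std::min(tBottom, tTop));
        t1 = std::min(t1, std::max(tBottom, tTop));
    } else if (camY < 0.0f || camY > fogTop) {
        return 0.0f;  // horizontal ray entirely outside the slab
    }
    return std::max(0.0f, t1 - t0);
}
```

For example, a camera at height 10 looking straight down through a slab of height 5, with the ground 10 units away, spends 5 units of the ray inside the fog; a camera inside the slab looking horizontally accumulates fog along the full ray.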

http://fbcraft.fwsnet.net/deferredfog.png

Edited by Kaptein, 27 May 2014 - 10:03 AM.
