
OpenGL World-space reconstruction


Kaptein

I'm officially stumped. I've tried any and every way there is, even the bad ones.

I have a special setup where I write the ACTUAL view-space depth from my shaders to a texture, so what we have is (R, G, B, DEPTH). The texture is 16f, so no issues with precision, especially considering I'm using this value in many other places.

 

Anyways,

following the article here: http://www.opengl.org/wiki/Compute_eye_space_from_window_space

or just about any other article on the subject, everything I do almost works. Yes, it works, but the edges are... imprecise. Basically, the further from the center of the screen, the worse it gets. But not by much.

 

The eye_direction vector is set up like this, in the fullscreen vertex shader:

eye_direction = vec4(gl_Position.xy * nearPlaneHalfSize, -1.0, 1.0);

or, in other words

eye_direction = vec4((texCoord.xy * 2.0 - 1.0) * nearPlaneHalfSize, -1.0, 1.0);

if you will.

 

nearPlaneHalfSize is as per the article:

        this->FOV   = config.get("frustum.fov", 61.0f);

        ...

        float halfTan = tan(this->FOV * pio180 / 2.0);
        // wnd.SA: presumably the screen aspect ratio (width / height)
        nearPlaneHalfSize = vec2(halfTan * wnd.SA, halfTan);
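
For completeness, the whole fullscreen-pass vertex shader boils down to something like this (a sketch only; in_position is a placeholder name and the quad is assumed to already be in NDC):

#version 130

in vec2 in_position;              // fullscreen quad corner, already in NDC [-1, 1]
uniform vec2 nearPlaneHalfSize;   // vec2(tan(fov/2) * aspect, tan(fov/2))

out vec2 texCoord;
out vec4 eye_direction;

void main()
{
    gl_Position   = vec4(in_position, 0.0, 1.0);
    texCoord      = in_position * 0.5 + 0.5;
    // view-space ray through this pixel, on the z = -1 plane
    eye_direction = vec4(in_position * nearPlaneHalfSize, -1.0, 1.0);
}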

And in the fragment shader, reconstructing coordinates as seen from camera:

    // reconstruct eye coordinates
    vec4 cofs = eye_direction * matview; // transpose mult
    cofs.xyz *= depth * ZFAR;

Now all we have to do is remove camera position and that's it.

Except.. It's ALMOST there.
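
Pieced together, the fragment-shader side I mean looks roughly like this (a sketch; colorDepthTex, cameraPos and fragColor are placeholder names, and depth is assumed to be the linear view-space depth divided by ZFAR):

#version 130

uniform sampler2D colorDepthTex; // (R, G, B, view-space depth / ZFAR)
uniform mat4  matview;           // world -> view matrix
uniform vec3  cameraPos;         // camera position in world space
uniform float ZFAR;

in vec2 texCoord;
in vec4 eye_direction;
out vec4 fragColor;

void main()
{
    float depth = texture(colorDepthTex, texCoord).a;

    // rotate the view-space ray back into world space;
    // multiplying the vec4 on the left is the same as using the transpose,
    // which is the inverse of the rotation part of a rigid view matrix
    vec3 ray = (eye_direction * matview).xyz;

    // camera-relative offset, then the absolute world position
    vec3 cofs     = ray * depth * ZFAR;
    vec3 worldPos = cameraPos + cofs;   // or subtracted, depending on convention

    fragColor = vec4(worldPos, 1.0);    // visualize for debugging
}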

 

Looking at other people's reconstructions, I see that this works beautifully.

So, considering that my depth value is in view-space, is this the problem?

Because I believe depth-texture values are usually flat against the camera plane, while my depth value is... non-flat.
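
If the stored value really is the radial distance (the length of the view-space position) rather than the planar -z, the reconstruction above overshoots more and more towards the screen edges, which matches the symptom. A sketch of the correction, reusing the names from the fragment shader above and assuming depth = length(viewPos) / ZFAR was written in the geometry pass:

// eye_direction.xyz is the ray through this pixel at view-space z = -1,
// so its length grows towards the corners of the screen
float radial = texture(colorDepthTex, texCoord).a * ZFAR;

// Option 1: convert the radial distance back to a planar depth (-z in view space)
float planar  = radial / length(eye_direction.xyz);
vec3  viewPos = eye_direction.xyz * planar;

// Option 2: equivalently, walk along the normalized ray
// vec3 viewPos = normalize(eye_direction.xyz) * radial;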

Ashaman73

First try to increase the precision of the depth buffer. 16f is really not much (I use 16f render targets too). Either use a 32f depth buffer, or use at least two 16f channels. Use some simple encoding/decoding for the two channels, something like this:

vec2 encode(float depth) {
   // split depth into an integer part and a fractional part, both scaled by 2048
   vec2 result = vec2(depth) * 2048.0;
   result.x = floor(result.x);
   result.y = fract(result.y);
   return result;
}
float decode(vec2 encoded) { // "input" is a reserved word in GLSL, so the parameter is renamed
   // (floor + fract) / 2048 recovers the original depth
   return dot(encoded, vec2(1.0 / 2048.0));
}

You can combine the 2 channel depth with a 2 channel, compressed normal buffer into a single 4x16f render target.
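
Something like this, reusing the encode()/decode() above (a sketch; the normal compression shown here, storing only x/y and reconstructing a camera-facing z, is just one simple option):

// Pack: RG = two-channel depth, BA = view-space normal.xy
vec4 packGBuffer(float depth, vec3 viewNormal)
{
    return vec4(encode(depth), viewNormal.xy);
}

void unpackGBuffer(vec4 data, out float depth, out vec3 viewNormal)
{
    depth         = decode(data.rg);
    viewNormal.xy = data.ba;
    // view-space normals usually point towards the camera, so z >= 0 is assumed
    viewNormal.z  = sqrt(max(0.0, 1.0 - dot(data.ba, data.ba)));
}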

JohnnyCode

There are two types of depth: the z depth and the w depth. The z depth is non-linear after the perspective divide, while the w depth is linear (the non-linear z is produced from the linear w).

Try multiplying the view matrix with the projection matrix by hand and look at the 4th row of the result, the row that constructs w. You will notice that it simply contains the third row of the view matrix:

 

view matrix:

[ a1  b1  c1  p1 ]
[ a2  b2  c2  p2 ]
[ vx  vy  vz  vp ]
[  0   0   0   1 ]

projection matrix, 4th row (the w-constructing row; the rest does not matter here):

[ 0  0  1  0 ]

If we consider column matrices and the multiplication order (A*B)(v) = B(A(v)), then each component of the 4th row of the result is a column of the view matrix multiplied with that projection row.

first component of the 4th row of the result:
[a1, a2, vx, 0] . [0, 0, 1, 0] = vx

second component of the 4th row of the result:
[b1, b2, vy, 0] . [0, 0, 1, 0] = vy

third component of the 4th row of the result:
[c1, c2, vz, 0] . [0, 0, 1, 0] = vz

fourth component of the 4th row of the result:
[p1, p2, vp, 1] . [0, 0, 1, 0] = vp

Thus, the bottom (4th) row of the combined view*projection matrix is actually the third row of the view matrix.

 

If a vector gets transformed by this matrix, it actually remembers its view-space z in the final w component. That is the linear z, the w component, and it is what the final vector gets post-divided by (which is how the non-linear device z is produced from near, far and the view z).
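
In shader terms it looks like this (a sketch; matview, matproj and worldPos are placeholder names; note that with the standard OpenGL projection the w-constructing row is [0, 0, -1, 0], so the sign is flipped):

uniform mat4 matview;
uniform mat4 matproj;

void inspectDepth(vec3 worldPos)
{
    vec4 viewPos = matview * vec4(worldPos, 1.0);
    vec4 clipPos = matproj * viewPos;

    // clipPos.w carries the linear view-space depth (-viewPos.z in OpenGL)
    float linearDepth    = clipPos.w;
    // clipPos.z / clipPos.w is the non-linear value that lands in the depth buffer
    float nonLinearDepth = clipPos.z / clipPos.w;
}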

 

 

Kaptein

(quoting Ashaman73's suggestion above about increasing depth precision by packing it into two 16f channels)

 

I might just implement that, because it would leave me two extra channels I could put stuff in, I guess.

 

The thing is though, the view range isn't all that big. I think 1024 at the highest setting I've tried.

My "precision loss", as in, what I can visibly see is wrong, is coming from window-space. Basically the further the fragment (x, y) is away from center of screen, the worse it gets, again, not by much though.

It's enough to derail my super-cheap ground fog project. I don't have nearly enough GPU or CPU resources to do anything fancy in my game, so I have to use whatever I have left.

 

The fog is basically just reconstructing the (x, y, z), then applying a 2x simplex3D and hoping it looks nice.
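
In shader terms, something like this (a sketch; snoise() is an assumed 3D simplex-noise function, fogColor is an assumed uniform, and the scales/weights are made up):

uniform vec3 fogColor;

// two octaves of 3D simplex noise over the reconstructed world position
float fogDensity(vec3 worldPos)
{
    float n = 0.6 * snoise(worldPos * 0.02)
            + 0.4 * snoise(worldPos * 0.07);
    return clamp(n * 0.5 + 0.5, 0.0, 1.0);
}

// usage: color.rgb = mix(color.rgb, fogColor, fogDensity(worldPos));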

The 3D nature of the thing does make for some general issues, but they are not related. I have verified the problem by scaling the (x, y, z) so that I can see colors at various angles and fragment positions, and from there confirmed that the reconstruction is indeed not completely right.

 

All this still doesn't mean my depth value isn't suffering from simple precision loss. There may have been a time when I believed the implausible to be impossible. Not so much anymore.

 

EDIT: An additional question.

Are renderbuffer depth buffers still faster than depth textures? Because I gained a (slight) speed-up from not using a depth texture in my FBOs.

JohnnyCode

Oh damn, I really didn't wanna kill a man.

 

You're close though.

 

 

proj * (view * v) = v', then v' * (1.0 / v'.w) gives the rasterized coordinates that are fed to the depth test and displayed by the rasterizer.

JohnnyCode

(world * view * proj) * (objectSpaceV) = raster?

raster * (1.0 / raster.w) = the depth-tested value that gets fed to the pixel function

You may test it by reconstructing the texture coordinate when reading the rendered G-buffer (right?).
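
That is: project the reconstructed world position back through the view and projection matrices and check that the resulting UV matches the coordinate the G-buffer was sampled with. A sketch (matview and matproj are placeholder uniform names):

uniform mat4 matview;
uniform mat4 matproj;

// Project a reconstructed world-space position back to screen UV.
// If the reconstruction is correct, this matches the G-buffer texture coordinate.
vec2 worldToScreenUV(vec3 worldPos)
{
    vec4 clip = matproj * (matview * vec4(worldPos, 1.0));
    vec2 ndc  = clip.xy / clip.w;   // perspective divide
    return ndc * 0.5 + 0.5;         // NDC [-1, 1] -> UV [0, 1]
}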

Kaptein

I don't read from a depth texture, JohnnyCode (if I understood you right).

 

But... in the end I gave up and just used the shitty depth texture after all. Now I've lost a lot of FPS, because a depth renderbuffer is just a lot faster.

It's a lot of FPS to lose by not using a depth renderbuffer when rendering basically the entire scene.

 

Anyways,

in conclusion, without raymarching the "ground" fog, it won't look all that good.

 

I ended up combining cheap-ass fog-depth (intersection with fog-height and ground) + simplex3d + smoothing top and bottom.
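
For what it's worth, the "intersection with fog-height and ground" part boils down to clipping the view ray against a horizontal fog plane and measuring how much of it lies inside the fog (a sketch; the names are placeholders):

// How far the view ray travels inside a fog layer below fogHeight,
// between the camera and the reconstructed world position.
// Assumes the ray is not exactly horizontal.
float fogTravelDistance(vec3 cameraPos, vec3 worldPos, float fogHeight)
{
    vec3  ray    = worldPos - cameraPos;
    float rayLen = length(ray);

    // fraction of the ray (0..1) at which it crosses the fog plane
    float t = clamp((fogHeight - cameraPos.y) / ray.y, 0.0, 1.0);

    // fog starts at the camera if we are already inside it, otherwise at the plane;
    // it ends at the surface if that is inside the fog, otherwise at the plane
    float start = (cameraPos.y < fogHeight) ? 0.0 : t;
    float end   = (worldPos.y  < fogHeight) ? 1.0 : t;

    return max(end - start, 0.0) * rayLen;
}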

 

http://fbcraft.fwsnet.net/deferredfog.png


