# OpenGL Help Converting from NDC to World Coordinates


## Recommended Posts

I'm trying to take the corners of my camera frustum in NDC, convert them into world space, and use those coordinates to generate something like cascaded shadow maps. I've been trying to mimic the gluUnProject function (from https://www.opengl.org/wiki/GluProject_and_gluUnProject_code), except instead of starting with screen coordinates and converting to NDC, I just pass the NDC values as arguments. The problem is that the output of my unproject function is incorrect. Here's a visualization of my scene and camera to help explain it. It's not to scale or anything, but it should help you understand.

My camera is placed at (0, 0, 5) looking at (0, 0, 0), which is where the monkey is placed. The near plane is set to 0.000001 (for testing purposes) and the far plane is set to 50. So the z value of the near-plane corners should be roughly 5 (which it is), and the z value of the far-plane corners should be about -45, but for some reason it comes out to 3.294118.

Here's what my unproject function looks like:

```cpp
void unproject(float x, float y, float z){
    vec4 in;
    vec4 out;
    mat4 m;
    mat4 A;
    vec4 world_coord;
    vec3 object_coordinate;

    A = mvp_matrix;

    m = inverse(A);

    // in the sample I was following, there's a bit of code to convert screen coords to NDC;
    // I just set 'in' equal to my parameters, which are already in NDC
    in = vec4(x, y, z, 1.0);

    out = m * in;

    out[3] = 1.0 / out[3];

    world_coord[0] = out[0] * out[3];
    world_coord[1] = out[1] * out[3];
    world_coord[2] = out[2] * out[3];
}
```



I call unproject() like this: once for every corner of the frustum, and once for a point I'll use to set lookAt for my shadow maps:

```cpp
unproject(-1.0, -1.0, -1.0);     // near bottom left
unproject(-1.0, -1.0, 0.999999); // far bottom left
unproject(-1.0,  1.0, -1.0);     // near top left
unproject(-1.0,  1.0, 0.999999); // far top left
unproject( 1.0, -1.0, -1.0);     // near bottom right
unproject( 1.0, -1.0, 0.999999); // far bottom right
unproject( 1.0,  1.0, -1.0);     // near top right
unproject( 1.0,  1.0, 0.999999); // far top right
unproject( 0.0, -1.0,  0.0);     // the point at the bottom middle of NDC
```


When I print the output of this function I get:

```
-0.000001 -0.000001 4.999999 // near bottom left
-2.108860 -1.318038 3.294118 // far bottom left
-0.000001  0.000001 4.999999 // near top left
-2.108860  1.318038 3.294118 // far top left
 0.000001 -0.000001 4.999999 // near bottom right
 2.108860 -1.318038 3.294118 // far bottom right
 0.000001  0.000001 4.999999 // near top right
 2.108860  1.318038 3.294118 // far top right
 0.000000 -0.000001 4.999998 // the point at the bottom middle of NDC
```


The z values of the near plane seem right, but the x and y values seem wrong. And the x, y, and z values of the far plane are all wrong. It may have something to do with the inverted matrix, because when I print it I get this:

```
MVP MATRIX
0.892592 0.000000  0.000000  0.000000
0.000000 1.428148  0.000000  0.000000
0.000000 0.000000 -1.000000 -1.000000
0.000000 0.000000  4.999998  5.000000

INVERTED MATRIX
 1.120332 0.000000        0.000000       0.000000
 0.000000 0.700207        0.000000       0.000000
-0.000000 0.000000 -2621440.000000 -524288.000000
 0.000000 -0.000000 2621439.000000  524288.000000
```


2621440 and 524288 seem a bit large for this, but I got the same thing from separate inverse-matrix calculators.

Any ideas?

Edited by theHOUSE16

---

Oh, and the reason I use 0.999999 for some z values when I call unproject is that I was getting undefined or -inf results when I used 1.0, and I thought that if I got an actual number out of the function it would be more useful for finding out what I was doing wrong.

---

Maybe you should first try to simplify your "unproject" function to find the problem. I would write it like this:

```cpp
// Transforms the normalized device coordinate 'ndc' into world space.
vec3 UnprojectPoint(vec3 ndc)
{
    vec4 projPos = inverse(modelViewProjectionMatrix) * vec4(ndc, 1.0);
    return projPos.xyz / projPos.w;
}
```


I personally prefer to pass inverse matrices to the shader directly instead of inverting them during shader execution, since they won't change during the entire render pass.

I also wonder why you are using variable names like "in" and "out". These identifiers are reserved as input/output qualifiers for function parameters in both GLSL and HLSL.

Edited by LukasBanana

---

Just a tip: in many tutorials I've seen, rather than deriving the world coordinates of your pixel from NDC, you can just output them from your vertex shader and use them as input to your fragment shader. That's what I did when implementing cascaded shadow maps.

```glsl
#version 420 // or whatever

layout (location = 0) in vec4 position;
layout (location = 0) out vec3 worldPos;

uniform mat4 gMVP;
uniform mat4 gModel;

void main()
{
    gl_Position = gMVP * position;
    worldPos = (gModel * position).xyz;
}
```

```glsl
#version 420

layout (location = 0) in vec3 worldPos;
layout (location = 0) out vec4 fragColor;

void main()
{
    // do whatever
}
```

Note that in your fragment shader, worldPos will be an interpolated vector, corresponding to the location of the pixel on the given primitive (exactly what you want). And of course, if you want eye coordinates in your fragment shader, you'd transform by the model-view matrix in the vertex shader instead.

Edited by mikev
