
Converting screen coordinates to 3D world coordinates


10 replies to this topic

#1 realh   Members   -  Reputation: 185


Posted 04 April 2014 - 10:21 AM

I'm writing a game which could loosely be described as a strategy game, with a typical camera - it looks down onto the world from a high angle with a perspective projection.  The player will interact with the game by tapping or clicking things on the "ground" and will also be able to move the camera a bit.  For various reasons I'm writing my own engine instead of using a ready-made one.  It's based on OpenGL ES 2.0 (or an equivalent subset of OpenGL for PCs) with glm for the maths.

 

With the help of a diagram and schoolboy trigonometry I managed to come up with an equation for calculating the minimum zfar value to use in glm::perspective to render the ground at its farthest from the camera (at the top of the screen) but I don't know how to work out where a point on the screen corresponds to on the ground.  I think glm::unproject will be useful but the trouble is I don't have a meaningful z-coordinate to plug in to glm::unproject.  I know you can read the depth buffer to get this, but I also want to be able to work out which bits of the ground are visible (at the four corners of the viewport) and use this to limit the camera's movement (and to work out which bits of the terrain are outside the view and don't need to be rendered), so it would be better if I could do this before rendering anything.

 

I thought I could get a line joining the same NDC X and Y on the near and far clipping planes then use my diagram/equation to work out where this line crosses my ground plane, but it didn't work.  I think the main problem is that I assumed a linear mapping of Z between world space and device space, and I don't think this is the case for a perspective projection.  I'm also not sure of the Z values for the near and far planes to use as input to glm::unproject. Is it 1.0 for far and -1.0 for near?

 

Rambling on a bit, I had a lot of trouble understanding the perspective divide.  Am I right in thinking this is an extra step OpenGL automatically performs after the matrix transformations, and it just converts [x, y, z, w] into [x/w, y/w, z/w]?  And that an orthographic projection matrix sets w to 1 and a perspective one sets it to z?  But z from which "space"?




#2 cozzie   Members   -  Reputation: 1582


Posted 04 April 2014 - 12:14 PM

I think you can multiply the coordinates by the inverse view-projection matrix.

#3 Buckeye   Crossbones+   -  Reputation: 4400


Posted 04 April 2014 - 12:36 PM

I'm not familiar with what functions GLM has available. But the principle for picking the ground mesh should go something like:

 

Assuming that the depth buffer is set -1 to 1 for near to far planes (see link below):

 

Unproject a 3D vector posNear(mousex, mousey, -1.0) to get the world position at the near plane.

Unproject a 3D vector posFar(mousex, mousey, 1.0) to get the world position at the far plane.

3D vector rayDir = normalize(posFar - posNear)

 

You should be able to pick the ground mesh with posNear and rayDir.

 

If the ground is not a mesh, but a plane: solve for d where posNear + d*rayDir is a point in the plane.

 

EDIT: This post says depth buffer values are -1 to 1.


Edited by Buckeye, 04 April 2014 - 12:55 PM.

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.


#4 Waterlimon   Crossbones+   -  Reputation: 2459


Posted 04 April 2014 - 12:44 PM

If you are using the results of this unprojection for anything other than pure visuals, it might make more sense to raycast into your scene on the CPU. There are probably situations where you want your meshes' visible outlines not to correspond with the actual borders of the object.


o3o


#5 realh   Members   -  Reputation: 185


Posted 04 April 2014 - 02:20 PM

@cozzie: I think multiplying by the inverse of the MVP is basically what glm::unproject does. It's based on a similar function in GLU.

 

If the camera was looking directly down the vertical it would be relatively easy because the ground would have a fixed Z in device space.  But my camera is tilted (as if on the nose of a diving aeroplane) so the ground's Z in device space varies with Y.

 

Buckeye and Waterlimon, you confirm what I was thinking about getting a line between the same (X, Y) coordinates at Znear and Zfar, but I don't know how to work out where this ray intersects my terrain or other objects in world space.  Or even what the values for Znear and Zfar are in the input to glm::unproject. I think it should be NDC, so -1 and 1 respectively?



#6 realh   Members   -  Reputation: 185


Posted 04 April 2014 - 02:38 PM

Sorry, accidental double post.


Edited by realh, 04 April 2014 - 02:39 PM.


#7 Buckeye   Crossbones+   -  Reputation: 4400


Posted 04 April 2014 - 03:00 PM

Disclaimer: I'm not familiar with GLM, in particular.

 

However, once you've calculated posNear and rayDir (see my post above), you may be able to perform intersection tests with spheres and triangles using something as described on this page (which I googled.)

 

For quick culling, you may want to create bounding spheres for your objects. I would think an "intersect sphere" function would be very quick. If you get a hit, and you need to fine-tune the hit location, look for intersections of the line with triangles in the object.

 

With regard to the near and far parameters for setting up the pick vectors: did you look at the link I provided above? What else did you find when you googled?

 

You should be able to do a quick test by using -1 as the z-component of the posNear vector, unproject it and check that it's the same as your eyepoint or camera position.




#8 realh   Members   -  Reputation: 185


Posted 05 April 2014 - 08:35 AM

"With regard to the near and far parameters for setting up the pick vectors: did you look at the link I provided above? What else did you find when you googled?"

 

The first link raises an important point about the depth buffer using the range [0, 1] whereas NDC is [-1, 1], so thanks for posting it. The main thing I found by Googling was that you can read back the depth buffer to get the Z coordinate at a certain screen X, Y, but I couldn't find much about doing my own maths. I've realised the non-linear relationship between Z in NDC and world space must be a red herring, because my ray is in world space, so a simple linear interpolation should find where it crosses Z=0 (assuming that's where my ground is). My mistake was to use 0 instead of -1 for Znear in NDC.

 

 

"You should be able to do a quick test by using -1 as the z-component of the posNear vector, unproject it and check that it's the same as your eyepoint or camera position."

 

Isn't -1 on the near clipping plane rather than at the camera?



#9 Buckeye   Crossbones+   -  Reputation: 4400


Posted 05 April 2014 - 09:25 AM


"Isn't -1 on the near clipping plane rather than at the camera?"

 

What did you get when you tried it? Something other than the camera position? To answer your question: yes, but it should be very close if you use the camera position to set up the view.


Edited by Buckeye, 05 April 2014 - 09:25 AM.



#10 realh   Members   -  Reputation: 185


Posted 05 April 2014 - 10:31 AM

I haven't tried it yet. My Znear is quite a long way from the camera, because in this game the field of view is wide in relation to the maximum height of visible objects (although it uses a perspective projection, it would probably be classed as 2.5D), and I read it's best to make the near-far range as small as possible to make the most of limited depth buffer precision.



#11 Buckeye   Crossbones+   -  Reputation: 4400


Posted 05 April 2014 - 10:57 AM


"I read it's best to make the near-far range as small as possible to make the most of limited depth buffer precision."

That's correct.

 

The pick method described will still work. It's just a matter of converting the mouse coordinates to the vector in world space that screen position represents. You'll still end up with the world position of the ground under that screen position.

 

As you mentioned, you can also use the depth buffer (if you have access to it). Make a vector of (mousex, mousey, depth-buffer-value) and unproject it to get the world position. Note, however, that this method will give the world position of whatever object was rendered at that pixel - it may not be the ground plane.


Edited by Buckeye, 05 April 2014 - 10:59 AM.





