how can I translate absolute screen coords to a location in 3d space?

Started by
6 comments, last by ggoodwin37 20 years, 9 months ago
I am working on a very simple 3D demo and I want an object to bounce around in three dimensions. I'd like the object to switch direction whenever it hits the edge of the screen (approximately), so I need to calculate where the edge of the screen is in 3D space and use that for my bounds check. I have the world matrix for the object, and the view and projection matrices for the scene. Do I need to invert the matrices in order to work backward from the screen coords to the 3D coords? If so, how would I go about this (I'm using Direct3D)? Thanks!
You could test for a collision against the viewing frustum planes.

Or in OpenGL you can use gluUnProject to get world coordinates from screen coords. I'm sure there is some sort of DX equivalent.
the basic camera in a game is made of
- a position
- an orientation
- far plane
- near plane
- Yfov
- aspect ratio
- width in pixels
- height in pixels

if you draw on paper a side view of the frustum, and a top view, it might help you understand the calculations below. It's quite simple, really.


With the camera at (0, 0, 0) and with identity orientation, the eight corners of the frustum are, in world coordinates:

ynear = znear * tan(Yfov/2)
xnear = znear * tan(Yfov/2) * AspectRatio

yfar = zfar * tan(Yfov/2)
xfar = zfar * tan(Yfov/2) * AspectRatio

V0 (-xnear, -ynear, znear)
V1 ( xnear, -ynear, znear)
V2 ( xnear, ynear, znear)
V3 (-xnear, ynear, znear)


W0 (-xfar, -yfar, zfar)
W1 ( xfar, -yfar, zfar)
W2 ( xfar, yfar, zfar)
W3 (-xfar, yfar, zfar)

the planes are made of the points:

Left (V0, W0, W3, V3)
Right (V1, W1, W2, V2)
Floor (V0, W0, W1, V1)
Ceiling (V3, W3, W2, V2)
Near (V0, V1, V2, V3)
Far (W0, W3, W2, W1)


first, calculate V0, V1, ..., W2, W3 as above
rotate the points with the camera orientation
translate the points to the camera position
build planes using the vertices (you only need 3 of them per plane)

you might need to invert the sign of znear and zfar, like

V0 (-xnear, -ynear, -znear)
V1 ( xnear, -ynear, -znear)
V2 ( xnear, ynear, -znear)
V3 (-xnear, ynear, -znear)


W0 (-xfar, -yfar, -zfar)
W1 ( xfar, -yfar, -zfar)
W2 ( xfar, yfar, -zfar)
W3 (-xfar, yfar, -zfar)

it depends on whether you are using a left-handed coordinate system or a right-handed coordinate system

left handed :
x pointing to your right
y pointing up
z pointing towards the monitor

right handed :
x pointing to your right
y pointing up
z pointing towards you




[edited by - oliii on July 6, 2003 8:23:33 PM]

Everything is better with Metal.

"Rotate the points with the camera orientation
translate the points to the camera position"

so I just transform each point by my view (or camera) matrix, right?

I had basically worked out those equations but I was having difficulty factoring in the camera position and orientation. Thanks for shedding some light on the subject! (I wish my question had been about lighting, because then that last line would have been extremely witty).

those xnear, xfar values etc. are a lovely thing. for every distance z from the camera you can just multiply them with z to get your "world limits" in x and y (if you already transformed the coords... one matrix-vector multiplication, or one vector difference and three dot products).

the above method is exact for points; it needs a little tweaking for spheres (it basically tests the box around the sphere), but it works without the need for any frustum plane normals.
f@dz
http://festini.device-zero.de
can you please elaborate on transforming the specific world-limit points? I seem to be running into confusion when it comes to this area. Are the frustum corners in world space or in view space? My objects are all in world space, so I want the frustum corners in world space as well, right? So wouldn't I need to apply the _inverse_ of the view matrix to each frustum corner in order to get the points in world space? *sees stars* Any help would be appreciated.
your objects are in world space? i assume they are all static or bounding volumes then?

if you want to work in world space i would rather get the plane normals and transform them with the camera matrix (the inverse of the view matrix; i rather look at the view matrix as the inverse of the camera). that should be the above approach.

mine would do something like this:

vec delta = object.position - cam.position;

z = delta dot cam.forward
x = (delta dot cam.right) / z
y = (delta dot cam.up) / z

[basically you're just transforming the point into camera space, which isn't far from screen space. the screen x coord would be something like: sx = (.5 + (x / (2*xfar))) * resolution_width

but don't do this for every vertex unless you're going to store them and send them as already transformed vertices, so it won't be done a second time in hardware]

you would then just compare x, y, z with the respective ..near/..far values from above (for points.. for spheres it depends on whether you check for inside or outside, and you have to modify z with the radius before dividing).


[edited by - Trienco on July 8, 2003 4:19:41 AM]
f@dz
http://festini.device-zero.de
do you mean to say that my object positions SHOULDN'T be in world space? I thought world space is the common system where you set up everything relative to the whole "world". Am I missing something fundamental?


I'll try what you suggested. I am still trying to get comfortable with how everything works in 3d so I am probably doing something completely wrong...oh well, maybe a breakthrough is near. thanks for helping.

[edited by - ggoodwin37 on July 8, 2003 5:30:08 AM]

This topic is closed to new replies.
