Screen Space to World Space

Hello, I am working on some camera thing, and I don't know how to solve a problem, so I would like you guys to give me some ideas. What I am trying to do is transform all 3D objects into screen space, average their positions in screen space to find the screen-space center position, and then unproject that center position back to world space.

The problem starts when I compute the screen-space position of each object:

screen_pos = world_pos * world_to_screen_mat;
screen_pos.x = screen_pos.x / screen_pos.w;
screen_pos.y = screen_pos.y / screen_pos.w;
screen_pos.z = screen_pos.z / screen_pos.w;

I get a w value, which I use to normalize my x, y, z screen position. The trouble shows up when I try to find the average center position:

for (eachObject)
{
    center_screen_pos += eachObject->screen_pos;
    center_w += eachObject->w;
}

// Find the average screen pos and w
center_screen_pos = center_screen_pos / numOfObject;
center_w = center_w / numOfObject;

// Convert the center screen pos before converting back to world space
center_screen_pos.x = center_screen_pos.x * center_w;
center_screen_pos.y = center_screen_pos.y * center_w;
center_screen_pos.z = center_screen_pos.z * center_w;
center_screen_pos.w = center_w;

center_world_pos = center_screen_pos * screen_to_world_mat;

With the code above it doesn't work, and the problem seems to be with the w component when calculating center_screen_pos. It seems that the way I average the w component is not the right way to do it. I am so confused about the w component; can anyone give me some suggestions?

By the way, the result I get is that when the objects are close in world space, everything is fine, but when the objects are far away, the world center position is way off. Thanks
Nachi Lau (In Christ, I never die) www.sky-dio.com/Nachi
There are some things I don't quite get about what you're doing.

For instance, you project the centers of some objects into screen space, then compute their screen-space center, then project that back to world space.
Why? Why not compute their world-space center and project that single center into screen space?
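
Something like this, for example (a minimal sketch, not tied to any particular library; the Vec3/Vec4/Mat4/transform names are placeholders, and it assumes the same row-vector * matrix convention as your snippet):

#include <vector>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Row-vector * matrix, with an implicit w = 1 on the input point.
Vec4 transform(const Vec3& p, const Mat4& M)
{
    return {
        p.x * M.m[0][0] + p.y * M.m[1][0] + p.z * M.m[2][0] + M.m[3][0],
        p.x * M.m[0][1] + p.y * M.m[1][1] + p.z * M.m[2][1] + M.m[3][1],
        p.x * M.m[0][2] + p.y * M.m[1][2] + p.z * M.m[2][2] + M.m[3][2],
        p.x * M.m[0][3] + p.y * M.m[1][3] + p.z * M.m[2][3] + M.m[3][3],
    };
}

// Average the object positions while they are still ordinary 3D points
// (assumes the list is not empty).
Vec3 worldCenter(const std::vector<Vec3>& positions)
{
    Vec3 c = { 0.0f, 0.0f, 0.0f };
    for (const Vec3& p : positions) { c.x += p.x; c.y += p.y; c.z += p.z; }
    const float n = static_cast<float>(positions.size());
    c.x /= n; c.y /= n; c.z /= n;
    return c;
}

// Only this single center point gets projected, if you need it on screen.
Vec3 projectToScreen(const Vec3& worldPoint, const Mat4& worldToScreen)
{
    Vec4 clip = transform(worldPoint, worldToScreen);
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
}

That way the averaging happens while the points still behave linearly, instead of after the divide by w.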

Furthermore, there is no one-to-one mapping between all the points of a plane and all the points of space. (If there were, we would essentially be able to see behind objects... think about it.)
You can't unproject a 2D (u,v) point back to world space and expect to be left with a unique 3D point (x,y,z), the original source of the projection.
All the 3D points on the line of projection through that screen point map to the same point on the plane, so you can't reverse the procedure.

What you describe is essentially a "mouse picking" problem: you start with a point on the viewport, and the most you can recover is a ray pointing in the direction of the desired 3D point, usually starting at your current origin (typically the world-space position of the camera).
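
In code it looks roughly like this (again just a sketch, reusing the Vec3/Vec4/Mat4 and transform helpers from the snippet above; screenToWorld is assumed to be the already-inverted view-projection matrix, and the matrix inversion and ray normalization are left out):

struct Ray { Vec3 origin; Vec3 direction; };

// Re-attach w = 1 to a screen-space point, run it through the inverse
// matrix, then divide by the resulting w to get an ordinary 3D point back.
Vec3 unproject(float u, float v, float depth, const Mat4& screenToWorld)
{
    Vec4 world = transform(Vec3{ u, v, depth }, screenToWorld);
    return { world.x / world.w, world.y / world.w, world.z / world.w };
}

// A single (u, v) only pins down a whole line of world-space points, so the
// best you can recover is a pick ray: unproject the near and far ends of
// that line. Depth range 0..1 is the Direct3D convention; OpenGL uses -1..1.
Ray pickRay(float u, float v, const Mat4& screenToWorld)
{
    Vec3 nearPoint = unproject(u, v, 0.0f, screenToWorld);
    Vec3 farPoint  = unproject(u, v, 1.0f, screenToWorld);
    return { nearPoint,
             { farPoint.x - nearPoint.x,
               farPoint.y - nearPoint.y,
               farPoint.z - nearPoint.z } };
}

You would then intersect that ray with your scene to decide which 3D point the 2D point actually "meant".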


Also... you speak of an "x, y, z screen position" and a "screen W".
There are no such things. Screen space is two-dimensional, and every point in it is assigned a (u,v) pair of coordinates. Points that are meant to map onto the viewport satisfy -1 <= u,v <= 1.
The W component is not a concept of screen space either. It's a homogeneous coordinate, only used to map points into screen space.
An actual physical point should have W == 1; this is why you always have to normalize with respect to W for the result to have any meaning.
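
For instance (a tiny standalone toy, only there to show the divide by W; nothing in it is specific to your engine):

#include <cstdio>

struct Point4 { float x, y, z, w; };

// A homogeneous point only describes an actual position once w == 1,
// so divide every component by w before reading off the result.
Point4 normalizeW(const Point4& p)
{
    return { p.x / p.w, p.y / p.w, p.z / p.w, 1.0f };
}

int main()
{
    // (2, 4, 6, 2) and (1, 2, 3, 1) are the same physical point;
    // the divide by w is what makes that visible.
    Point4 a = normalizeW({ 2.0f, 4.0f, 6.0f, 2.0f });
    std::printf("%g %g %g %g\n", a.x, a.y, a.z, a.w); // prints 1 2 3 1
    return 0;
}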


Can you explain in further detail what *exactly* you're trying to do? I think you'll get more help that way.

