mouse operations in 3D...how?
I am working on the big multiplayer game you may have heard a bit about...SOL. I recall that our designer posted a very impressive screenshot from early development in the visual arts forum...
I am (at the moment) lead programmer on this project, and have a rather large problem.
How does one convert a 2D mouse coordinate into a point (or vector, if that's more appropriate - I think it is) in world space? I thought it might have something to do with finding the inverses of all the matrices used to go world->screen for the scene, but that's where my lack of advanced math lets me down.
Qatal
die or be died...i think
Yeah, that's the way you do it - or if you're using pure OpenGL or DirectX there are functions you can use; otherwise you just grab the inverses of all the matrices you use and multiply them in reverse order (I hope that's right).
Or you can just get the inverse of the combined matrix (projection * world transformation, etc.), but I believe it's actually easier - and more accurate - to make an inverse of the projection matrix and transform the world with inverse values (i.e. the rotation matrix for rotating +55 deg is the inverse of the one for rotating -55 deg) to come up with the "reverse" matrix.
Most of these matrices (and their inverses) can be found in the available documentation: in the OpenGL Red Book (which is available from gamedev.net, around Appendix E or something), and in the DirectX docs somewhere in the matrices area, though it's more scattered there.
Basically, transforming 3D world coordinates to 2D screen coordinates is a whole bunch of "vector space projections": you grab a 3D point, transform it into camera coordinates, project it into the camera, and then into the 2D viewport. Now if you do it the other way (which is what you want), you transform from screen coordinates into camera space, then project from 2D space back into 3D space, and then transform it (rotate, translate, scale?, shear?) into the "true" world space.
A point to note here is that this shouldn't be done every single frame if you can avoid it. I'm pretty sure you can calculate the inverse projection matrix at the same time that you calculate the real projection matrix, and that should save you some time if you store it somewhere.
And like I said above, it's also faster if you create the inverse world transformation matrix the same way you would a normal transformation matrix, only feeding it negated values.
But I'm sure I'm just raving on here, so good luck with what you want to do.
Dæmin
(Dominik Grabiec)
dominik.grabiec@student.adelaide.edu.au
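The transform-back idea above can be sketched without committing to any particular API. This is a minimal, illustrative version (function and struct names are mine, not from RM or D3DX): it undoes the projection for a symmetric perspective frustum, producing a camera-space ray which you would then rotate into world space with the inverse of the view transform.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Unproject a pixel position into a camera-space ray direction.
// Assumes a symmetric perspective frustum with vertical field of view
// fovY (radians), aspect ratio aspect, and a width x height viewport.
Vec3 screenToCameraRay(float mouseX, float mouseY,
                       float width, float height,
                       float fovY, float aspect)
{
    // Map pixel coordinates to normalised device coordinates in [-1, 1].
    float ndcX = (2.0f * mouseX / width) - 1.0f;
    float ndcY = 1.0f - (2.0f * mouseY / height); // screen Y grows downward

    // Undo the projection: scale by the frustum half-extents at z = 1.
    float tanHalfFov = std::tan(fovY * 0.5f);
    Vec3 ray;
    ray.x = ndcX * tanHalfFov * aspect;
    ray.y = ndcY * tanHalfFov;
    ray.z = 1.0f; // pointing into the screen, in camera space
    return ray;
}
```

Sanity check: the centre of the screen should map straight down the camera's view axis, i.e. (0, 0, 1).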
Yes, I am using pure Direct3D - RM in fact, though everyone insists that it's ancient. So what are these functions?
I have tried implementing the matrix thing, but it's still hugely confusing (I like to understand my own code...), so I had a look around on the web and found a bunch of VR sites talking about something called a "pick ray". Is this the same thing? This I understood, and have implemented it as follows:
(a) take the camera frame of reference
(b) set another frame to the camera position + an offset ((mousex-320)*0.0064 works OK)
(c) get the orientation (dir and up) of the offset frame relative to the world (that's simple... I assume there are matrices being used behind the scenes, but (?))
(d) iterate through multiplications of that vector to cast the pick ray and return objects that are intersected
Would this work? I have it working for some camera positions, but not others. Why?
Qatal
die or be died...i think
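Steps (a)-(d) above can be sketched like this, reusing the 640x480 half-resolution and the 0.0064 scale from the post (everything else here is illustrative). One detail worth checking: the mouse offset has to be applied along the camera's own right/up axes, not the world axes - applying it in world axes is a common reason picking works for some camera orientations but not others.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// (a)-(c): build the pick-ray direction from the camera's frame of
// reference, offsetting along the camera's local right and up axes.
Vec3 pickRayDir(Vec3 camDir, Vec3 camUp, Vec3 camRight,
                float mouseX, float mouseY)
{
    float ox = (mouseX - 320.0f) * 0.0064f; // scale taken from the post
    float oy = (240.0f - mouseY) * 0.0064f; // screen Y grows downward
    return normalize(camDir + camRight * ox + camUp * oy);
}
// (d): then march along cameraPos + t * dir, testing objects for
// intersection (or use a proper ray-vs-bounding-volume test).
```

For a camera looking straight down +Z with the mouse dead centre, the ray should come out as (0, 0, 1).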
Casting a ray is one way of doing it, but an alternative is using the depth buffer to approximate the coordinates of the point. It is approximate because you only know the point is within a small frustum around your calculated coordinates. It would take a bit of experimentation to figure out the exact boundary of that frustum, and then things like anti-aliasing will interfere with it. It would most likely be more sensible to set a bounding sphere around the calculated point, with the radius proportional to the depth value. Since it's your user trying to click on whatever it is, it doesn't hurt to make it a bit easier on them.
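For what it's worth, the conversion from a depth-buffer value back to a view-space distance is plain math, independent of how you actually read the buffer back (which is a separate, API-specific question). A sketch, assuming a standard D3D-style perspective depth in [0, 1] with near plane n and far plane f:

```cpp
#include <cassert>
#include <cmath>

// Recover view-space distance z from a perspective depth value d in [0,1].
// Inverts the D3D-style mapping d = f*(z - n) / (z*(f - n)),
// so d = 0 gives the near plane and d = 1 gives the far plane.
float depthToViewZ(float d, float n, float f)
{
    return (n * f) / (f - d * (f - n));
}
```

Combined with the pick-ray direction, that distance gives the approximate 3D point under the cursor, and the pick sphere's radius can be scaled proportionally to it.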
Now, I understood the concept OK, but how do you do it? My 3D programming knowledge ends where D3DIM starts.
I'm not sure how much bigger the sphere should be made, considering that the user has to be able to pick small ships out of a cloud, or at great distances.
Could you give me some pointers as to how it can be implemented in RM? How do you access the depth buffer?
Qatal
die or be died...i think