
Moving objects with the mouse in 3D


11 replies to this topic

#1 ebody   Members   -  Reputation: 122


Posted 09 September 2008 - 04:05 AM

Hi, I assume no one is going to give me complete code for moving objects with the mouse in 3D space, so I'm asking at least for the necessary steps. I'd like to move something the user picked, but it has to move exactly the same as the mouse cursor, assuming the view is at any random angle. Axes will be constrained manually. Can someone help me, and tell me what I should do?


#2 CableGuy   Members   -  Reputation: 897


Posted 09 September 2008 - 04:22 AM

This is how I would do it:
1. Determine which object was picked using ray-bounding shape intersection.
The origin of the ray is the camera position and the direction is determined by the mouse cursor.
2. Unproject the mouse coordinates to get back a 3D position and tell your object to move there.

When using gluUnProject, you supply it three coordinates in window space and it returns a position in object space. Obviously the x and y come from the mouse cursor. To get a direction for the first step, you would unproject at two depths, say z=0 and z=1, and subtract the results. For the second step you will have to specify a z value (either by projecting a known point to get its window-space z value, or using some vector math, depending on your situation).
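The two-depth unproject trick can be sketched without GLU. This is a minimal stand-in for gluUnProject, assuming a symmetric perspective projection and an identity modelview matrix (so the result is in eye space); all names here are illustrative, not from any real API:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Minimal stand-in for gluUnProject, assuming a symmetric perspective
// projection (fovY in radians, aspect, zNear, zFar) and an identity
// modelview, so the result is in eye space. winZ follows the GL convention:
// 0 maps to the near plane, 1 to the far plane. winY is GL-style (origin
// bottom-left); flip it first if your mouse coordinates start at the top.
Vec3 unprojectEye(double winX, double winY, double winZ,
                  double fovY, double aspect,
                  double zNear, double zFar,
                  int viewportW, int viewportH)
{
    // window coordinates -> normalized device coordinates in [-1, 1]
    double ndcX = 2.0 * winX / viewportW - 1.0;
    double ndcY = 2.0 * winY / viewportH - 1.0;
    double ndcZ = 2.0 * winZ - 1.0;

    // invert the perspective depth mapping to recover eye-space depth
    // (negative in front of the camera, as in OpenGL)
    double eyeZ = (2.0 * zFar * zNear) /
                  (ndcZ * (zFar - zNear) - (zFar + zNear));

    // frustum cross-section size at that depth
    double halfH = std::tan(fovY * 0.5) * -eyeZ;
    double halfW = halfH * aspect;
    return { ndcX * halfW, ndcY * halfH, eyeZ };
}

// Step 1's ray direction: unproject the cursor at two depths and subtract.
Vec3 mouseRayDir(double winX, double winY,
                 double fovY, double aspect,
                 double zNear, double zFar,
                 int viewportW, int viewportH)
{
    Vec3 a = unprojectEye(winX, winY, 0.0, fovY, aspect, zNear, zFar, viewportW, viewportH);
    Vec3 b = unprojectEye(winX, winY, 1.0, fovY, aspect, zNear, zFar, viewportW, viewportH);
    Vec3 d = { b.x - a.x, b.y - a.y, b.z - a.z };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```

A sanity check: the ray through the viewport centre should point straight down the -Z axis.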

Hope this helps.

#3 haegarr   Crossbones+   -  Reputation: 4311


Posted 09 September 2008 - 04:23 AM

Movement exactly like the mouse pointer means that a particular point of the moved object is to be constrained to a plane that is parallel to the view plane. For that point the co-ordinate frame origin of the object is fine.

So, prepare dragging by computing the object's frame origin in camera-local space. In that space you are allowed to alter the horizontal and vertical co-ordinates, but not the depth co-ordinate (this is, in effect, the restriction to the said plane). Use the mouse pointer position, view size, and projection mode to compute the missing horizontal and vertical co-ordinates. You will presumably need to transform the resulting position back into the global frame.
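This scheme can be sketched for the simplest case: the camera sits at the origin looking down -Z, so camera-local space equals world space (a real implementation would first transform the object's origin by the view matrix, and transform the result back by its inverse). The function name and parameters are illustrative:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// The depth co-ordinate captured at drag start is preserved; only the
// horizontal and vertical co-ordinates follow the mouse, so the point stays
// on a plane parallel to the view plane.
Vec3 dragOnViewParallelPlane(const Vec3& objAtPickTime,   // object origin, camera space
                             double mouseX, double mouseY, // window coords, origin bottom-left
                             double fovY, double aspect,
                             int viewportW, int viewportH)
{
    double ndcX = 2.0 * mouseX / viewportW - 1.0;
    double ndcY = 2.0 * mouseY / viewportH - 1.0;

    // frustum cross-section size at the object's (frozen) depth
    double depth = -objAtPickTime.z;   // distance in front of the camera
    double halfH = std::tan(fovY * 0.5) * depth;
    double halfW = halfH * aspect;

    // depth unchanged: the restriction to the plane parallel to the view plane
    return { ndcX * halfW, ndcY * halfH, objAtPickTime.z };
}
```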

#4 mullwaden   Members   -  Reputation: 122


Posted 09 September 2008 - 04:32 AM

Simple case: the camera is looking down along the z-axis, and I presume you would then move the object in an XY-plane somewhere. So what you need is:
- the distance from that plane to your camera; in this case the distance of the object along the z-axis.
- your projection matrix. For example, if it's a gluPerspective projection, you have a sort of pyramid whose angle along one plane corresponds to the field of view you put in, while the other angle is 2 * atan(ratio * tan(fovAngle / 2)).

With this you can calculate the size of the projection pyramid's base at that z-value and map that to the screen.

If ratio = 1 and fovAngle = 45 degrees:

2 * (zDistance * tan(fovAngle / 2)) = pW (the width (and height) of the pyramid's base at that depth)

pW / screenWidth now gives the ratio of how many world units in OpenGL correspond to moving the mouse a certain number of pixels.

Warning: I wrote this quickly, it may be wrong :)
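The pixels-to-world ratio above can be sketched as a small helper. This assumes a symmetric vertical field of view in radians; the frustum cross-section at distance zDistance has height 2 * zDistance * tan(fovY / 2) (note the half-angle takes a tangent). The function name is illustrative:

```cpp
#include <cmath>

// World units per pixel of mouse motion at a given depth, for a symmetric
// perspective projection with vertical field of view fovY (radians).
double worldUnitsPerPixel(double zDistance, double fovY, int screenHeight)
{
    double planeHeight = 2.0 * zDistance * std::tan(fovY * 0.5);
    return planeHeight / screenHeight;
}
```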

#5 steven katic   Members   -  Reputation: 274


Posted 09 September 2008 - 03:49 PM

Asking for help in forums is a good exercise in honing your skills at giving precise, unambiguous descriptions. For example:

haegarr says:
Quote:

Movement exactly like the mouse pointer means that a particular point of the moved object is to be constrained to a plane that is parallel to the view plane.


That might be one meaning. But I think ebody has a 3D editing tool in mind, and probably means he wants the model to follow the user's mouse pointer while still being constrained to a particular set of axes or a plane in the global 3D space, similar to how you can move objects in Max or Maya.

Would that be right ebody?

#6 steven katic   Members   -  Reputation: 274


Posted 09 September 2008 - 07:01 PM

Gee you ask hard questions (I have some spare time).

I reckon that would make a great programming competition question too.
There is next to no documentation around on it; you could think it's too trivial to bother with, or too complex to bother with, but in reality it's more of a very specialised domain. There are limited versions of it, with the tools you see in Max and Maya perhaps being the most sophisticated.

Wouldn't you want to map the viewport plane down onto the 3D world plane (the one movement is constrained to at the time, e.g. the XY plane)? Like a plane/movement version of the virtual trackball concept (which is popular for rotating objects).

#7 ebody   Members   -  Reputation: 122


Posted 09 September 2008 - 08:07 PM

Thank you for your replies, I'll try to implement them in my app right away.

I'll let you know if I managed to do it and if necessary I'll ask more questions.


Oh, I've already got one question: what values should I actually use for the third parameter of the gluUnProject function? Can I use any value, or 0 and 1 like CableGuy suggested? What meaning do they have, and what will happen if I use, for instance, 0.3 or even 226?



@steven katic: exactly, I mean moving objects like in Maya or Max. When the object moves faster or slower than the mouse pointer it's pretty easy, and that's what I've got now.

[Edited by - ebody on September 10, 2008 2:07:55 AM]

#8 steven katic   Members   -  Reputation: 274


Posted 09 September 2008 - 09:21 PM

Quote:

@steven katic: exactly, I mean moving objects like in Maya or Max. When the object moves faster or slower than the mouse pointer it's pretty easy, and that's what I've got now.


Well, that might sound like you're halfway there.
How did you implement what you have now? That is, what kind of variables do you use to get the mouse pointer to move an object at the moment? It may just be a case of extending your current implementation.


#9 ebody   Members   -  Reputation: 122


Posted 09 September 2008 - 09:43 PM

Since my last post I've made some changes in my app, and now I'm using the gluUnProject function with 0.3 as the third (z) parameter (don't ask me why, I've just seen it somewhere). I also use an additional API for vector calculations. Anyway, I've got this now (slightly changed to make it clearer; it's C++ but I use some pseudocode here):

here I retrieve the mouse coordinates translated to object coordinates:

Mouse2Modelview(old_mouse_pos, [out] mousePos_old);
Mouse2Modelview(new_mouse_pos, [out] mousePos_new);

then, using another API, I calculate the movement vector:

CINSVector movement = mousePos_old.objectCoordinates - mousePos_new.objectCoordinates;

finally, using the same API, I set the translation (the x-translation in this case):

transformMatrix.SetTranslation(movement.x, 0.0f, 0.0f);

if I move the mouse pointer along the x-axis (I have something like the RGB arrows for moving in Maya), the object moves a little faster, though not as fast as before, and in the opposite direction if I rotate the scene 180 degrees.

hmm, and if I tried to set the mouse pointer to the object's new position it wouldn't be faster any more, but then I'd have to project the object coordinates back to window coordinates. Could that work? Then the mouse pointer would be snapped to the move arrow of the "trackball".


http://img98.imageshack.us/my.php?image=model1en4.jpg

strange, no preview available :(

this is how I understand it: on the right side (the screen) I've got the mouse coordinates (X, Y), which are translated by the gluUnProject function into model coordinates on the left side.

if the screen is parallel to the Y-axis, the movement of an object is the same as that of the mouse pointer (the right red line), but if the screen (camera, eyes) is at some angle to the Y-axis, the value of the translation is the orange line, even though the distance computed for the translation comes from the model coordinates.

(let's also say that the bottom end of the red and orange lines is the start position of the mouse pointer and the upper end is the end of the movement)

[Edited by - ebody on September 10, 2008 5:43:59 AM]

#10 steven katic   Members   -  Reputation: 274


Posted 11 September 2008 - 09:33 AM

Take a look at this:



OK, then... I don't know how to refer to images either. It's here: http://img183.imageshack.us/my.php?image=moveobjectsmalad4.jpg

Looks overly contrived?...probably because it is.

Here's the context:
It's a form of collision detection we want to do (which is usually a staple ingredient of games). And we want our interaction to be 'realtime', i.e. very responsive to the user's mouse. All other similar examples of this type of interaction are based on some sort of reference space or object, be it visible (such as a terrain in a game that is clicked on with the mouse so that an object moves to that position) or virtual (like the virtual trackball used to rotate objects in 3D space).

So we can take one more little step and apply these well-known concepts to the problem (not for any special reason, except maybe for its initial simplicity... and it's interesting?). I am pretty sure this can also be done at a lower level with a bunch of matrix manipulation if you like.

So, back to the pretty little picture and a bit of explanation.

It's a snapshot of moving the (cube) object from point B to point I in 3D space, which to the user (looking through the camera/viewport on the computer screen) will look like the object following the mouse pointer from point A to E.

to be continued...(be back soon).

[Edited by - steven katic on September 11, 2008 4:33:51 PM]

#11 steven katic   Members   -  Reputation: 274


Posted 12 September 2008 - 09:48 AM

OK, what's happened is I think I've found myself trying to write some sort of article/tutorial to explain the diagram and its implementation (I can tell you it's easier to implement than to explain an implementation in writing). So it is a work in progress that may sound a bit rough around the edges or incomplete at the moment (so by all means ask any questions, ebody). For now, here is some more food for thought about that diagram:

Overview
As you can see in the diagram, the process is broken into a series of steps from A to I, and the result of each step is a piece of useful data used in the process of getting the object from B to I as the mouse pointer is moved from A to E.

If you take another look at the diagram, you may notice that EFGI is just a translated version of ABCD. A tell-tale sign that ABCD may be invariant? Why not: let's treat ABCD as invariant. It will simplify our work dramatically (if you aren't familiar with the term invariant as used in computer science, you can always look it up). So we are going to use ABCD as an invariant to move the object by the vector BI in 3D space when the mouse pointer is moved by the vector AE on the screen. As long as ABCD is maintained as an invariant, moving the object with the mouse pointer can be done from any camera/viewport position and rotation (OK, there will be a few exceptions and special cases, like when the camera/viewport plane is perpendicular to the virtual plane, but we'll get to that in due course).

Here's how we can do it:
Calculate ABCD when the user picks the object. When the mouse pointer moves from A to E, derive FGI. Then move the object by the vector BI. We can do this easily when ABCD is invariant: because the relationship between A, B, C and D doesn't change, we use it to derive FGI.

Here's the first step (when the object is picked):

A: get the 2D mouse pointer position (call it MousePointer2DPos).
B: derive the 3D position on the object that the mouse pointer hit (call it MousePointer3DPos).
C: map the MousePointer3DPos perpendicularly (and down, in this case) onto the virtual plane (call it MappedMousePointer3DPos).
D: derive the 2D position (on the screen viewport) of the MappedMousePointer3DPos from step C (call it MappedMousePointer2DPos).

Now let's stop for a moment. It all sounds a bit convoluted so far, even to me. Some of these steps should sound pretty ordinary to anyone familiar with projecting and unprojecting back and forth between a 2D screen space and a virtual 3D world space. For example, gluUnProject() can be used to obtain the MousePointer3DPos in step B, and gluProject() can be used to obtain the MappedMousePointer2DPos in step D. Another reason I stopped here is that steps A to D are only done once, at the beginning of the whole process of getting the object from B to I. Basically we can group steps A to D into a sub-process called PickAnObject(). It's much like any other object-picking solution, but we have two additional pieces of information obtained from steps C and D. Typically we can use PickAnObject() the following way (in pseudocode):

ProcessMouseDown(int x, int y)
{
    if we are in Select and Move Object mode
        if left mouse button is down
            PickAnObject(x, y);
}
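A sketch of what PickAnObject() might cache from steps A to D. To stay self-contained it assumes the easy case of a top-down orthographic camera mapping one world unit per pixel, so projecting (step D) and mapping onto the horizontal virtual plane (step C) are trivial; a perspective version would use gluUnProject() and gluProject() here instead. All names are illustrative:

```cpp
struct Vec2 { double x, y; };
struct Vec3 { double x, y, z; };

// The four pieces of data produced by steps A-D at pick time.
struct PickData {
    Vec2 mousePointer2DPos;       // A: where the user clicked
    Vec3 mousePointer3DPos;       // B: where the mouse ray hit the object
    Vec3 mappedMousePointer3DPos; // C: B dropped perpendicularly onto the plane
    Vec2 mappedMousePointer2DPos; // D: C projected back onto the screen
};

// hitOnObject would come from the usual ray/object intersection test;
// planeY is the height of the virtual plane.
PickData pickAnObject(Vec2 mouse, Vec3 hitOnObject, double planeY)
{
    PickData p;
    p.mousePointer2DPos = mouse;                                          // A
    p.mousePointer3DPos = hitOnObject;                                    // B
    p.mappedMousePointer3DPos = { hitOnObject.x, planeY, hitOnObject.z }; // C
    p.mappedMousePointer2DPos = { hitOnObject.x, hitOnObject.z };         // D (ortho, 1:1)
    return p;
}
```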

Well, so far nothing new, except C and D.

Forget about C for the moment; just look at it as something we have to do to get the data resulting at D. At D we have the MappedMousePointer2DPos. This point is used as the start point of our translation of the cube. As soon as we get the 2D mouse position on the screen as it moves (at E), we move D by the vector AE to get the end point of the translation (at F). Note in the diagram that the position and distance of E are arbitrary. The position and orientation of the camera/viewport are also arbitrary at the moment. This is probably one of the things I like about this implementation: it doesn't care about that information directly. All it needs to know is how and when to get the position and rotation as needed. It turns out the only time we need them is when we project and unproject, so we'll just use gluProject() and gluUnProject(). That keeps things simple. If you are a matrix-manipulation aficionado you can search for optimizations at the matrix level if you wish, but for the moment gluProject() and gluUnProject() will be satisfactory.

Now, on with the process.

E. get the 2D mouse pointer position when it moves. This is actually a repeat of step A (but obviously the mouse pointer has just moved to a new spot). Let's suppose the letters A to I in the diagram are geometric points for a moment. Then we can say the following: our aim is to translate B to I when the mouse pointer is moved from A to E. That is, translate the object by the vector BI. But we need the point I. Here's one way to get it:

The same as before, we map from the screen into the 3D space, but we will go backwards this time, to find the point I. This is described next.

F. derive the next MappedMousePointer2DPos position. This is just the point at D translated by the vector AE (i.e. the mouse pointer is moved from A to E). This is a very interesting point (is that a pun?), because it is the position on the 2D screen/viewport that G (in 3D space) would map to if it were mapped onto the 2D screen/viewport. If we look carefully we can see that G is mapped to F indirectly, via the relationship between C and D. The relationship between C and D defines the mapping of the viewport to the virtual plane. If we maintain this relationship as an invariant, we can use it to freely move the object with the mouse.

G. [note: this has been edited] Cast a ray from F on the viewing plane to hit the virtual plane in 3D space. This ray is cast the same way as the ray from C to D, just in the opposite direction. This is the new MappedMousePointer3DPos. (Note: in a viewport using an orthographic projection, the rays FG and CD will be parallel to each other and perpendicular to the viewing plane, simply because of the orthogonal nature of the projection. And with a little more thought, you may come to realise that for viewports using an orthographic projection we can accomplish a lot of our task without the virtual plane altogether, and exploit the characteristics of the orthographic projection instead. But then we may need to write more code to take the position and rotation of the viewport/camera into account more directly in our calculations [a trade-off to consider?].)

I. translate G by the vector CB. Then translate the object by the vector BI (could we also just translate the object by the vector CG?)
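Steps E to I can be sketched in C++, assuming a top-down orthographic camera that maps one world unit per pixel onto the horizontal virtual plane, so "casting a ray from F onto the plane" in step G reduces to a trivial screen-to-plane map. A, B, C and D are the values cached at pick time; E is the new mouse position. All names are illustrative:

```cpp
struct Vec2 { double x, y; };
struct Vec3 { double x, y, z; };

// step G: cast the screen point F back onto the virtual plane at height planeY
// (trivial here because of the 1:1 top-down orthographic assumption)
Vec3 screenToPlane(Vec2 f, double planeY)
{
    return { f.x, planeY, f.y };
}

// Returns I, the object's new position, given the pick-time data and E.
Vec3 dragTo(Vec2 A, Vec3 B, Vec3 C, Vec2 D, Vec2 E, double planeY)
{
    // F: D translated by the mouse delta AE
    Vec2 F = { D.x + (E.x - A.x), D.y + (E.y - A.y) };
    // G: F cast onto the virtual plane (the new MappedMousePointer3DPos)
    Vec3 G = screenToPlane(F, planeY);
    // I: G translated by the vector CB cached at pick time
    return { G.x + (B.x - C.x), G.y + (B.y - C.y), G.z + (B.z - C.z) };
}
```

Moving the object to I is then the same as translating it by the vector BI.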

Let us know if this has helped, ebody...
(You might have found your own better solution by now?)

PS: I think treating the letters A to I as both process steps and geometric points might confuse people? (The explanation seems to be screaming out for a demo implementation too.)

[Edited by - steven katic on September 14, 2008 1:48:32 AM]

#12 steven katic   Members   -  Reputation: 274


Posted 17 September 2008 - 08:14 AM

Here is an implementation example, in C# XNA (unfortunately not OpenGL), that someone has done at:

http://www.ziggyware.com/readarticle.php?article_id=189

Which is perhaps a simpler method?:
"For plane translations we do not need to project anything from 3D to 2D before calculating differences. We have a convenient ray/plane intersection method at our disposal that can calculate the intersection points of mouse rays from the mouse start and end positions onto the plane on which we want to translate. The amount of translation is simply the difference between the first mouse intersection with the plane and the second mouse intersection with the plane, as these intersection points already exist in 3D. Adding this difference to the existing translation component results in the correct amount of translation for the corresponding mouse input." from the link above.

Or perhaps not simpler (if I resort to splitting hairs about minimising 2D/3D projection use): the XNA example implementation seems to continually project from 2D to 3D twice during the translation operation (for the start and end points), whereas in my example you would project twice at the start of the translation operation (from 2D to 3D once, and from 3D to 2D once), then continually project only once (for the end point) from 2D to 3D during the rest of the operation.
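The quoted method can be sketched with a plain ray/plane intersection; the translation is just the difference between the two intersection points. This is a self-contained sketch, not the XNA article's actual code; all names are illustrative:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Ray/plane intersection: the plane passes through planePoint with normal n;
// the ray starts at o with direction d. Returns false if the ray is parallel
// to the plane or the hit lies behind the ray origin.
bool intersectRayPlane(const Vec3& o, const Vec3& d,
                       const Vec3& planePoint, const Vec3& n, Vec3* hit)
{
    double denom = dot(n, d);
    if (std::fabs(denom) < 1e-12) return false;
    Vec3 toPlane = { planePoint.x - o.x, planePoint.y - o.y, planePoint.z - o.z };
    double t = dot(n, toPlane) / denom;
    if (t < 0.0) return false;
    *hit = { o.x + t * d.x, o.y + t * d.y, o.z + t * d.z };
    return true;
}

// The quoted method: intersect the start and end mouse rays with the
// constraint plane; the difference of the hits is the translation amount.
Vec3 planeTranslation(const Vec3& camPos,
                      const Vec3& startDir, const Vec3& endDir,
                      const Vec3& planePoint, const Vec3& planeNormal)
{
    Vec3 a{}, b{};
    intersectRayPlane(camPos, startDir, planePoint, planeNormal, &a);
    intersectRayPlane(camPos, endDir, planePoint, planeNormal, &b);
    return { b.x - a.x, b.y - a.y, b.z - a.z };
}
```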

[Edited by - steven katic on September 17, 2008 6:14:52 PM]



