Multiple moving objects

I am trying to write a basic game and am looking for some advice. I am working with OpenGL for Android. I'm sure I am going about this all backwards, but I am just learning as I go. So far I just have a basic 3D airplane and a world to fly around in. Basically my airplane always stays at 0,0,0, and I have created rotation and translation matrices that move my landscape/sky around.

Now I want to add some rockets to my plane (because blowing stuff up is more fun). So I'm wondering what would be the best way to do this. My rockets will be attached to the plane, but when fired should travel straight away from the plane. (I want to keep this simple, so heat seekers and smart bombs are probably going to be totally out of my skill level).

I have tried a couple of ideas, with limited success. One is to keep the rocket with the plane, basically never moving it until fired. The main problem I am having with this approach is that once the rocket is fired, if the plane moves, so does the rocket. The other way is to put the rocket in the world and make it move around as if it were attached to the plane. I have really struggled with this. I can't even get the rocket to move with the plane, let alone worry about what to do when it is fired. I tried creating an inverse of the matrix which moves the world. I don't know if my math was wrong, or if this is even the right idea.

Any help or suggestions would be appreciated. Just thought I would ask what the best / most common solution is before I keep struggling with something that won't work. Maybe there is another way I haven't considered, such as making the world stay still and moving my airplane and camera with gluLookAt. Thanks in advance.
A "local space" describes a co-ordinate system within which co-ordinates are given. E.g. a model's mesh commonly uses a local space (the "model space") in which the positions of its vertices are defined. A local-to-global transformation is then used to place each vertex into the space called the "world". This looks like
p' := M * p
when using column vectors, where p denotes the model local vertex position, M the local-to-global transformation matrix, and p' the vertex position in global space.

The global space, or world space, is just a convention, usually the first space that is common to all objects including the camera. Hence it is no problem at all to nest further local spaces inside it. Doing so, you can relate the rocket to the plane and the plane to the world. For a vertex it would look like
p' := M[sub]A[/sub] * M[sub]R[/sub] * p
so that there is a local-to-parent transformation M[sub]R[/sub] for the rocket, and a local-to-global transformation M[sub]A[/sub] for the airplane. The local-to-global transformation for the rocket is then the product M[sub]A[/sub] * M[sub]R[/sub].

At the moment of firing the rocket, compute the product M[sub]A[/sub] * M[sub]R[/sub] one last time and use it as the initial local-to-global transformation of the rocket, detaching it from the airplane.
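
For illustration only, a minimal sketch of that last step with Android's android.opengl.Matrix helper class (the array names here are just placeholders for however you store those matrices):

float[] planeToWorld  = new float[16]; // M_A: airplane local -> world
float[] rocketToPlane = new float[16]; // M_R: rocket local -> airplane local
float[] rocketToWorld = new float[16]; // will receive M_A * M_R

// ... planeToWorld and rocketToPlane are kept up to date while the rocket is attached ...

// at the moment of firing: freeze the rocket's current placement in the world
android.opengl.Matrix.multiplyMM(rocketToWorld, 0, planeToWorld, 0, rocketToPlane, 0);
// from now on animate rocketToWorld on its own; the airplane no longer drags the rocket along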
OK, I think I understand this correctly. Basically I need to create a transform matrix for my rocket, M[sub]R[/sub]. Since my airplane is always at 0,0,0, M[sub]A[/sub] is essentially the identity matrix, and I am really only applying a transform to the rocket and not the airplane. My simplified code is like this:

gl.glLoadIdentity();

gl.glPushMatrix();
F22.draw(mGl);                        // Draw the airplane

gl.glMultMatrixf(r1newRotMatrix, 0);  // r1newRotMatrix is the transform matrix for rocket #1
gl.glTranslatef(r1x, r1y, r1z);       // Positions the rocket onto the wing
Rocket1.draw(mGl);                    // Draw the rocket
gl.glPopMatrix();

gl.glPushMatrix();
gl.glMultMatrixf(newRotMatrix, 0);    // This transform matrix moves the landscape/sky

// Ground
gl.glPushMatrix();
gl.glTranslatef(0, -15000, 0);
gl.glScalef(100f, 150f, 100f);
if (Ground1 != null)
    Ground1.draw(mGl);                // Draw the ground
gl.glPopMatrix();

// Sky
gl.glPushMatrix();
gl.glTranslatef(0, -15000, 0);
gl.glScalef(100f, 100f, 100f);
if (Sky1 != null)
    Sky1.draw(mGl);                   // Draw the sky
gl.glPopMatrix();

gl.glPopMatrix();


If this is correct, then I just need to work on the actual transform matrix for the rocket. As I currently have it, it still moves incorrectly compared to the world. When the rocket is not fired, its transform matrix is the identity matrix. When fired, it translates forward and rotates based on the control of the airplane. Then, after it has gone its max distance, I reset the matrix back to the identity, ready to fire from the plane again. It just doesn't move correctly as I have it now.

Please let me know if I misunderstood something. For now I am going to just work on improving the rockets transform matrix. Thanks for your help!
GL uses column vectors. Whenever a transformation is multiplied with the top of the matrix stack, it is multiplied on the right side. Hence, the sequence


gl.glLoadIdentity();
gl.glMultMatrixf(r1newRotMatrix, 0);
gl.glTranslatef(r1x, r1y, r1z);

generates the matrix I * R * T on top of the matrix stack. This is legal but uncommon, because it rotates the object after it has been translated! You probably first want the object to be rotated into its orientation, and the rotated object to be translated into its final position. In other words


gl.glLoadIdentity();
gl.glTranslatef(r1x, r1y, r1z);
gl.glMultMatrixf(r1newRotMatrix, 0);

is probably desired. Well, if at every computation either R or T is guaranteed to be the identity matrix, then the chosen order is okay. But in that case r1newRotMatrix itself has to be composed as shown above as soon as the rocket is released.
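
If you go that route, a sketch of composing such a combined matrix (translation times rotation, T * R) on the CPU with android.opengl.Matrix might look like the following; the names are made up, and r1newRotMatrix is assumed to hold only the rocket's orientation:

float[] t = new float[16];
float[] rocketMatrix = new float[16];

android.opengl.Matrix.setIdentityM(t, 0);
android.opengl.Matrix.translateM(t, 0, r1x, r1y, r1z);                       // T: the rocket's position
android.opengl.Matrix.multiplyMM(rocketMatrix, 0, t, 0, r1newRotMatrix, 0);  // T * R

gl.glPushMatrix();
gl.glMultMatrixf(rocketMatrix, 0);   // rotate first, then translate
Rocket1.draw(mGl);
gl.glPopMatrix();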

In general, the transformation pipeline for the rocket is
V * M[sub]A[/sub] * M[sub]R[/sub]
where V denotes the view matrix, M[sub]A[/sub] the local-to-global matrix of the airplane, and M[sub]R[/sub] the local-to-parent matrix of the rocket. Herein M[sub]R[/sub] is probably composed from a rotation and translation
M[sub]R[/sub] := T[sub]R[/sub] * R[sub]R[/sub]
to attach the rocket to the airplane's wing (see the section above). Your camera is coincident with the airplane, i.e. the position and orientation of the camera in the world are exactly those of the airplane:
C == M[sub]A[/sub]
With the view matrix being the inverse of the camera matrix
V = C[sup]-1[/sup]
we finally get
C[sup]-1[/sup] * M[sub]A[/sub] * M[sub]R[/sub] = M[sub]A[/sub][sup]-1[/sup] * M[sub]A[/sub] * M[sub]R[/sub] = M[sub]R[/sub]
for the rocket.

At the moment of detachment, the current local-to-world matrix of the rocket is
M[sub]R[/sub]'[sub]0[/sub] := M[sub]A[/sub] * M[sub]R[/sub]
Then for each frame, pick the forward vector of this matrix to be the direction of motion of the rocket, scale it by the product of speed and the time elapsed since the last frame, and accumulate it to the current position:
M[sub]R[/sub]'[sub]i[/sub] = T[sub]f[/sub](v, dt) * M[sub]R[/sub]'[sub]i-1[/sub] , i > 0
To compute its position in view space, you have to apply the view transformation:
M[sub]A[/sub][sup]-1[/sup] * M[sub]R[/sub]'[sub]i[/sub]
But herein M[sub]A[/sub][sup]-1[/sup] also changes from frame to frame without being compensated, because the object of interest is no longer the airplane but the unattached rocket. In other words, IMHO letting "the world move instead of the camera" is a disservice.
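
As a rough sketch of that per-frame update (column-major float[16] matrices as OpenGL ES uses them; rocketSpeed and dt are placeholders, and whether "forward" is the positive or negative local z axis depends on how the rocket model was built):

// rocketToWorld holds the frozen M_A * M_R from the moment of firing
float fx = -rocketToWorld[8];    // the z-axis column (indices 8..10) is the rocket's
float fy = -rocketToWorld[9];    // local z axis expressed in world space
float fz = -rocketToWorld[10];

float step = rocketSpeed * dt;   // speed * time elapsed since the last frame
rocketToWorld[12] += fx * step;  // adding onto the translation column is the same
rocketToWorld[13] += fy * step;  // as multiplying T_f(v, dt) on the left
rocketToWorld[14] += fz * step;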


[quote name='haegarr']
In other words, IMHO letting "the world move instead of the camera" is a disservice.
[/quote]


I will read over the rest more carefully, but as I understand this statement, I should probably use a different approach. Very simply, as it stands my airplane doesn't move around the world. My world is moving around the airplane. If I were to reverse this process, my world would be fixed and the airplane actually moving through it. I believe this would simplify things for me greatly. The rocket then would follow the same transforms as the airplane. At the moment of release it could just use the current forward vector of the airplane and continue on its own. Other things like collision detection and enemy/target placement would also be simplified, at least as best as I can visualize it without trying.

Am I wrong, or misunderstanding again? Would you do this differently, as I described? Sorry to bother you but I would rather start over with a better approach than continue fighting this.

[quote name='JM33']
If I were to reverse this process, my world would be fixed and the airplane actually moving through it. [...] Am I wrong, or misunderstanding again? Would you do this differently, as I described?
[/quote]
Mathematically there is no difference. E.g. collision detection just requires a common space for all participating objects, no matter what space that is. But IMHO the difference lies in the application and in simplicity of understanding. Instead of singling out one object (the plane), one would handle all objects the same way.

The advantage is that it then becomes easier to move to a "soup of objects". At the moment your code knows and handles the airplane and the rocket explicitly. This explicitness will become cumbersome as the number of objects grows. You will then deal with a collection of objects that participate in the scene and hence require a common reference system. This is usually the "world", but it can of course also be the view space. But what happens if you want to support a second view, e.g. a rear view? Or what if you want to attach the camera to the rocket, e.g. to provide a guided rocket? Or what if you want to allow a look at the airplane from outside? All of this is simple when the common space is the world, but it is somewhat strange (although nevertheless possible) if the common space is a particular view space.


Just my 2 cents.
Thanks haegarr. I think that, if nothing else, it is easier for me to conceptualize a world full of objects and the plane flying through it. Even though it should be possible, everything really feels backwards as I am set up now. I don't think it will be much trouble to turn things around this early in my project. Like you said, simplicity of understanding will be the main reason for me.

I really appreciate the time you took to explain the details for me. Thank you very much!
I am really struggling with this still. First of all, I am not quite sure I understand the way I would make my plane fly through the scene and follow it with the camera. If the camera moves, and technically OpenGL has no camera, then I still have to move the world around inversely to how the plane actually moves. The way I was attempting this originally accomplishes this, though maybe not in the best way. Maybe I wasn't saying it correctly, but when I stated that the world moves around the plane, I meant that the world is moving (inversely) to give the effect of the camera moving through the world. (For what it's worth, I also have a third-person camera setup, and my plane is not actually at 0,0,0 but at 0,-4,-40. I get the same problems if I move it to 0,0,0 though.)

Nevertheless I tried a couple of ways to make this work differently. One was to try to move the plane with the same transform matrix I was using to move the world. I started by adding a gluLookAt call at the beginning of the code so the camera could follow the plane. The main problem I came across was that I ran into a gimbal lock issue. The plane would only rotate on the world's X and Z axes. (My controls are only set up to pitch and roll the plane.) I also realized that the gluLookAt call isn't really a camera, it is just an (inverted) transform matrix, so again, I was still moving the landscape around to simulate the camera.

In my next approach I just accepted the fact that the camera (inverse landscape movement) has to be there, or else everything would just stay still. So I made an inverse matrix for the original transform matrix that I used to move my camera. I tested the matrix under the assumption that it should perform the exact opposite transformations of my original transform matrix. Therefore it should appear that the plane flies backwards and the controls are inverted (pitch up is now pitch down, etc.). The matrix worked exactly that way. Then I started my drawing with the original transformation matrix (to set my camera) and drew the landscape. Then I used glMultMatrixf with the inverse matrix I created and drew the airplane. This works at first. The plane is moving along in front of the camera, but when I roll, the plane seems to roll around the world's z-axis, not its local z-axis. And if I pitch up/down the plane disappears completely from the scene. Even if I make a loop it never comes back.

If you are still with me here, I have two questions. First, am I on the right track with these attempts? Second, why don't the transform matrices work when applied to the object when they work fine to control the camera? My best guess is that I need to create the matrix differently for the objects than I do for the camera, but I am not sure where to begin. Any help would be greatly appreciated. If you would like me to post some code to help explain what I am doing I will gladly do so. Thanks!

[quote name='JM33']
I am not quite sure I understand the way I would make my plane fly through the scene and follow it with the camera. If the camera moves, and technically OpenGL has no camera, then I still have to move the world around inversely to how the plane actually moves. [...] When I stated that the world moves around the plane, I meant that the world is moving (inversely) to give the effect of the camera moving through the world.
[/quote]

To clarify this world vs object moving thing ...

The transformation pipeline that affects a vertex given in model local co-ordinates goes from model local space to world space to view space. Using column vectors, this looks like
V * M
In OpenGL this is named the MODELVIEW matrix. Now, if you apply some steering to the model, you do it so that its current placement in the world is changed, i.e. you compute a new model-to-world matrix "on top of" the old one. Using column vectors, this means that you multiply the steering on the left of the original model matrix:
M' := S * M
Replacing the original M in the MODELVIEW with this updated one then yields in
V * ( S * M )
Here the parentheses are for clarification only; mathematically they play no role because matrix multiplication is associative. Hence it is also possible to compute
( V * S ) * M =: V' * M
instead. As you can see, it makes no difference whether the steering is applied to the model or to the view. Really? Notice that in the 1st case it is applied on the left of M, but in the 2nd case it is applied on the right of V. When you look at the VIEW matrix as the inverse of the camera, then you can also understand S as steering the camera, because
C' := V'[sup]-1[/sup] = ( V * S )[sup]-1[/sup] = S[sup]-1[/sup] * V[sup]-1[/sup]
but obviously you need to use the inverse steering then.

Although mathematically all 3 cases are equivalent, they do differ in efficiency: The question is how many matrix multiplications are computed when more than a single model is placed in the world. Then it appears that steering the airplane is generally the most efficient way.
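
In code, steering the airplane then amounts to something like the following sketch, again with android.opengl.Matrix ("steering" stands for whatever matrix your controls produce for this frame):

// planeToWorld is the airplane's current model-to-world matrix M
float[] updated = new float[16];
android.opengl.Matrix.multiplyMM(updated, 0, steering, 0, planeToWorld, 0); // M' := S * M
System.arraycopy(updated, 0, planeToWorld, 0, 16);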


[quote name='JM33']
One was to try to move the plane with the same transform matrix I was using to move the world. I started by adding a gluLookAt call at the beginning of the code so the camera could follow the plane. The main problem I came across was that I ran into a gimbal lock issue. [...] I also realized that the gluLookAt call isn't really a camera, it is just an (inverted) transform matrix, so again, I was still moving the landscape around to simulate the camera.
[/quote]


The camera in a 3D application is nothing more than the specification of a co-ordinate frame, i.e. an origin (here often called the "eye point") and an orientation. gluLookAt gets the eye point directly, and the orientation is computed from the forward vector (pointing from the eye point to the target point) and the up vector (because the forward vector constrains only 2 of the 3 degrees of freedom an orientation has in 3D space). As such it is a local space placed in the global space (the so-called "world"), just like any model space.

When you render a scene, the placements of all models w.r.t. the world are determined (they are either fixed for static geometry, or computed on the fly by the animation system). You can do the same with the camera. That is a neat thing: to think of the camera as an object in the world. However, the rendered image is meant to be seen through the camera! That is, when projection is applied the scene must be given in view space, not in world space. So far the meshes of the models have been transformed from model local space into world space. Now they should be transformed further from world space into camera local space (hence somewhat "the other way"), and the camera local space is also known as view space. That is the reason to compute
V * M = C[sup]-1[/sup] * M
for every model: transform its mesh from model local space to world space to camera local space. So the concept of a camera in such a 3D application is only approximately comparable to a real camera.

Now, you have the situation where you don't want a free camera at all, but a camera that is bound to a model. In other words, you don't want the camera to be controlled directly, but to automatically follow the model, which itself is under control. The keyword for such a behaviour is forward kinematics. The set-up for forward kinematics is usually called parenting. In fact, it means that the (in this case) camera-to-world transformation is composed of the camera to airplane (model local) transformation and the airplane (model local) to world transformation. As a formula
C := M[sub]A[/sub] * L[sub]C[/sub]
where M[sub]A[/sub] denotes the airplane's model local to global transformation and L[sub]C[/sub] how the camera is related to the airplane's model local space. Assume that the camera should be oriented exactly like the airplane (i.e. there is no rotation between them) but located 10 length units behind and 3 length units above the airplane's origin. Hence L[sub]C[/sub] is just a translation matrix like
L[sub]C[/sub] := T( 0, 3, -10 )

If you want the camera to look a bit down, then compose L[sub]C[/sub] from the translation and a little pitch:
L[sub]C[/sub] := T( 0, 3, -10 ) * R[sub]x[/sub]( angle )

Etc, etc.
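
Put into code, for example with android.opengl.Matrix (just a sketch; planeToWorld stands for M[sub]A[/sub], and pitchAngle is a placeholder):

float[] cameraToPlane = new float[16]; // L_C: camera relative to the airplane
float[] cameraToWorld = new float[16]; // C = M_A * L_C
float[] viewMatrix    = new float[16]; // V = C^-1

android.opengl.Matrix.setIdentityM(cameraToPlane, 0);
android.opengl.Matrix.translateM(cameraToPlane, 0, 0f, 3f, -10f);           // T( 0, 3, -10 )
// android.opengl.Matrix.rotateM(cameraToPlane, 0, pitchAngle, 1f, 0f, 0f); // optionally * R_x( angle )

android.opengl.Matrix.multiplyMM(cameraToWorld, 0, planeToWorld, 0, cameraToPlane, 0);
android.opengl.Matrix.invertM(viewMatrix, 0, cameraToWorld, 0);

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadMatrixf(viewMatrix, 0); // start the MODELVIEW matrix with V, then multiply each model's M on top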

That said, please notice that I suggest dropping any and all usage of glTranslate, glRotate, glScale, and gluLookAt. They are deprecated anyway, and they also tempt you to rely on them instead of a more flexible solution of your own. Although it looks like OpenGL unburdens you from matrix math, as you can see from this entire thread, "real life" applications have requirements where your own matrix/vector library is much more capable.


[quote name='JM33']
Then I started my drawing with the original transformation matrix (to set my camera) and drew the landscape. Then I used glMultMatrixf with the inverse matrix I created and drew the airplane. This works at first. The plane is moving along in front of the camera, but when I roll, the plane seems to roll around the world's z-axis, not its local z-axis. And if I pitch up/down the plane disappears completely from the scene.
[/quote]

The thing with the local and global axes is as follows: a transformation is always applied in global space!

Assume a sequence of 2 rotations, one computed as x and the other as y rotation:
R := R[sub]y[/sub] * R[sub]x[/sub]
As you know, matrix multiplication isn't commutative but it is associative. If you apply the above transformation to a vertex position v, then it is IMHO best for the understanding to use parentheses as in
R * v = R[sub]y[/sub] * ( R[sub]x[/sub] * v )
In words: the vertex position is rotated around the x axis. At this moment the model local x axis is coincident with the global x axis. Then the already rotated vertex position is rotated again, this time around the y axis. This y axis is the global y axis, and the global y axis is presumably no longer coincident with the local y axis (due to the previous rotation by R[sub]x[/sub]).

Now you want to rotate around local axes for both the x and the y rotation. No problem for the inner rotation, as global and local axes are coincident. But the problem is in the outer rotation. You know that transformations are always applied w.r.t. global axes. So what you need to do is to make the local y axis temporarily coincident with the global y axis at the moment when R[sub]y[/sub] is applied. (Please notice that making something temporarily coincident is a very important concept! It is e.g. also used when rotating around a center point that is not the global origin, or when scaling along axes that are not the global ones.)

At the moment R[sub]y[/sub] is applied, the local y axis has changed due to the influence of the previously applied R[sub]x[/sub], hasn't it? So, to make the local and global y axes coincident, you have to undo (i.e. apply the inverse of) R[sub]x[/sub] after the application of R[sub]x[/sub] but before the application of R[sub]y[/sub]. Then, after the application of R[sub]y[/sub], you have to redo the former undo, i.e. you have to multiply with R[sub]x[/sub] again. In summary
R[sub]x[/sub] * R[sub]y[/sub] * R[sub]x[/sub][sup]-1[/sup] * R[sub]x[/sub] = R[sub]x[/sub] * R[sub]y[/sub]
Yeah, the result is that the order of rotations is simply reversed! Although you still rotate around global axes (as said, each transformation by itself is applied in global co-ordinates), the effect is as if you rotated around local axes.
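
In code this means that rotations around the model's own axes are multiplied onto the right of its current matrix. android.opengl.Matrix.rotateM happens to do exactly that (it computes M = M * R), so as a sketch (pitchDegrees and rollDegrees being whatever your controls deliver):

// planeToWorld is the airplane's current local-to-world matrix
android.opengl.Matrix.rotateM(planeToWorld, 0, pitchDegrees, 1f, 0f, 0f); // pitch around the local x axis
android.opengl.Matrix.rotateM(planeToWorld, 0, rollDegrees,  0f, 0f, 1f); // roll around the local z axis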


[quote name='JM33']
First, am I on the right track with these attempts? Second, why don't the transform matrices work when applied to the object when they work fine to control the camera?
[/quote]

Please see my comment on the first quoted section above for how S is applied either to the model, to the view, or to the camera.


In summary: switch over to the use of a matrix/vector library (it need not be a full-fledged one, just one dedicated to 3D graphics). Bind the camera to the airplane by parenting. Apply the steering to the current transformation of the airplane (on the left side). Update the camera transformation via forward kinematics, and invert it to get the view matrix. That should do the job.
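
As a very rough per-frame sketch of that flow (updateAirplaneSteering, SceneObject, objects, and modelToWorld are made-up names for illustration; the other matrices are as in the earlier sketches):

updateAirplaneSteering(planeToWorld, dt);  // steering applied on the left of M_A

android.opengl.Matrix.multiplyMM(cameraToWorld, 0, planeToWorld, 0, cameraToPlane, 0); // C = M_A * L_C
android.opengl.Matrix.invertM(viewMatrix, 0, cameraToWorld, 0);                        // V = C^-1

gl.glMatrixMode(GL10.GL_MODELVIEW);
for (SceneObject obj : objects) {           // airplane, rockets, ground, sky, ...
    gl.glLoadMatrixf(viewMatrix, 0);        // start from V
    gl.glMultMatrixf(obj.modelToWorld, 0);  // then V * M for this object
    obj.draw(mGl);
}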
Thanks again haegarr. That is a lot to digest, but I think I get most of it. Implementing it will take me a while, but I am patient and diligent. I took a look around for some matrix/vector libraries. It seems there is not a lot out there for Android/Java that is dedicated to 3D graphics. I did find libgdx, which is to my knowledge more of a full engine. It definitely has some good matrix, vector, and quaternion classes in it. I like that it is cross-platform, so I can do some testing strictly on Windows. I think I'll get my feet wet with that and come back to this once I'm a bit more comfortable with it. Thank you very much again for your insight.

