# OpenGL Issues with 3rd person space ship flight algorithm


## Recommended Posts

Hey!

After a few days of trying to figure it out on my own, I can't work out what is wrong or what I'm not doing right in trying to make a 3rd person space flight demo.

At the moment, my ship moves properly if I turn left or right or go up and down, but when I start turning around and mixing axes, things get messed up at some point. The ship just starts moving backwards or drifts away in some direction.

I'm using OpenGL, so I'm not sure whether this belongs in the OpenGL forum, but I thought this was more theory than an actual OpenGL issue.

The way I compute my movement is the following:

I start with the ship at the origin and give it a direction vector of [0,0,-1]. My world is a simple sphere with a radius of about 25000, and I intend to keep everything inside it.

By pressing the arrow keys, I change the heading angle and the pitch angle of the ship by a small amount. Those angles return to 0 when the keys are released.

Every frame, I rotate the rest of the world opposite those angles, at a ratio of 1/30 of their value. So if the ship is tilted 30 degrees upwards on that frame, I rotate the world by 1 degree in the opposite direction to give it a slower pace. If I keep those angles 1 to 1, the world spins so fast it's impractical.

To compute proper positioning, I take my initial vector [0,0,-1] and put it inside a 4x4 homogeneous matrix. I obtain the current transformation from the modelview matrix and multiply them together to obtain a new unit direction vector for my ship. With this unit direction vector, I update the position by adding each successive new direction vector I obtain. So far, it seems to keep track of the position very well, but my direction is often reversed and I don't know what I must do.

Do you guys have a suggestion of what could be going wrong? I can always post some code if necessary. I thought I would ask first and see if someone has a good lead on what I should look into and try to figure out on my own, in order to learn from this.

I'm probably doing lots of things wrong so don't hesitate to tell me what I should change.

Thanks!

##### Share on other sites
I think a good first step would be to define precisely what kind of motion control scheme you want.

It sounds like only pitch and yaw can be modified (that is, no roll). Are these rotations intended to be relative, or is there a 'fixed' up vector? In other words, do you want a 6DOF control scheme, or something more like a first-person character control scheme?

##### Share on other sites
From what I understand of 6DOF vs FPS style I would tend to say more of a 6 degrees of freedom approach.

At the moment, I have yaw and pitch implemented. Rolling is there as well but is meant to act differently: I make the ship roll over no more than 90 degrees on each side, so the world rolls on either side but follows the angle of the ship itself. When the ship's roll angle returns to 0, so does the rolling rotation applied to the world.

I'm not entirely certain what you mean by relative rotations or a fixed up vector, but I'll try to answer as best I can. Logically, I would believe 6DOF to be relative and FPS to be a fixed up vector. Since the ship can't really move backwards and can only go forward, and everything around the ship moves according to its movement, I think I'm looking for relative rotations and translations. A fixed-vector approach would make sense for an FPS, but I don't know how I would apply it to my ship.

Each frame, I start by assuming the ship is at [0,0,0] facing the direction [0,0,-1]. Based on the world rotation applied to the scene that frame, I multiply that matrix state with my [0,0,-1] initial direction vector to get the proper translation vector that I use to move the world around the ship. The ship is at [0,0,0] the entire time.

I have two sets of rotations that are independent from each other but used together: the ship's rotation angles and the world's rotation angles. The ship angles are only used to animate the ship and indicate how much the world should rotate around it. If I stop animating the ship and disable the rotations, I have a fixed ship, seen from the back and a bit from above, that doesn't move on its own, but the world around it rotates. From what I understand, the ship must always face the direction it is going, and this is where it bugs out depending on the coordinates.

If I change the pitch angle either up or down, I end up with the world upside down but still going the right way. As long as I don't turn left or right by more than 90 degrees or so, it's still all fine. The same goes for the yaw angle: as long as the ship is more or less flat on the X/Z plane, turning any way I want works fine. If I start going up or down to around 90 degrees, the direction of the ship gets reversed, as do the yaw rotations.

*Edit*
One important piece of information: the camera is also always looking at [0,0,0]. I tried to move the camera weeks ago, and I'm not comfortable with how it works or how to move the camera in the scene. Some information I found, and discussing it with a friend, made me decide that moving the world around might be simpler for me than moving the camera and the ship through the world. Perhaps this was a mistake?
*Edit*

I don't know what else to add so far. I will probably be able to give more pertinent details later as the discussion goes on.

Thanks!

##### Share on other sites
There's a lot of different things that can go wrong with something like this. I didn't completely follow all the details of the movement scheme you described, but here's a few general tips that might be useful:

- Generally speaking, you'll want to do all the math yourself (using your own code or a third-party math library) rather than using OpenGL convenience functions or accessing the modelview matrix directly. This will give you more control and will provide cleaner interaction with the OpenGL API.

- For 6DOF motion, the general approach is to store the object's orientation as a matrix or quaternion, apply local, incremental rotations as necessary each update, normalize the orientation to prevent numerical drift, and then build a transform matrix from the orientation and position for uploading to OpenGL (or whatever API you're using). The direction vectors for the object can then be extracted directly from the matrix for the purpose of applying linear motion.

- Unless you have a specific reason for doing so, I wouldn't think of (or implement) the ship's motion in terms of applying opposite rotations to the world. (Again, unless you're going for a specific effect or have a specific need to do it that way.) Just let the world be static and move the ship using normal methods; once you've built the ship's transform matrix, you can then invert it to yield the view transform.

##### Share on other sites

> There's a lot of different things that can go wrong with something like this. I didn't completely follow all the details of the movement scheme you described, but here's a few general tips that might be useful:
>
> - Generally speaking, you'll want to do all the math yourself (using your own code or a third-party math library) rather than using OpenGL convenience functions or accessing the modelview matrix directly. This will give you more control and will provide cleaner interaction with the OpenGL API.
>
> - For 6DOF motion, the general approach is to store the object's orientation as a matrix or quaternion, apply local, incremental rotations as necessary each update, normalize the orientation to prevent numerical drift, and then build a transform matrix from the orientation and position for uploading to OpenGL (or whatever API you're using). The direction vectors for the object can then be extracted directly from the matrix for the purpose of applying linear motion.
>
> - Unless you have a specific reason for doing so, I wouldn't think of (or implement) the ship's motion in terms of applying opposite rotations to the world. (Again, unless you're going for a specific effect or have a specific need to do it that way.) Just let the world be static and move the ship using normal methods; once you've built the ship's transform matrix, you can then invert it to yield the view transform.

I'm sure what I wrote isn't exactly easy to follow. I already have a bit of my own matrix operations going here and there. Using OpenGL's modelview matrix was an idea I got from loading an identity matrix into it, doing operations on it to obtain the transformations I wanted, and applying them to my direction vector to get another matrix from which I could extract the properly transformed vector.

I agree with you that this may not be a good idea. It was mostly to save time, as I noticed it always gave me the results I expected, so I decided to use it. I'll try to apply my own matrix transformations; I'm just hoping I won't screw up the functions. Using a third-party library would probably be more powerful, but as I'm trying to learn how it works from the ground up, I think going the painful way might be better.

I'm just not sure about normalization. I know what it is, but I'm not sure whether I need to normalize if the vector I provide my operations with is a unit vector to start with. Will that vector need normalizing after transformation, or will it remain a unit vector?

I will also attempt moving the camera and report back. There are other kinds of operations I'm not sure how to go about; I'll post more as I go along. I'm not entirely certain how I'll do it, but I'm going to use your advice as a base for what to do.

I'll post an update as I go on.

Thanks!

##### Share on other sites
Regarding the normalization I mentioned, the purpose of that is to correct for numerical error caused by multiple successive incremental rotations.

Due to the imprecision of floating-point representations, rotations in both quaternion and matrix form will generally drift over time, and at some point noticeable artifacts may appear (e.g. scaling with quaternions, or scaling and/or shear with matrices). Normalizing (in the general sense) the rotation is intended to correct this. With quaternions, this equates to a quaternion normalization, which is a simple and fairly efficient operation. With matrices it requires orthogonalization, which is a bit less simple and efficient, but is still relatively straightforward.

##### Share on other sites
I had the same issue with my space simulator. My camera would use the y axis as the "up" vector, effectively locking it to a first person camera. The trick is to use quaternions. It's been a year or so since I implemented it, but I believe I created quaternions for the yaw, pitch and roll angles, concatenated them together, and then converted the result to a matrix. The result is a camera similar to the old Star Wars X-Wing games. Using matrices causes issues with gimbal lock and having to use an up vector (I'm not sure if there's a solution to this, but I never found it). But yeah, look into that.

*Edit* I reread the title and noticed the "3rd person" part. My solution was for a first person camera. Still, I believe quaternions are your best bet.

##### Share on other sites

> I had the same issue with my space simulator. My camera would use the y axis as the "up" vector, effectively locking it to a first person camera. The trick is to use quaternions. It's been a year or so since I implemented it, but I believe I created quaternions for the yaw, pitch and roll angles, and then concatenated them together.

A couple comments on the above (just to prevent confusion).

First of all, whether you use quaternions or matrices makes no difference as far as gimbal lock or anything like that goes. Generally speaking, you can get the same results regardless of whether you use quaternions or matrices. Each representation has its own advantages and disadvantages, but their fundamental behavior with respect to rotations is equivalent. (Just to be clear, I'm not talking about memory requirements, efficiency of specific operations, stability, or any of the other areas where quaternions and matrices do indeed differ; rather, I'm talking about fundamental behaviors, such as whether or not gimbal lock can occur and how rotations can be constructed and manipulated.)

Second, creating separate yaw, pitch, and roll quaternions is the same as constructing an Euler-angle rotation using any other method; the use of quaternions here makes no difference and doesn't buy you anything in particular.

> Using matrices causes issues with gimbal lock and having to use an up vector (I'm not sure if there's a solution to this, but I never found it).
Gimbal lock and whether you have to use an 'up' vector have nothing to do with whether quaternions or matrices are used. (There's a lot of confusion about these topics already, so I think it's worth clarifying these points when they come up, which seems to be fairly often.)

##### Share on other sites
This kind of topic might be a good candidate for a sticky.

My opinion is simple on this: take those Euler angles behind the barn and put them out of their misery. Three angles can't store the information you need, which is not just three isolated summed-up angles, but the _order_ in which they were applied. Yes, you can easily get away with it in a scenario with limited motion (a first-person shooter), where there is always one "correct" order to apply them (usually y axis, x axis, z axis).

Just use matrices in the first place and forget all the trigonometry you apply to those angles (and all the convoluted tricks to somehow hammer Euler angles into working).
Matrices are neat and simple; they even tell you your ship's forward axis (AND right AND up AND position) in 4 easy-to-read vectors. You will eventually need a matrix library anyway, so either find one you like or quickly write one yourself (even my straightforward code was automatically compiled to make heavy use of SSE).

So next time, don't do `yaw += 2`; do `transformation = Matrix::rotationMatrixY(2) * transformation;`

It even saves you the annoying step of converting your angles back to a matrix (or rather, one of many different possible matrices, of which, in a 6DOF scenario, most likely none will be the right one).

About quaternions: I get the feeling it's especially those who never quite got the math who love to suggest them as a magic bullet for a problem they didn't fully understand. Jyk said pretty much all there is to say about that, and I doubt you will find them overly useful (or better performing) until you start adding skeletal animation or other features that require a lot of concatenations. After all, you eventually have to transform them back to matrices for your API, and the speed-up in calculations has to make up for that overhead.

##### Share on other sites
I have to admit, I am completely lost...

Moving the ship instead of the world also requires me to move the camera to keep the view the same. The only way I know how to control the camera view is through gluLookAt(), but so far all attempts have failed at getting where I want to be. I have a feeling I have to control the eye and center values manually, but I have no clue how to update them properly each frame. The only thing that comes to mind is using the eye as the position and providing a direction unit vector for the center. And since the camera is backed away from the ship, I also need to translate the position opposite of the center and then raise it a bit. This is extremely confusing.

I realized I can achieve a similar outcome by rotating and translating the projection matrix, so this is what I am attempting right now, without much success. I can get the camera to follow the ship properly by translating it first and rotating it by the opposite of the rotation I apply to the ship, and it works fine on a 2D plane. The ship only rotates at the origin at the moment and does not move.

The issue I have is that when I attempt to rotate in 3D, the axes get all scrambled. If I turn left 90 degrees and attempt to go up, instead of rotating in that direction, the view is tilted left or right. I understand why it happens, but I'm looking for a way to make the axes "follow" along with the ship.

Once I get that sorted out I will be able to try moving the ship in the space.

Also, I tried to create a matrix that I keep updating frame after frame, but it goes awfully wrong when I use rotation and translation together. Because the matrix contains both rotation and translation, when I add further translations and rotations it becomes screwed up beyond all recognition. Should I keep two separate matrices, update them each frame, and put them together only when it comes time to update the view?

I think I'm going to need some step by step guidance. I get too confused with everything.

##### Share on other sites
This probably won't answer all your questions, but I'll try to touch on a few points here.

> Moving the ship instead of the world also requires me to move the camera to keep the view the same. The only way I know how to control the camera view is through gluLookAt() but so far all attempts have failed at getting where I want to be. I have a feeling I have to control the eye and center values manually but I have no clue how to update them properly each frame. The only thing that comes to mind is using the eye as the position and providing a direction unit vector for the center. And since the camera is backed away from the ship, I also need to translate the position opposite of the center and then raise it a bit. This is extremely confusing.

A 'look at' function is just a convenience; there's no requirement that the view matrix has to be constructed that way. You certainly *can* use it (and it can be convenient to do so), but it's not a necessity.

The general idea behind a 'virtual' camera associated with an object in the scene is this: the object in question has a transform, and the inverse of that transform is the view transform.

A simple third-person camera (i.e. with no damping or other animated effects) can be implemented as follows:

1. Compute a transform that represents the camera object's rotation and position in the local space of the target object (e.g. 'behind, slightly above, and looking down').

2. Concatenate that transform with the target object's transform to yield the camera object's world transform.

3. Invert the transform to yield the view transform (which can then be uploaded directly to the graphics API).

Using a 'look at' function is basically a shortcut in that it takes care of some of these steps for you, but at the very least you'll most likely need to transform the local-space camera position by the target object's transform to yield the camera's world-space position (the 'eye' point).

> I realized I can achieve a similar outcome rotating and translating the projection view matrix so this is what I am attempting right now without much success.
I'm not sure what you mean by that, but typically you wouldn't apply rotations or translations to the projection matrix, and I wouldn't think about it in terms of rotating or translating the view matrix either. As mentioned earlier, just work with the world transforms of the objects in question, and then invert the camera object's matrix to yield the view matrix.

> Also, I tried to create a matrix that I keep updating frame after frame but it goes awfully wrong when I use rotation and translation. Because the matrix contains both rotation and translation, when I add further translations and rotations it becomes screwed up beyond all recognition. Should I keep two separate matrices and update them each frame and put them together only when comes the time to update the view?
Yes, typically you'd store the orientation and position separately, or use special-purpose functions that apply rotations to the transform matrix without modifying the translation.

The whole thing can be a little confusing, and it does require some basic understanding of 3-d math and linear algebra. One thing I'll mention is that you're trying to solve two problems at once (moving the ship, and having a camera follow it). One thing you might try is setting up a fixed camera (e.g. using gluLookAt()), and focusing on getting the ship controls working correctly. Then, once that's working, you can move on to implementing a follow-cam.

##### Share on other sites
You WANT translation and rotation in the same matrix, because that way, if your ship was modelled with z being forward, translating it by (0,0,1) will always move it forward according to its current orientation. It's what makes everything easy, instead of a huge mess where you have to constantly figure out how to rotate the ship around the right axis. "Up" or "y" will always be your ship's local "up". If you touch any trigonometry except for whatever function creates the rotation matrix, I'd blindly say "you're doing it wrong".

A simple example, with shipTransform being the ship's transformation matrix:

```cpp
shipTransform = translationMatrix(0,0,1) * shipTransform; // ship moves along its LOCAL z axis (or "forward")
shipTransform = shipTransform * translationMatrix(0,0,1); // ship moves along the GLOBAL z axis
```

You want your ship to start shooting and ask "which way do my bullets go?" and the answer is

```cpp
Vector3 bulletDirection(shipTransform[8], shipTransform[9], shipTransform[10]);
```

You ask "I need to know my ship's position in space for collisions" and the answer is

```cpp
Vector3 shipPosition(shipTransform[12], shipTransform[13], shipTransform[14]);
```

For convenience you can introduce aliases for the different matrix columns like right, up, forward, position.

For your camera, you have the trivial task of taking the ship's transformation, adding a translation of [0, alittleup, abitback], inverting the result, and setting it as the modelview matrix.

```cpp
cameraMatrix = translationMatrix(0, 2, -10) * shipTransformation;
```

Now before you say "but inverting a generic matrix is too much math"... you're dealing with a neat and simple kind of matrix, where inversion is just transposing the rotation part and adjusting the translation part (assuming a typical OpenGL matrix with elements 0-3 being "x", 4-7 "y", 8-11 "z" and 12-15 "origin").

```cpp
invertedMatrix = Matrix44(
    matrix[0], matrix[4], matrix[8],  0,
    matrix[1], matrix[5], matrix[9],  0,
    matrix[2], matrix[6], matrix[10], 0,
    -(matrix[0]*matrix[12] + matrix[1]*matrix[13] + matrix[2]*matrix[14]),
    -(matrix[4]*matrix[12] + matrix[5]*matrix[13] + matrix[6]*matrix[14]),
    -(matrix[8]*matrix[12] + matrix[9]*matrix[13] + matrix[10]*matrix[14]), 1);
```

But don't try to use glMultMatrix from there. When rendering objects you want modelview = object.transformation * modelview, while glMultMatrix would give you modelview = modelview * object.transformation. So keep track of your matrices, do the multiplication yourself, and keep using glLoadMatrix (OpenGL is deprecating everything left and right, so you should eventually handle it yourself anyway).

Understanding transformation matrices and local/global coordinate systems is what I consider the most important part of 3D graphics. There is no magic involved, they are literally a coordinate system with three axes and an origin. It will also avoid weird questions like "how do I need to rotate my billboard to make it face the camera" instead of "ok, "z" must be camera - position, "y" and "origin" is fixed and "x" is just a cross product". Seriously, it would be like asking "I have x = PI, but I want x = 5.325, what calculation do I need to turn x into 5.325".

Also, getting used to thinking in local coordinates instead of sticking to an "outside" perspective in world coordinates will let you laugh about silly ideas like "OpenGL applies transformations in reverse order". Seriously, who would deliberately want to write their code with "have to do it backwards" in mind? Object point of view: simple... world point of view: headache.

##### Share on other sites
I scrapped all the code when I restarted last night.

So far I've got the ship at the origin of the scene with the camera behind, slightly above, and looking down on the ship, using gluLookAt. As I see gluLookAt is generally not the way to go, and seems to have been deprecated and not recommended for quite a few years, I'll definitely need to look for a different way of working.

I don't really understand what inverting the matrix does in terms of actual transformation. If I need to apply a transformation to the camera and then invert that transformation to apply it to the object, is it the same thing as applying the opposite operations in the opposite order? Meaning, if I translate something by (10,10,10) and rotate 45 degrees about the Y axis, would inverting the matrix yield a rotation of -45 degrees about the Y axis and then a translation of (-10,-10,-10)? If that's not what it does, then I do not understand what results it's giving me. I would like to know, though.

jyk, I understand the first numbered point you made, but I'm not sure I understand the two others. I do have a feeling it comes down to something similar to what Trienco is saying, right?

Trienco, could you elaborate on the process you're going through in the three lines with the additions and multiplications? I looked up how OpenGL works with matrices today and figured out it's used transposed compared to the way we would write it in math. From there, I believe matrix[12], matrix[13], matrix[14] are the x,y,z obtained from the transformation, and that the others contain the rotation information.

I'm not using any trig anywhere, as I learned quickly that things get very messy. I'm trying to stick to matrices and learn how to work from there. I have to say it's quite new to me, but I'm slowly getting around to it. I can get my way around understanding math concepts, but I've never been good at understanding how to apply certain things and why. I did most of my linear algebra classes a few years ago but never put any of it into actual practice until recently, so I basically have to relearn it almost from scratch. I may not understand some concepts you might assume basic and trivial, so it might be necessary to dumb things down a bit so I can understand better.

Also, regarding jyk saying I'm trying to solve two problems at once: I didn't see it that way, but that's an interesting way to put it. I'll see how I can make it work that way. In my head, I thought it was simple to back the camera off a bit from the origin, looking down at the ship at the origin, and apply the rotation transformations to both the ship and the camera at the same time in order to make it follow the ship wherever it goes, but maybe the approach was wrong.

Thanks again for your help !

##### Share on other sites
I didn't see your updated post, Trienco. I'm not sure I understand local vs. global axes; I see that the only difference is in the order of the operations.

I also agree that the backwards operation thinking is giving me serious headaches as well. I don't find it user friendly at all.

##### Share on other sites

> I don't really understand what inverting the matrix does in terms of actual transformation. If I need to apply a transformation to the camera and then invert that transformation to apply it to the object, is it the same thing as applying the opposite operations in the opposite order? Meaning, if I translate something by (10,10,10) and rotate 45 degrees about the Y axis, would inverting the matrix yield a rotation of -45 degrees about the Y axis and then a translation of (-10,-10,-10)? If that's not what it does, then I do not understand what results it's giving me. I would like to know, though.

Yes, in this context inverting a matrix can be thought of as making the transform the 'opposite' of what it was originally. A typical 'model' transform consists of a rotation followed by a translation. To use your example, a model transform might consist of a rotation about the y axis of 45 degrees, followed by a translation of (10, 10, 10). The inverted transform would then consist of a translation of (-10, -10, -10) followed by a rotation about the y axis of -45 degrees (that is, each transformation is converted to its opposite, and the converted transforms are applied in the opposite order).

There are various ways an inverted transform can be realized. The most general way is to use a general matrix inversion function; this is more expensive than the alternatives, but that's unlikely to matter if you're only doing it once per frame. For 'rigid body' transforms, you can also use the shortcut Trienco mentioned. Or, you can build the inverse manually by combining the individual inverted transforms in the opposite order. As for the 'look at' function, it actually rolls two operations into a single function: first, it builds a transform that positions the object and orients it towards a target, and then it inverts that transform.

> jyk, I understand the first numbered point you made but I'm not sure I understand the two others, but I do have a feeling it comes to something similar to what Trienco is saying, right?
I'm not sure if Trienco and I are talking about the same thing (I'd have to read over the thread again carefully in order to determine that.)

> I looked up how OpenGL works with matrices today and figured out it's used transposed as opposed to the way we would use it in math.
It's not transposed.

> From there I believe matrix[12], matrix[13], matrix[14] are the x,y,z obtained from the transformation and that the others contain rotation information.
That's generally true. However, the whole 1-d indexing thing is an unfortunate side-effect of how matrices are often represented in memory (that is, as a 1-d array). The 1-d indexing really has no mathematical significance, and (IMO) tends to add to the already existing confusion regarding notational conventions and matrix layout. As such, I generally recommend writing your matrix code so that the 1-d indexing (if any) is hidden behind an interface that allows you to write the actual math code using 2-d indexing.

##### Share on other sites

> I also agree that the backwards operation thinking is giving me serious headaches as well. I don't find it user friendly at all.

The first time somebody taught that concept during a lecture, I almost started laughing about the masochism involved in that way of looking at it and had to suppress an "uhm, no, it's simply applying them in LOCAL coordinates". Stop looking at the world from the outside and dive into it. Things make so much more sense from in here.

It's easier to play with it and look at the numbers in a debugger, but I'll try.

As a default, you have the identity matrix. I use OpenGL's memory layout a lot, since trying to use mathematical notation on here is annoying. So "each row is a vector" here, and each would be a column in mathematical notation. Also, think about that matrix as 4 vectors and not as a whole.

```
1, 0, 0, 0  // this is x or "right"; since we didn't do anything, it's the same as the world's "right"
0, 1, 0, 0  // this is y... and so on
0, 0, 1, 0  // no comment
0, 0, 0, 1  // the origin is zero
```

Those confusing 4th coordinates like to be called w. Rule of thumb: if a vector has w=0 it's a direction (for example a normal vector), and if w=1 it's a position. So you'll notice the three axes are directions and the origin is a position.

What happens if you apply a rotation of, say, 90° to the left? Right turns to forward, forward turns to left, and the rest is unaffected:

```
 0, 0, 1, 0
 0, 1, 0, 0
-1, 0, 0, 0
 0, 0, 0, 1
```

Since this is the first thing we applied to the identity, this also happens to be the actual rotation matrix you can apply to rotate stuff 90° to the left.

Now our local coordinates have changed. You could visualize it by drawing three glLines from the last vector to each of the first 3. Since your "local forward" is now the world's "left", if you apply a translation along [0,0,1] you get

0,0,1,0 //orientation isn't affected by translating
0,1,0,0
-1,0,0,0
-1,0,0,1

The last vector is calculated as 0*(0,0,1,0) + 0*(0,1,0,0) + 1*(-1,0,0,0) + 1*(-1,0,0,1)
If the last part is confusing: the translation matrix has the movement in its position part, which has w=1, so strictly speaking you translate by [0,0,1,1].

Now if you multiply the other way around, you first move to

1,0,0,0
0,1,0,0
0,0,1,0
0,0,1,1

Surprise: it's just the translation matrix that moves you along [0,0,1].

then you rotate, resulting in

0,0,1,0
0,1,0,0
-1,0,0,0
0,0,1,1 //rotation doesn't affect position

Whatever transformation you apply first happens while you still have the identity matrix, and hence can be considered to happen in global coordinates. But essentially every transformation is always based on your current local coordinates (i.e. the current matrix, which is on the right in a multiplication).

For more complex scenarios I'd avoid the headache of going through the numbers; it always works the same anyway. How you store matrices is up to you. A simple way that should be compatible with OpenGL's memory layout would be matrix[vector][coordinate], so matrix[3][1] would be the y coordinate of the position vector, and you can directly call glLoad* and glGet* on it. Behind the scenes the compiler turns it into a 1-d array and handles the indexing.

##### Share on other sites
That's very interesting!

Around here it's pretty much morning so I'll play with it after a few hours of sleep. I'll re-read the posts and try to make sense out of them and keep you guys updated during the day.

##### Share on other sites
I've been experimenting a bit with the matrix and displaying the axes on screen, and something interesting is going on, but I would like to know whether I did something wrong or whether that's just the way it is.

I have my ship rotate on itself around the Y axis and I draw the local axes around it. They rotate twice as fast as the ship.

I also replaced the ship by a Cube and the same thing happens.

I'm simply getting the transformed Model View Matrix to see what's going on.

This is the code I'm using to draw the lines along with a cube at the origin. I amplify the vector magnitude so I can see them properly. The rotation call is the same for both the lines and the cube.

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(Player.GetHeadingAngle(), 0.0, 1.0, 0.0);
    DrawMatrixReferenceLines();
    glColor3f(1.0, 1.0, 1.0);
    glutWireCube(5);

    void DrawMatrixReferenceLines()
    {
        float* ModelMatrix = new float[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, ModelMatrix);

        // Draw origin axes for reference
        glBegin(GL_LINES);
        // X axis
        glColor3f(1, 0, 0);
        glVertex3f(ModelMatrix[12], ModelMatrix[13], ModelMatrix[14]);
        glVertex3f(ModelMatrix[0]*500, ModelMatrix[1]*500, ModelMatrix[2]*500);
        // Y axis
        glColor3f(0, 1, 0);
        glVertex3f(ModelMatrix[12], ModelMatrix[13], ModelMatrix[14]);
        glVertex3f(ModelMatrix[4]*500, ModelMatrix[5]*500, ModelMatrix[6]*500);
        // Z axis
        glColor3f(0, 0, 1);
        glVertex3f(ModelMatrix[12], ModelMatrix[13], ModelMatrix[14]);
        glVertex3f(ModelMatrix[8]*500, ModelMatrix[9]*500, ModelMatrix[10]*500);
        glEnd();

        delete [] ModelMatrix;
    }

##### Share on other sites
I'm not sure if this is what's causing the problem you're seeing, but the second vertex for each of the line segments in DrawMatrixReferenceLines() should be relative to the object's position, rather than absolute. That is, you should be summing what you have now with elements 12, 13, and 14.

Also, you don't need to allocate 'ModelMatrix' dynamically; it can just be a local array on the stack.

##### Share on other sites

I'm not sure if this is what's causing the problem you're seeing, but the second vertex for each of the line segments in DrawMatrixReferenceLines() should be relative to the object's position, rather than absolute. That is, you should be summing what you have now with elements 12, 13, and 14.

Also, you don't need to allocate 'ModelMatrix' dynamically; it can just be a local array on the stack.

That would make sense if the position were different from the origin. Right now it only works because the object is at the origin, but I added the rest of the coordinates so it stays compatible later, when I actually move the object.

As for the dynamic array: I'm aware I could just use a local array. I put in the first thing that worked, but it's prone to memory leaks, so I'll fix that up once I get my base running.

Even with the summation, the axes still rotate twice as fast as the object. Player.GetHeadingAngle() is basically the yaw rotation that I change through the keyboard arrows. It currently has a 45-degree limit on each side, so when I hit the limit the axes are rotated by 90 degrees.

Could it be something with the modelview matrix I am loading, or maybe something else? I'll keep playing around in the meantime.

##### Share on other sites
Ah, I see the problem. The line segments you're drawing are themselves transformed by the transform at the top of the modelview matrix stack, so when you draw the axis, you're actually drawing the rotation rotated, if that makes sense. That's why the debug graphics seem to show the rotation 'doubled', as it were.

It's a little convoluted, but here's one way you could correct for this:

- Call glRotate*() to set the rotation
- Render the cube or model or whatever
- Grab the modelview matrix from OpenGL
- Call glLoadIdentity() to reset the transform to identity
- Draw the rotation axes as you're doing currently

I think if you do it that way, you'll find that the debug graphics match what you're expecting.

##### Share on other sites

Ah, I see the problem. The line segments you're drawing are themselves transformed by the transform at the top of the modelview matrix stack, so when you draw the axis, you're actually drawing the rotation rotated, if that makes sense. That's why the debug graphics seem to show the rotation 'doubled', as it were.

It's a little convoluted, but here's one way you could correct for this:

- Call glRotate*() to set the rotation
- Render the cube or model or whatever
- Grab the modelview matrix from OpenGL
- Call glLoadIdentity() to reset the transform to identity
- Draw the rotation axes as you're doing currently

I think if you do it that way, you'll find that the debug graphics match what you're expecting.

Awesome! That was indeed the issue. I would never have figured that out by myself. I think I understand why this happens, but it's the kind of thing that will trip me up often, so I'll pay more attention to it.

Doing my own matrix stuff isn't going so great so far. I won't post any code yet as I'm not done, but I'm having some issues loading my self-made matrices using glLoadMatrix(). It works for the most part but seems off, especially for rotations. Perhaps this is where normalizing the matrix becomes necessary.

Speaking of which: when normalizing the matrix, is it necessary to go through the Gram-Schmidt process or QR decomposition and all that kind of stuff, or should I simply consider each vector individually and divide it by its length?

I can see that working for the 3 axis vectors, but maybe the position vector won't react well to this. I tried to incorporate Trienco's matrix inverse, but I think I did it wrong; I might want a bit more explanation regarding this.

Thanks!

##### Share on other sites

Speaking of which: when normalizing the matrix, is it necessary to go through the Gram-Schmidt process or QR decomposition and all that kind of stuff, or should I simply consider each vector individually and divide it by its length?

Normalization by itself isn't sufficient, as it won't correct for the axes drifting out of orthogonality. However, you don't necessarily need to bother with Gram-Schmidt; you can instead use a simple series of cross-product operations (Gram-Schmidt may produce a more 'balanced' orthonormalization in some cases, but for this particular purpose, that doesn't really matter).

I can see that working for the 3 vectors, but maybe the position vector won't react well to this.
The position vector can be left alone; it's really only the upper-left 3x3 portion of the matrix that you're concerned with.

##### Share on other sites
I played around some more with my current code. It turns out my rotation code had a problem with the wrong variable in one place, but I got that corrected. Right now, if I do a single operation for translation or rotation using my own self-defined functions, I obtain the same result as glRotate and glTranslate. However, as soon as I stack them together, I get completely different results. I make sure that the multiplication always occurs with the transformation matrix multiplied by the current matrix and not the other way around.

This is probably where finding the orthonormal basis of the matrix after each operation will become important, right?

Sorry for not catching on very quickly, but if I do need to apply normalization, or turn everything into a matrix of unit vectors, what would be the way to go about it if Gram-Schmidt is overkill? Can I treat each set of 4 numbers as an independent vector and simply normalize it as one would a single non-unit vector? If not, I don't think I understand how to go about it.

##### Share on other sites

Right now, if I do a single operation for translation or rotation using my own self-defined functions, I obtain the same result as glRotate and glTranslate. However, as soon as I stack them together, I get completely different results. I make sure that the multiplication always occurs with the transformation matrix multiplied by the current matrix and not the other way around.

That sounds like an issue of multiplication order, or how your multiplication function is implemented, or matrix layout, or some combination of those. It could be any number of things, so I can't be much more specific than that.

This is probably where finding the orthonormal basis of the matrix after each operation will become important, right?
The problems you describe don't sound like they're related to numerical drift, so no, probably not.

Sorry for not catching on very quickly, but if I do need to apply normalization, or turn everything into a matrix of unit vectors, what would be the way to go about it if Gram-Schmidt is overkill? Can I treat each set of 4 numbers as an independent vector and simply normalize it as one would a single non-unit vector? If not, I don't think I understand how to go about it.
Another option to consider is to store the rotation and translation separately (e.g. as a 3x3 matrix and a vector or a quaternion and a vector) and then combine them when needed, as that can make things simpler in some ways. As for your current method though, what you'll want to do is isolate the upper-left 3x3 portion of the matrix, specifically the three vectors that make up the rows or columns (either - doesn't really matter which).

Once you have these three vectors, orthonormalization can proceed as follows (untested, may or may not be correct):

    x.normalize();
    y = unit_cross(z, x);
    z = unit_cross(x, y);
Then you would load the three vectors back into the matrix.
