
## Recommended Posts

I'm trying to make a third-person camera with damped orientation, using a quaternion to represent the camera orientation (for SLERP) and a relative offset from the "character", which will ultimately be a car. I'm using OpenGL and have been using gluLookAt for the camera transforms until now. I was following this article http://www.gamasutra.com/features/19980703/quaternions_01.htm which starts to explain how to do a third-person camera but only brushes over it, and I can't follow everything.

My first question is how to calculate the camera position and orientation quaternion from the given data: a) the position vector and b) direction vector of the "character", and c) a constant offset calculated from the character's up and direction vectors.

Secondly, I can (I guess) calculate the camera transform matrix once I have the camera's orientation quaternion plus its absolute translation coordinates (am I right that I have to use the absolute coordinates?). Then do I multiply the MODELVIEW matrix by its inverse?

I've been thinking this over for hours and I still can't get my head around it! :( It would be great if someone laid out step by step what I have to do, or some pseudocode. Thanks!

##### Share on other sites
As a first hint: you typically have to initialize the MODELVIEW matrix with the inverse of the camera's transformation (not multiply it in, but load it!).

```cpp
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(myCamera->inverseGlobalTransformation());
// here comes the first model's transformation
```

This is because a model's overall transformation is
local -> global -> view (-> screen)
where local -> global is called the MODEL transformation and global -> view the VIEW transformation, so that in total you get the MODELVIEW transformation.

##### Share on other sites
Quote:
> Original post by deavik
> My first question is how to calculate the camera position and orientation quaternion from given data: a) the position vector, b) direction vector of the "character" and c) a constant offset calculated using the character's up and direction vectors.

In the end you have to supply a matrix to GL, either directly or else indirectly via a translation vector or an axis/angle pair from which GL internally calculates a matrix first. IMHO it is somewhat easier to calculate a matrix directly from the given data. For that, be aware that the column vectors of an orientation matrix can be interpreted as a side vector, an up vector, and a look-at vector. The only thing you have to guarantee is that the orientation matrix obeys the orthonormality condition, i.e. each column vector is of unit length, and each pair of column vectors is orthogonal. Please ask if you need more information on this topic.
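To make the orthonormality point concrete, here is a minimal sketch of building the three column vectors from a look-at direction and an approximate up vector. The names (`buildOrientation`, `Vec3`) are my own choosing, not from any particular engine, and the cross-product order assumes one particular handedness; you may need to flip it for your convention.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Build the three orthonormal column vectors of an orientation matrix
// from a look-at direction and an approximate up vector.
void buildOrientation(Vec3 lookAt, Vec3 approxUp,
                      Vec3& side, Vec3& up, Vec3& forward) {
    forward = normalize(lookAt);
    side    = normalize(cross(approxUp, forward)); // orthogonal to both inputs
    up      = cross(forward, side);                // re-derived, exactly orthogonal
}
```

The trick is that only the look-at direction is trusted; the side and up vectors are re-derived with cross products, which guarantees the orthogonality condition even when the supplied up vector is only approximate.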

Quote:
> Original post by deavik
> Secondly, I can calculate the camera transform matrix (I guess) once I have the camera's orientation quaternion + it's absolute translation coordinates (am I right that I have to use the absolute coordinates?). Then I have to multiply the MODELVIEW matrix by its inverse?

See my previous reply, please. However, if your engine supports local co-ordinate frames (it should), then you could consider using simple forward kinematics, i.e. link the camera into the frame of the avatar (or perhaps its head or something like that). E.g. locate the camera 2 meters in the local y direction and -5 meters in the local z direction. Then, as soon as you control the location/orientation of the avatar, the camera is controlled automatically. So simply compute the global camera transformation from the local frame, and use that for the aforementioned VIEW set-up.

EDIT: I think you mean global co-ordinates when you say "absolute" co-ordinates. A position and an orientation are always absolute, regardless of whether they are given in the global or in a local co-ordinate frame. On the other hand, a translation and a rotation are relative. So routines like glTranslate, glRotate, and glMultMatrix do a "relative" job. For the computation of the VIEW matrix you ultimately have to use global co-ordinates, as I've shown in my previous reply.

##### Share on other sites
haegarr, thanks very much for the reply. I took note of all you said, and I have more questions on those ... but right now I am in deep muddy waters [lol] and I'll go slowly, one at a time.

Calculating the camera transformation (what you do with myCamera->inverseGlobalTransformation())

I can't figure out how this is done ... to find the camera transformation I need the global camera position and orientation, but this depends on the avatar's position and orientation. How exactly is this done?

##### Share on other sites
There is no camera in OpenGL as such; instead of setting the camera's position, you simply move the scene in the opposite direction. This means you can't use the camera's transform directly, but rather the inverse camera transform.

In OpenGL though, you can make use of the gluLookAt function, which does all of the matrix maths for you:

```cpp
gluLookAt( eyeX, eyeY, eyeZ,
           aimX, aimY, aimZ,
           upX,  upY,  upZ );
```

##### Share on other sites
Quote:
> Original post by deavik
> to find the camera transformation I need the global camera position and orientation, but this depends on the avatar's position and orientation.

Two ways exist:

(1) As already mentioned, use built-in forward kinematics. This way is best suited if your engine supports forward kinematics. It does so if you can plug scene graph nodes into other nodes, so that all sub-nodes are moved when you move a super-ordinated node. Does your engine support such stuff? If so, then you could put the camera's node into the avatar's node, locating the camera some meters behind it.

Then the camera's global transformation (sorry, but now we must look at some math) is given by
CG := AG * CL
where AG denotes the avatar's local-to-global transformation (say, the co-ordinate frame of the node the camera's node is inserted into) and CL the camera's local frame (the aforementioned 5 meter translation behind the avatar, and perhaps some small pitching).
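The composition CG := AG * CL can be sketched with plain 4x4 matrices in OpenGL's column-major layout. This is a minimal illustration with translation-only frames (the helper names `mul` and `translation` are mine, not part of any API); a real AG would also carry the avatar's rotation.

```cpp
// 4x4 matrices in OpenGL's column-major layout: m[col*4 + row].
void mul(const float a[16], const float b[16], float out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k*4 + r] * b[c*4 + k];
            out[c*4 + r] = s;
        }
}

// Build a pure translation matrix.
void translation(float x, float y, float z, float m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f; // identity
    m[12] = x; m[13] = y; m[14] = z;  // translation lives in the 4th column
}
```

With AG placing the avatar at (10, 0, 0) and CL the camera's local offset (0, 2, -5), CG = AG * CL leaves the camera at the global position (10, 2, -5), which is exactly the "camera follows the avatar for free" behaviour described above.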

(2) The other way, e.g. if your engine doesn't support local frames, is as follows: fetch the avatar's global position AGP and its global orientation matrix AGO. A (pure) orientation matrix (i.e. one not multiplied by any scaling/shearing) is built from 3 vectors that can be interpreted as a side vector (the 1st), an up vector (the 2nd), and a front or look-at vector (the 3rd). Isolate the look-at vector; let us name it LG.

Now scale this vector by the distance the camera should be away from the avatar, and subtract it from the already fetched global position of the avatar:
CGP := AGP - 5 * LG
Here I assume a distance of 5 length units. You may have to play a little with that to find a pleasing distance, of course.

Now it may happen that the camera looks at the middle between the avatar's feet, or at its stomach, depending on where you have located the origin of the avatar. So you may want to add some distance in the up direction to let the camera look slightly above the head of the avatar, e.g. like so in affine co-ordinates:
CGP := AGP - 5 * LG + [0 2 0]

Now the camera is (hopefully) located well, but its orientation isn't right yet. The camera should look exactly where the avatar does, so simply copy the avatar's orientation matrix to that of the camera.

In total that could be done as:
(a) copy the entire co-ordinate frame of the avatar to the camera
(b) apply the translation CGP by multiplying from the left!

Please note that the stuff above works if and only if your avatar's model is made so that it actually looks along its local z axis; otherwise you have to apply some 90° heading correction.

For both ways above, after having computed the camera's transformation, you have to invert it before setting up the VIEW matrix. In principle, the transformation is computed as
CG := CGP * CGR
where CGP denotes the global position, and CGR the global rotation (from that you could also see why (2b) above has to be performed from left).

Now, from math,
CG^-1 = (CGP * CGR)^-1 = CGR^-1 * CGP^-1 = CGR^T * CGP^-1
(please notice the inversion of the order!). The inverse of an orientation matrix is identical to its transpose (the superscript T). You may know that transposing means mirroring the matrix over its main diagonal, which is relatively cheap at runtime. The inverse of the translation matrix is computed by simply negating the 3 translation parameters.

So, expressed in OpenGL, setting up the VIEW matrix looks like

```cpp
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
float orientation[16];
myCamera->globalOrientation(orientation);  // fetch CGR
transpose(orientation);                    // CGR^T == CGR^-1
glMultMatrixf(orientation);                // the CGR^T
glTranslatef(-myCamera->globalPos.x,
             -myCamera->globalPos.y,
             -myCamera->globalPos.z);      // the CGP^-1
```

as an example. Notice that the rotation is applied before the translation call; since glMultMatrix and glTranslate post-multiply the current matrix, this yields CGR^T * CGP^-1 as derived above.

EDIT: Alternatively you could use the way RobTheBloke has mentioned above, using the CGP as eye point, the avatar's origin (or a point 2 meters above it) as aim point, and perhaps [0 1 0] as up vector.

Hopefully I have made no error in the above. Please note that I have written it from memory, so one point or another may suffer from a mistake. Don't hesitate to ask if something is unclear or seems wrong. I hope you see the principle of the approach.

##### Share on other sites
My problem, I realised, was that I wasn't seeing how to fit my camera into the whole scheme of things. After a good night's sleep, I realised that all I had to do was find the position of the camera with a bit of vector math (after orienting with the quaternion) and use gluLookAt after that.

So it works! BTW, I am storing my own normalized view vector and everything, and I think it would be a little faster to build my own look-at matrix (gluLookAt must normalize to find the view vector, right?). So if anyone has any tips on how to construct a look-at matrix, that would be fantastic. (gluLookAt doesn't invert any matrices, does it? That would be a very expensive operation, which I don't think it is.)
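For reference, constructing a look-at matrix by hand follows the same recipe gluLookAt uses: derive an orthonormal basis from the view direction and up vector, then fold in the negated eye position. This is a sketch under the standard GLU conventions (column-major, right-handed, camera looking down -z); the helper names are my own.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Build a view matrix the way gluLookAt does: column-major,
// ready for glLoadMatrixf.
void lookAtMatrix(Vec3 eye, Vec3 aim, Vec3 up, float m[16]) {
    Vec3 f = normalize(sub(aim, eye));   // forward (view direction)
    Vec3 s = normalize(cross(f, up));    // side
    Vec3 u = cross(s, f);                // true up, orthogonal by construction
    // Rotation rows are s, u, -f; the translation column is the eye
    // position expressed in the new basis and negated.
    m[0] = s.x;  m[4] = s.y;  m[8]  = s.z;  m[12] = -dot(s, eye);
    m[1] = u.x;  m[5] = u.y;  m[9]  = u.z;  m[13] = -dot(u, eye);
    m[2] = -f.x; m[6] = -f.y; m[10] = -f.z; m[14] =  dot(f, eye);
    m[3] = 0.0f; m[7] = 0.0f; m[11] = 0.0f; m[15] = 1.0f;
}
```

If the view vector is already normalized, as described above, the first `normalize` call can be skipped; the remaining normalization of the side vector is still needed unless the up vector is exactly perpendicular to the view vector.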

Thanks very much to haegarr and RobTheBloke!

##### Share on other sites
Quote:
> Original post by deavik
> BTW, I am storing my own normalized view vector and everything and I think it would be a little faster for me to make my own look-at matrix (gluLookAt must use a normalize to find the view vector, right?). So if anyone had any tips on how to construct a look-at matrix that would be fantastic. (gluLookAt doesn't invert any matrices does it? I would be a very expensive operation then which I don't think it is.)
Don't worry about the efficiency of your camera setup; since it only happens once per frame, it's unlikely to be a problem. Also, gluLookAt() is plenty fast. There are a couple of normalizations, but although it does invert the matrix, it's able to take a very fast shortcut due to the nature of the transformation.

If you still want to construct your own 'lookat' matrix, you can find the algorithm here (you'll just need to transpose it for OpenGL).

[Edit: I didn't read the whole thread, so the above might not be particularly relevant to the discussion. Please ignore if so.]

##### Share on other sites
Thank you for the reply, jyk. I actually found a page that has information about gluLookAt itself - here.

However, I am taking your advice and letting gluLookAt be for now. [smile]