# Cross Product woes


## Recommended Posts

Hi guys, I'm using OpenGL to make a small modelling program. I want to translate my objects relative to my camera. I can get my objects to translate towards and away from my camera: I take my object's position vector and my camera's position vector, subtract one from the other, then normalise the result. Translating to and from the camera is the same as translating along the camera's relative z-axis. But I am having trouble translating my objects relative to my camera's x-axis. To find the camera's x-axis, I am using the code:
```cpp
SceneVector3 cameraOrthogonalVector = cameraPositionVector.CrossProduct(cameraLookAtVector);
cameraOrthogonalVector.Normalise();
```


where
```cpp
SceneVector3 SceneVector3::CrossProduct(const SceneVector3& vector) const
{
    return SceneVector3( y*vector.z - z*vector.y,
                         z*vector.x - x*vector.z,
                         x*vector.y - y*vector.x );
}
```


But this is not giving the correct results. Say my object is at position (0,0,0) and my camera is at (0,0,-20): my object won't translate at all. Can someone help me here? Thanks for any help given!

##### Share on other sites
One potential problem here is that if your camera is looking directly at the object, then the cross product you describe will always give the zero vector. Another, although this may not be such a problem, is that the camera's x-axis as you've defined it will change as the object position changes.

##### Share on other sites
Yeah, the camera's x-axis will change. I might change that once I get the code working and have tested what the functionality is like with the current code; that will be easy, as I can just use the vector that the camera looks along.

Do you have any ideas how I can compensate for this zero vector?

##### Share on other sites
Just curious, why not just translate them independently of the camera position/view direction?

##### Share on other sites
Because I'm making a modelling program. I want the same kind of translation you would get in 3D Studio Max etc.

##### Share on other sites
Quote:
 Original post by Cacks
Yeah, the camera's x-axis will change. I might change that once I get the code working and have tested what the functionality is like with the current code; that will be easy, as I can just use the vector that the camera looks along.

Do you have any ideas how I can compensate for this zero vector?

When this happens, you need to choose an x-vector, as cameraLookAtVector and objectPositionVector - cameraPositionVector no longer define a plane that you can use to orient yourself.

In the absence of any other choice, and writing cameraPositionVector = (a,b,c), you could just take cameraOrthogonalVector = (-b,a,0)
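Putting the two ideas together, here is a minimal sketch of a "camera right vector with fallback" helper. The `Vec3` struct and function names are illustrative stand-ins (the poster's full `SceneVector3` class isn't shown); the fallback applies the (-b, a, 0) construction suggested above to the look direction when the usual cross product degenerates:

```cpp
#include <cassert>
#include <cmath>

// Illustrative stand-in for the poster's SceneVector3 class.
struct Vec3 {
    float x, y, z;
};

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}

float length(const Vec3& v) {
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}

// Cross the camera's look direction with a world up vector; if the two are
// (nearly) parallel, the cross product degenerates towards zero, so fall
// back to the (-b, a, 0) construction, which is orthogonal to (a, b, c).
Vec3 cameraRight(const Vec3& look) {
    const Vec3 up = { 0.0f, 1.0f, 0.0f };
    Vec3 right = cross(look, up);
    if (length(right) < 1e-6f) {
        right = { -look.y, look.x, 0.0f };
    }
    float len = length(right);
    return { right.x/len, right.y/len, right.z/len };
}
```

Note that this crosses the look *direction* with a fixed up vector rather than the camera *position*, which also avoids the degenerate case the OP hit (camera position parallel to look direction).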

##### Share on other sites
What do you mean by x-vector?

##### Share on other sites
The function gluLookAt takes two points and an up vector as its arguments. With that information it sets up your MODELVIEW matrix with the camera's x-, y-, and z-axis vectors and the camera's position, which is the first parameter of gluLookAt. To find the x-, y-, and z-axes of the camera, gluLookAt does something similar to the following.

First, find the camera's z-axis vector; this is easy, since it is just (camera_position - lookat_position).

Next it gets the x-axis vector, which is what you said you needed: it takes the cross product of the up vector (the 3rd argument to gluLookAt) and the new z-axis.

Last it takes the cross product of the z-axis vector and the x-axis vector to find the new y-axis.

That's basically it. The up vector is usually something simple like (0, 1, 0), i.e. straight up. You want to make sure the z-axis vector you compute from the eye and lookat positions is not parallel to your up vector, or your cross products won't work right.

You want to make sure you're taking the cross product of vectors and not points, and remember that the order of the operands affects the direction of the cross product: the result points the opposite way to the one you'd get with the two vectors swapped. That's why I said gluLookAt does something *similar* to what I described.
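The three steps above can be sketched in code like this. This is not gluLookAt's actual source, just a hand-rolled version of the basis computation it describes; the `Vec3` type and function names are illustrative:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}

Vec3 normalise(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

struct CameraBasis { Vec3 xAxis, yAxis, zAxis; };

// Step 1: z-axis from eye and lookat. Step 2: x-axis = up x z.
// Step 3: y-axis = z x x, so the basis is exactly orthogonal even
// when the supplied up vector wasn't perpendicular to z.
CameraBasis computeBasis(const Vec3& eye, const Vec3& lookAt, const Vec3& up) {
    CameraBasis b;
    b.zAxis = normalise(sub(eye, lookAt));   // camera looks down -z
    b.xAxis = normalise(cross(up, b.zAxis)); // right vector
    b.yAxis = cross(b.zAxis, b.xAxis);       // recomputed "true" up
    return b;
}
```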

Hope this helps.

##### Share on other sites
Extract the first 3 column vectors from your view matrix. Ignore the 4th component. These are the camera's axes (as unit directions) in global coordinates.

##### Share on other sites
Quote:
 Original post by someusername
Extract the first 3 column vectors from your view matrix.

Other issues aside, wouldn't it be the first three rows, rather than columns? (Since the OP is using OpenGL...)

##### Share on other sites
I have the impression that OpenGL transforms vectors as columns, hence performs multiplication as:

```
[ m11  m12  m13  m14 ]   (x)   ( m11*x + m12*y + m13*z + m14*w )
[ m21  m22  m23  m24 ]   (y)   ( m21*x + m22*y + m23*z + m24*w )
[ m31  m32  m33  m34 ] * (z) = (              ...               )
[ m41  m42  m43  m44 ]   (w)   (              ...               )
```

Therefore, the model local X axis, (1,0,0,0), when transformed, has coordinates:
( m11, m21, m31, m41 ). Thus the local X axis becomes the first column...

If my initial assumption is wrong (OpenGL is not my specialty!), then I am probably wrong too...

edit:
A quick way to remember this is that, if you use row vectors, these axes are also represented by row vectors in the matrices, and vice versa for column vectors.

##### Share on other sites
You are right about OpenGL. However, you mentioned extracting the axes from the 'view' matrix, in which case the basis vectors would be transposed and would therefore be in the rows. Maybe we have different ideas of what 'view matrix' means though.

##### Share on other sites
I didn't know that the transpose is used for the view matrix. I can't be sure about it, but I think that's how I did it a couple of times when I had to make one from scratch... Let me try this in DX and I'll be back...

##### Share on other sites
Well, I didn't even have to try that! You obviously want to apply the inverse of the orientation in the view matrix to simulate it... I mean you even translate things by the negative of the camera position.

In OpenGL, the camera axes will be the row vectors. You're right.

##### Share on other sites
Hi guys,

The Anonymous Poster was right; I tried it and I've nearly got it working. Thanks for that, Anonymous Poster!

As for OpenGL matrices; I am using them to represent my transformations. But I do have a bit of difficulty understanding them. I use a 4x4 matrix:

```
 0  1  2  3
 4  5  6  7
 8  9 10 11
12 13 14 15
```

OpenGL uses column vectors, is that correct?

so:

0,4,8,12 is the first vector
1,5,9,13 is the second vector
.. etc

is that correct?

Also, what is the 4th element of these vectors used for? And what should it be initialised to?

Thanks!

##### Share on other sites
Quote:
 Original post by Cacks
OpenGL uses column vectors, is that correct? so: 0,4,8,12 is the first vector, 1,5,9,13 is the second vector, etc. Is that correct? Also, what is the 4th element of these vectors used for? And what should it be initialised to?
OpenGL matrices are column major, and so are indexed like this:

```
0 4  8 12
1 5  9 13
2 6 10 14
3 7 11 15
```

They also use column vectors, so a simple model matrix looks like this:

```
xx yx zx tx
xy yy zy ty
xz yz zz tz
 0  0  0  1
```

Where x, y and z are the basis vectors of the coordinate system, and t is the translation. The inverse of this matrix is:

```
xx xy xz -t.x
yx yy yz -t.y
zx zy zz -t.z
 0  0  0  1
```

Which is what you would use for a camera or view matrix (this is what gluLookAt() generates).

Because of the way the matrix is arranged, you can (to a certain extent) use row-major matrices with row vectors and get similar results. However, IMO for conceptual clarity it's good to match the conventions of whatever API you're using.
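Putting the row-vs-column discussion into code, here is a minimal sketch of extracting the camera axes from a column-major view matrix of the form described above (e.g. as read back with glGetFloatv(GL_MODELVIEW_MATRIX)). The function names are hypothetical, and it assumes no scaling is baked into the matrix:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// In a column-major array m[16], element (row r, col c) is m[c*4 + r].
// A view matrix stores the *transposed* (inverted) camera rotation, so
// the camera axes in world space sit in the matrix rows, i.e. at a
// stride of 4 through the array.
Vec3 cameraRightFromView(const float m[16]) { return { m[0], m[4], m[8]  }; }
Vec3 cameraUpFromView   (const float m[16]) { return { m[1], m[5], m[9]  }; }
Vec3 cameraBackFromView (const float m[16]) { return { m[2], m[6], m[10] }; }
```

For a *model* matrix you would instead read the columns (stride 1: `m[0], m[1], m[2]` for the x-axis), which is exactly the transposition being discussed in this thread.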

##### Share on other sites
Quote:
 Original post by Cacks
Also, what is the 4th element of these vectors used for? And what should it be initialised to?

The 4th element was introduced to overcome certain difficulties that arise in 3D, like the fact that the origin is invariant under matrix multiplication.

Without getting into the underlying math: if you want to represent a point, initialise its 4th coordinate "w" to 1. If you want to represent a direction of some magnitude (a vector), set it to 0. As you can see, 4D vectors with w == 0 are not affected by the translation part of the matrices; sums of vectors are vectors, the difference of two points is a vector, etc. (just as they should be).

Furthermore, all points *must* have w == 1. If you transform a 4D vector by such a matrix, its w will generally change. In order for the result to represent a point in the original space, all its components must be divided by its w; that is, it must be normalised to its w. Otherwise it's not a point, it's a set of projective coordinates.
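The point-versus-vector distinction can be seen with a pure translation. This is a small sketch (illustrative `Vec4` type, not OpenGL API) applying the translation column of a 4x4 matrix to a homogeneous 4-vector:

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };

// Apply a pure translation (tx, ty, tz), as the last column of a 4x4
// matrix would: the translation is scaled by w, so points (w == 1)
// move while directions (w == 0) are left untouched.
Vec4 translate(const Vec4& v, float tx, float ty, float tz) {
    return { v.x + tx*v.w, v.y + ty*v.w, v.z + tz*v.w, v.w };
}
```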

And a clarification on the matrices mentioned by jyk...

Their upper-left 3x3 part must consist of strictly unit column vectors, mutually perpendicular. The determinant of that sub-matrix is 1, which means geometrically that it preserves the volume of the transformed set.
Quite often, these unit vectors are multiplied by a scaling value for that axis (to apply scaling to the model), so make sure to normalise them before using them in any way.
The rest of the matrix, the zeros and the translation, does not affect this property because it does not affect the determinant.
The inverse matrix has the aforementioned form only if there is no scaling encoded in the matrix.
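A minimal sketch of that normalisation step, for a column-major 4x4 model matrix (the helper name is hypothetical): pull out the i-th basis column and divide out whatever scale is baked in before using it as an axis.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Extract basis axis i (0 = x, 1 = y, 2 = z) from a column-major 4x4
// model matrix and normalise it, stripping any per-axis scaling.
Vec3 modelAxis(const float m[16], int i) {
    Vec3 a = { m[i*4 + 0], m[i*4 + 1], m[i*4 + 2] };
    float len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return { a.x/len, a.y/len, a.z/len };
}
```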
