Implementing modelview matrix transforms...HELP!

Started by Trynthlas. 4 comments, last by Trynthlas 17 years, 6 months ago.
Just a portion of the whole thing I'm working on (implementing most of the pipeline), but since I'm doing it in pieces, here's where I'm at. I had some 'camera' functions that could translate, rotate, or translate while maintaining a fixed view, like so:
void Camera::MoveCamera()
{
	// Strafe left/right: slide along a vector orthogonal to the view
	// direction in the XZ plane ((-z, x) is (x, z) rotated 90 degrees;
	// the Y component of vOrthoVector is never used).
	if( (moveLeft || moveRight) && !(moveLeft && moveRight) )
	{
		float s = (moveRight ? speed : -speed);

		Vector3 vVector = view - pos;
		Vector3 vOrthoVector;

		vOrthoVector.v[0] = -vVector.v[2];
		vOrthoVector.v[2] =  vVector.v[0];

		pos.v[0] = pos.v[0] + vOrthoVector.v[0] * s;
		pos.v[2] = pos.v[2] + vOrthoVector.v[2] * s;
		if( !viewLocked )
		{
			view.v[0] = view.v[0] + vOrthoVector.v[0] * s;
			view.v[2] = view.v[2] + vOrthoVector.v[2] * s;
		}
	}

	// Move forward/back along the view direction, XZ only.
	if( (moveForward || moveBack) && !(moveForward && moveBack) )
	{
		float s = (moveForward ? speed : -speed);
		Vector3 vVector = view - pos;

		pos.v[0] = pos.v[0] + vVector.v[0] * s;
		pos.v[2] = pos.v[2] + vVector.v[2] * s;
		if( !viewLocked )
		{
			view.v[0] = view.v[0] + vVector.v[0] * s;
			view.v[2] = view.v[2] + vVector.v[2] * s;
		}
	}
}

void Camera::Rotate()
{
	if( !viewLocked )
	{
		// Yaw: rotate the view point around the camera position
		// in the XZ plane (standard 2D rotation by angle s).
		if( (rotateLeft || rotateRight) && !(rotateLeft && rotateRight) )
		{
			float s = (rotateLeft ? -speed : speed);
			Vector3 vVector = view - pos;

			view.v[2] = (float)(pos.v[2] + sin(s)*vVector.v[0] + cos(s)*vVector.v[2]);
			view.v[0] = (float)(pos.v[0] + cos(s)*vVector.v[0] - sin(s)*vVector.v[2]);
		}

		// Pitch: the same 2D rotation, applied in the YZ plane.
		if( (rotateUp || rotateDown) && !(rotateUp && rotateDown) )
		{
			float s = (rotateUp ? -speed : speed);
			Vector3 vVector = view - pos;

			view.v[2] = (float)(pos.v[2] + sin(s)*vVector.v[1] + cos(s)*vVector.v[2]);
			view.v[1] = (float)(pos.v[1] + cos(s)*vVector.v[1] - sin(s)*vVector.v[2]);
		}
	}
}
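(Pulled out as a standalone helper -- hypothetical, not actually in my class -- the left/right case is just this 2D rotation of the view vector about the camera position:)

// Hypothetical helper, not part of my Camera class -- just the math the
// rotateLeft/rotateRight branch applies: rotate v by angle s in the XZ plane.
Vector3 RotateInXZ( const Vector3 &v, float s )
{
	Vector3 r = v;
	r.v[0] = (float)(cos(s)*v.v[0] - sin(s)*v.v[2]);
	r.v[2] = (float)(sin(s)*v.v[0] + cos(s)*v.v[2]);
	return r;
}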
Worked perfectly, using this drawing implementation (in the display func):
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();

camera.MoveCamera();
camera.Rotate();
gluLookAt(camera.pos.v[0],  camera.pos.v[1],  camera.pos.v[2],
		  camera.view.v[0], camera.view.v[1], camera.view.v[2],	
		  camera.up.v[0],   camera.up.v[1],   camera.up.v[2]);

// ...cut out code for object rotation - works fine... //
Now, here is my replacement implementation segment:
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
make4x4Identity( obj.transform );
make4x4Identity( camera.transform );

// ...cut out code for object rotation - works fine... //

//// ---- Camera transform ----------------
camera.MoveCamera();
camera.Rotate();
camera.CreateTransformFromVectors();

// apply camera transform
obj.MultiplyTransformByMatrix(camera.transform);
//// ---- Camera transform ----------------

glLoadMatrixf( obj.transform );
Obviously I'm looking to just create a knock-off gluLookAt() method. Here it is:
void Camera::CreateTransformFromVectors()
{
	Vector3 forward;	// the direction the camera is pointing
	Vector3 camUp;		// the upward direction of the camera
	Vector3 side;		// vector pointing out from the side of the camera
	float m[16];

	forward = view - pos;
	camUp = up;

	forward = Normalize(forward);

	// Build an orthonormal basis: side = forward x up, then recompute
	// up as side x forward so all three axes are mutually perpendicular.
	side = Cross(forward, camUp);
	side = Normalize(side);

	camUp = Cross(side, forward);
	camUp = Normalize(camUp);

	make4x4Identity(m);
	m[0] = side.v[0];
	m[4] = side.v[1];
	m[8] = side.v[2];

	m[1] = camUp.v[0];
	m[5] = camUp.v[1];
	m[9] = camUp.v[2];

	m[2]  = -forward.v[0];
	m[6]  = -forward.v[1];
	m[10] = -forward.v[2];

	MultiplyTransformByMatrix(m);
	ApplyTranslation( CreateVector3(-pos.v[0], -pos.v[1], -pos.v[2]) );
}
The question I have is this: what's wrong with my lookAt function that it's not creating the right transform matrix? I haven't touched the MoveCamera() or Rotate() functions at all... and as I said, I know the math for them is right, because it works when I use gluLookAt(). Help!
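For reference, here's a minimal sketch of the matrix the gluLookAt documentation describes, written with the same column-major float[16] layout and the vector helpers from above (the function name is mine, just for comparison):

// Sketch of what the gluLookAt man page describes, for comparison only.
// Assumes a column-major float[16] and the helpers used elsewhere here.
void LookAtReference( float m[16], Vector3 eye, Vector3 center, Vector3 up )
{
	Vector3 f = Normalize( center - eye );    // forward
	Vector3 s = Normalize( Cross( f, up ) );  // side
	Vector3 u = Cross( s, f );                // recomputed up

	make4x4Identity( m );
	m[0] = s.v[0];   m[4] = s.v[1];   m[8]  = s.v[2];
	m[1] = u.v[0];   m[5] = u.v[1];   m[9]  = u.v[2];
	m[2] = -f.v[0];  m[6] = -f.v[1];  m[10] = -f.v[2];

	// gluLookAt then applies glTranslated(-eye.x, -eye.y, -eye.z)
	// on top of this rotation.
}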
Are you sure it's not supposed to be:
m[0] = side.v[0];     m[1] = side.v[1];     m[2]  = side.v[2];
m[4] = camUp.v[0];    m[5] = camUp.v[1];    m[6]  = camUp.v[2];
m[8] = -forward.v[0]; m[9] = -forward.v[1]; m[10] = -forward.v[2];

99% sure. I'm using column-major notation since that's what OpenGL uses, and I translated those numbers from this:

m[0][0] = side[0];     m[1][0] = side[1];     m[2][0] = side[2];
m[0][1] = up[0];       m[1][1] = up[1];       m[2][1] = up[2];
m[0][2] = -forward[0]; m[1][2] = -forward[1]; m[2][2] = -forward[2];
Quote: Original post by Trynthlas
99% sure. I'm using column-major notation since that's what OpenGL uses, and I translated those numbers from this:

m[0][0] = side[0];     m[1][0] = side[1];     m[2][0] = side[2];
m[0][1] = up[0];       m[1][1] = up[1];       m[2][1] = up[2];
m[0][2] = -forward[0]; m[1][2] = -forward[1]; m[2][2] = -forward[2];


This is wrong. OpenGL stores its matrices this way:
x0, y0, z0, w0
x1, y1, z1, w1
x2, y2, z2, w2
x3, y3, z3, w3

or like this:
 0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15   <- indices
x0, y0, z0, w0, x1, y1, z1, w1, x2, y2, z2, w2, x3, y3, z3, w3
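To make the indexing concrete: element (row, column) of the matrix sits at m[column*4 + row] in that linear array. A tiny illustration (hypothetical accessor, just to show the mapping):

// Hypothetical accessor, just to show OpenGL's column-major indexing:
// element (row, col) of a 4x4 matrix lives at index col*4 + row.
float GetElement( const float m[16], int row, int col )
{
	return m[col*4 + row];
}
// e.g. the translation column (x3, y3, z3) is at indices 12, 13, 14.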
That's exactly what column-major means, heh. The question is, do I want:

( side0, up0, forward0, 0 )
( side1, up1, forward1, 0 )
( side2, up2, forward2, 0 )
( 0 , 0 , 0 , 1 )

or the transpose of that? (with it being stored in a linear array)
Ok, just for grins I went ahead and transposed the way I was assigning things...still not 100%, but it is better.

Now my problem is this:
MoveCamera() with the view locked performs as Rotate() should;
Rotate() just spins the object - which isn't right at all.

The intended functionality is:
MoveCamera() without view locked is a simple translation of the camera along an axis, changing position and view.
MoveCamera() WITH view locked moves the camera, then rotates to maintain the same view.
Rotate() rotates the camera's view around an axis (mainly just spinning around Y as up right now).
