davidr

OpenGL GL_MODELVIEW matrix confusion


Hi, I have some experience with Direct3D, but I'm new to OpenGL. I'm having problems drawing a cube at an arbitrary location in the world. I managed to get things working using gluLookAt, but I'd prefer to create the view matrix myself and then multiply it by the current object's world matrix, e.g. glLoadMatrixd( camera.viewMatrix() * object.worldMatrix() );

The current setup: the object is located at the origin, with:

right vector = <1, 0, 0>
up vector = <0, 1, 0>
look vector = <0, 0, -1>

The camera is located at <0, 0, 10>. Its right, up, and look vectors are identical to the object's (I'm not doing any rotations at the moment). The view matrix created by Camera::viewMatrix is:
"[         1,          0,          0,          0]
[         0,          1,          0,          0]
[         0,          0,         -1,        -10]
[         0,          0,          0,          1]" 

const GLdouble* Camera_Class::View_Matrix(void)
{
	if (m_View_Matrix_Dirty == false) 
	{
		return m_View_Matrix;
	}

	// Row 1
	m_View_Matrix[0] = m_Right_Vector[0];
	m_View_Matrix[1] = m_Right_Vector[1];
	m_View_Matrix[2] = m_Right_Vector[2];
	m_View_Matrix[3] = -1.0 * (m_Position_Vector * m_Right_Vector);
	// Row 2
	m_View_Matrix[4] = m_Up_Vector[0];
	m_View_Matrix[5] = m_Up_Vector[1];
	m_View_Matrix[6] = m_Up_Vector[2];
	m_View_Matrix[7] = -1.0 * (m_Position_Vector * m_Up_Vector);
	// Row 3
	m_View_Matrix[8] = m_Look_Vector[0];
	m_View_Matrix[9] = m_Look_Vector[1];
	m_View_Matrix[10] = m_Look_Vector[2];
	m_View_Matrix[11] = -1.0 * (m_Position_Vector * m_Look_Vector);
	// Row 4
	m_View_Matrix[12] = 0.0;
	m_View_Matrix[13] = 0.0;
	m_View_Matrix[14] = 0.0;
	m_View_Matrix[15] = 1.0;

	m_View_Matrix_Dirty = false;

	return m_View_Matrix;
}

Here is the paint function:
void OpenGL_Widget_Class::paintGL(void)
{
	glClear(GL_COLOR_BUFFER_BIT);
	
	glLoadMatrixd(m_Camera.View_Matrix());
	
	glEnableClientState(GL_VERTEX_ARRAY);
	glEnableClientState(GL_COLOR_ARRAY);

	glVertexPointer(3, GL_DOUBLE, 0, m_Vertices);
	glColorPointer(4, GL_DOUBLE, 0, m_Colors);
	// Note: GL_INDEX_ARRAY/glIndexPointer are for color-index mode, not
	// element indices; the indices are passed directly to glDrawElements below.

	// Row major
	// world_matrix = yaw * pitch * roll * world_matrix

	// Column major
	// world_matrix = ((world_matrix * roll) * pitch) * yaw

	for(QList<Object_Class*>::iterator iter = m_Cube_Objects.begin(); iter != m_Cube_Objects.end(); ++iter)
	{
		glDrawElements(GL_QUADS, 24, GL_UNSIGNED_INT, m_Indices);
	}
	
	glFlush();
}

Other info: the near and far planes are set to 1 and 1000. Many thanks,
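[Editor's note] For reference, here is a minimal sketch (plain C++, no GL calls; buildView and its parameters are hypothetical names, not from the post) of how a gluLookAt-style view matrix for this camera lands in the column-major array that glLoadMatrixd expects. Note that gluLookAt places the *negated* look vector in the third row, which is the detail the rest of the thread revolves around.

```cpp
#include <cassert>

// Hypothetical helper: build a gluLookAt-style view matrix in the
// column-major layout glLoadMatrixd expects. right/up/look are assumed
// to be orthonormal; eye is the camera position in world space.
void buildView(const double right[3], const double up[3],
               const double look[3], const double eye[3], double m[16])
{
    // gluLookAt puts right, up and the NEGATED look vector in the first
    // three rows; element (row i, col j) lives at m[i + 4*j].
    const double rows[3][3] = {
        { right[0], right[1], right[2] },
        { up[0],    up[1],    up[2]    },
        { -look[0], -look[1], -look[2] }  // note the negation
    };
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j)
            m[i + 4*j] = rows[i][j];
        // translation column: -(eye dot row_i)
        m[i + 12] = -(eye[0]*rows[i][0] + eye[1]*rows[i][1] + eye[2]*rows[i][2]);
    }
    m[3] = m[7] = m[11] = 0.0;
    m[15] = 1.0;
}
```

For the camera above at <0, 0, 10> with look <0, 0, -1>, this yields the identity rotation with m[14] = -10, so a point at the world origin lands at view-space z = -10, in front of the camera.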

OK, some notes/tips:

GL stores things in column order, so your matrix should be:


[1, 0, 0, 0,
0, 1, 0, 0,
0, 0, -1, 0,
0, 0, 10, 1 ]

then try:

glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(object); //object/camera may need to be switched; I'm not
glMultMatrixd(camera); //sure, since I never use LoadMatrix


You should let GL do the matrix work where you can; the API handles things uniformly, so you don't have to worry about details like column order.

Make sure to use glMatrixMode(GL_MODELVIEW)
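[Editor's note] Either way, the order matters: with column vectors, glLoadMatrixd(view) followed by glMultMatrixd(world) yields view * world, which applies the world transform to a vertex first and the view transform second. A minimal sketch of that composition in plain C++ (mulColumnMajor and makeTranslate are hypothetical helpers, no GL calls):

```cpp
#include <cassert>

// Hypothetical helper: multiply two 4x4 matrices stored column-major,
// C = A * B, matching what glMultMatrixd does to the current matrix.
void mulColumnMajor(const double A[16], const double B[16], double C[16])
{
    for (int j = 0; j < 4; ++j)          // column of B / C
        for (int i = 0; i < 4; ++i) {    // row of A / C
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += A[i + 4*k] * B[k + 4*j];
            C[i + 4*j] = s;
        }
}

// Column-major translation matrix: identity with t in elements 12..14.
void makeTranslate(double tx, double ty, double tz, double m[16])
{
    for (int i = 0; i < 16; ++i)
        m[i] = (i % 5 == 0) ? 1.0 : 0.0;  // 1.0 on the diagonal (0, 5, 10, 15)
    m[12] = tx; m[13] = ty; m[14] = tz;
}
```

For a camera at <0, 0, 10> (view translation -10 in z) and an object at world z = -4, the composite places the object at view-space z = -14.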

Hi,

Thanks for the tips.

Re your comments about the view matrix, according to my math book, if you are using column vectors, e.g. v' = m*v, then the view matrix should look like:


/*
right_x, right_y, right_z, -(position dot right)
up_x,    up_y,    up_z,    -(position dot up)
look_x,  look_y,  look_z,  -(position dot look)
0,       0,       0,       1
*/

I also understand that element (i, j), i.e. the ith row and jth column, of an OpenGL matrix is stored at array index i + 4j, so columns are contiguous in memory. Keeping both these things in mind, I rewrote my View_Matrix function:


const GLdouble* Camera_Class::View_Matrix(void)
{
	/*
	Regular (row-major) matrix layout:

	 0,  1,  2,  3,
	 4,  5,  6,  7,
	 8,  9, 10, 11,
	12, 13, 14, 15

	OpenGL (column-major) layout:

	0, 4,  8, 12,
	1, 5,  9, 13,
	2, 6, 10, 14,
	3, 7, 11, 15

	The first row of the matrix occupies elements 0, 4, 8, 12 - not 0, 1, 2, 3.
	*/

	if (m_View_Matrix_Dirty == false)
	{
		return m_View_Matrix;
	}

	// First row of OpenGL matrix
	m_View_Matrix[0]  = m_Right_Vector[0];
	m_View_Matrix[4]  = m_Right_Vector[1];
	m_View_Matrix[8]  = m_Right_Vector[2];
	m_View_Matrix[12] = -1.0 * (m_Position_Vector * m_Right_Vector);
	// Second row of OpenGL matrix
	m_View_Matrix[1]  = m_Up_Vector[0];
	m_View_Matrix[5]  = m_Up_Vector[1];
	m_View_Matrix[9]  = m_Up_Vector[2];
	m_View_Matrix[13] = -1.0 * (m_Position_Vector * m_Up_Vector);
	// Third row of OpenGL matrix
	m_View_Matrix[2]  = m_Look_Vector[0];
	m_View_Matrix[6]  = m_Look_Vector[1];
	m_View_Matrix[10] = m_Look_Vector[2];
	m_View_Matrix[14] = -1.0 * (m_Position_Vector * m_Look_Vector);
	// Fourth row of OpenGL matrix
	m_View_Matrix[3]  = 0.0;
	m_View_Matrix[7]  = 0.0;
	m_View_Matrix[11] = 0.0;
	m_View_Matrix[15] = 1.0;

	m_View_Matrix_Dirty = false;

	return m_View_Matrix;
}



If the camera is positioned at the origin, this produces a view matrix that is an identity matrix.


[ 1, 0, 0, 0]
[ 0, 1, 0, 0]
[ 0, 0, -1, 0]
[ 0, 0, 0, 1]



My object is positioned 4 units in front of the camera. Its world matrix is:


[ 1, 0, 0, 0]
[ 0, 1, 0, 0]
[ 0, 0, 1, -4]
[ 0, 0, 0, 1]



In the paint function, I multiply the view matrix by the world matrix, then paint the object (a 2x2 cube).

e.g. Result_Matrix = View_Matrix * World_Matrix

The result is a black screen.

Setting the object's z position to +4 made the cube visible.

This is a bit confusing because I understood that in OpenGL the +z-axis extends "out of the screen", and the -z-axis into the screen. To move an object in the direction of the world z-axis I'd normally do something like (<0, 0, -1> * distance) + object_position, but that does not work in this case, or my view matrix is still wrong :(
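[Editor's note] A quick numeric check of this symptom (plain C++; viewZ is a hypothetical helper, not from the post): with the camera at the origin, a view matrix whose third row is the raw look vector <0, 0, -1> sends a point at world z = -4 to view-space z = +4, i.e. behind the camera, while the negated row (what gluLookAt builds) sends it to -4, in front.

```cpp
#include <cassert>

// With the camera at the origin there is no translation term, so the
// view-space z of a point is just the dot product of the view matrix's
// third row with the point.
double viewZ(const double thirdRow[3], const double p[3])
{
    return thirdRow[0]*p[0] + thirdRow[1]*p[1] + thirdRow[2]*p[2];
}
```

With view-space z = +4 the cube lies outside the near/far range of [1, 1000] in front of the camera, so nothing is drawn, which is consistent with the black screen.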

Anyway,

Thanks again

Your identity matrix has a -1 for the z-axis, so it's not the identity. In any system, regardless of the +z direction, the identity is still all 1's.

Therefore, you had to switch your translation to this "new system" where you flipped the z-axis.

Quote:
Original post by dpadam450
GL stores things in column order your matrix should be:
That's actually not quite true...

Quote:

Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it's clearly stated.

Sadly, the use of column-major format in the spec and blue book has resulted in endless confusion in the OpenGL programming community. Column-major notation suggests that matrices are not laid out in memory as a programmer would expect.
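[Editor's note] The quoted point can be checked numerically (plain C++; mulColVec and mulVecRow are hypothetical helpers): post-multiplying a column vector by the 16 floats read column-major gives exactly the same result as pre-multiplying a row vector by the same 16 floats read row-major, so the one memory layout, with the translation in elements 12-14, serves either notation.

```cpp
#include <cassert>

// v' = M * v with M stored column-major (OpenGL layout), v a column vector.
void mulColVec(const double M[16], const double v[4], double out[4])
{
    for (int i = 0; i < 4; ++i) {
        out[i] = 0.0;
        for (int k = 0; k < 4; ++k)
            out[i] += M[i + 4*k] * v[k];
    }
}

// v' = v * M with the SAME 16 floats interpreted row-major, v a row vector.
void mulVecRow(const double v[4], const double M[16], double out[4])
{
    for (int j = 0; j < 4; ++j) {
        out[j] = 0.0;
        for (int k = 0; k < 4; ++k)
            out[j] += v[k] * M[4*k + j];
    }
}
```

Reading the same array row-major is taking the transpose, and pre-multiplying a row vector by the transpose equals post-multiplying a column vector by the original, so the two conventions agree element for element.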

Yeah, I've never heard of that, but it doesn't make sense, because you can't have both unless there is an option to say "hey, I want them row-major now". I see what it's saying, but any time you print a GL matrix, it's always column-major, even in GLSL. So can you tell GL how you want it?

Quote:
Original post by dpadam450
Your identity matrix has a -1 for the z-axis, so it's not the identity. In any system, regardless of the +z direction, the identity is still all 1's.

Therefore, you had to switch your translation to this "new system" where you flipped the z-axis.


Hi,

OK - I set my camera's look vector to <0, 0, 1>, the camera's position to <0, 0, 1>, and the cube's position to <0, 0, -2>.

If +z coordinates are behind the origin, and -z coordinates are in front of the origin, I can visualise this in my head, but the camera's look vector is pointing away from the cube - not at it.

In D3D, if the camera's look vector is <0, 0, 1>, then (a) <0, 0, 10> is a point in front of the origin, and (b) <0, 0, -10> is a point behind the origin.

<0, 0, 0> + (camera_look_vector * 10) = <0, 0, 10> (a).

I would expect things to work the same in OpenGL i.e.

Look vector = <0, 0, -1>
a = <0, 0, -10> // in front of the origin
b = <0, 0, 10> // behind the origin

<0, 0, 0> + (camera_look_vector * 10) = <0, 0, -10> (a)

Do you see what I'm getting at? I'm just trying to get this sorted in my head.

Thanks
OK, you're having coordinate problems unrelated to OpenGL.

Imagine your origin is the middle of your screen. If your look vector is <0, 0, 1>, then you're looking down the positive z-axis, or outward from your PC screen. You're only going to see things that have a positive z-coordinate.

Again, you have this -z concept because you're saying "OpenGL's +z is DX's -z", which is true... IF you want to describe GL in terms of DX.

So given this frame, if your camera matrix is the identity matrix, and remembering that -z is the vector into the screen, then anything you see needs to lie along the -z axis.

Got it? Again, in your case here, your camera is looking out towards your face, and your cube is behind your PC screen... complete opposite directions.


OpenGL frame

y
|
|
|______ x
/
/z

Quote:
Original post by dpadam450
OK, you're having coordinate problems unrelated to OpenGL.

Imagine your origin is the middle of your screen. If your look vector is <0, 0, 1>, then you're looking down the positive z-axis, or outward from your PC screen. You're only going to see things that have a positive z-coordinate.

Again, you have this -z concept because you're saying "OpenGL's +z is DX's -z", which is true... IF you want to describe GL in terms of DX.

So given this frame, if your camera matrix is the identity matrix, and remembering that -z is the vector into the screen, then anything you see needs to lie along the -z axis.

Got it? Again, in your case here, your camera is looking out towards your face, and your cube is behind your PC screen... complete opposite directions.


OpenGL frame

y
|
|
|______ x
/
/z


Thanks for the clarification.

In view space, the camera's vectors are aligned with <1, 0, 0>, <0, 1, 0>, and <0, 0, 1>, and objects need a negative z coordinate to be visible, because OpenGL's eye space is right-handed with the camera looking down the -z axis.

Anyway, things seem to be working now.


/* transform matrix */
Vector3f n = view - eye; //look
Vector3f u = n.Cross(up); //right

//Vector3f v = n.Cross(u); //left-handed - ???
Vector3f v = u.Cross(n); //right-handed - currently I used this

u = u.Normalize();
v = v.Normalize();
n = n.Normalize();

/* negate n so it maps to -z in OpenGL (the camera looks down the negative z-axis) */
n = -n; //the basis {u, v, n} is now left-handed

//translation
Vector3f t = Vector3f(-eye.Dot(u), -eye.Dot(v), -eye.Dot(n));

/* interpreting as column-major like this in OpenGL
* ( u.x, u.y, u.z, t.x,
* v.x, v.y, v.z, t.y,
* n.x, n.y, n.z, t.z,
* 0 , 0, 0, 1 ) */


GLfloat m[] = { //so the matrix must be like this:
u.x, v.x, n.x, 0,
u.y, v.y, n.y, 0,
u.z, v.z, n.z, 0,
t.x, t.y, t.z, 1
};



I tested this transformation and got the same result as the gluLookAt method.
But now I'm having a problem deriving this for the billboard. The billboard code is like this:


Vector3f n = vecView; //vector from object of billboard to camera
Vector3f u = vecUp.Cross(n); /* !!! Point to the left??? */
Vector3f v = n.Cross(u);

u = u.Normalize();
v = v.Normalize();
n = n.Normalize();

//no translation yet
//Vector3f t = Vector3f(-posObj.Dot(u), -posObj.Dot(v), -posObj.Dot(n));

/* interpreting as column-major like this
* ( u.x, u.y, u.z, t.x,
* v.x, v.y, v.z, t.y,
* n.x, n.y, n.z, t.z,
* 0 , 0, 0, 1 ) */

/* THIS ONE DOES NOT WORK!!!
GLfloat m[] = {
u.x, v.x, n.x, 0,
u.y, v.y, n.y, 0,
u.z, v.z, n.z, 0,
t.x, t.y, t.z, 1
};*/


/* BUT THIS ONE WORKS!!! -> yet its rotation part (u, v, n) is in the "wrong" positions. */

GLfloat billboardMat[] = {
u.x, u.y, u.z, 0,
v.x, v.y, v.z, 0,
n.x, n.y, n.z, 0,
t.x, t.y, t.z, 1
};


The billboard matrix is like this. I thought billboardMat's elements were in the wrong places, but it produces the correct billboard result.

Conversely, if I use the camera transformation matrix without the line n = -n, I still get the billboard effect, but the rotation about the Y axis is wrong (the image is upside down).

It looks confusing. Could anyone give me some hints? :)

Thanks in advance.
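[Editor's note] One way to see why the "transposed" layout works (plain C++; mul3 and transpose3 are hypothetical helpers): billboardMat writes u, v, n where the view matrix writes them transposed, and for an orthonormal basis the transpose is the inverse. A billboard needs exactly the inverse of the camera rotation so the two cancel and the quad stays screen-facing. A minimal check that transpose equals inverse for an orthonormal rotation:

```cpp
#include <cassert>
#include <cmath>

// Multiply two 3x3 matrices stored row-major: C = A * B.
void mul3(const double A[9], const double B[9], double C[9])
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            C[3*i + j] = 0.0;
            for (int k = 0; k < 3; ++k)
                C[3*i + j] += A[3*i + k] * B[3*k + j];
        }
}

// Transpose a 3x3 row-major matrix.
void transpose3(const double A[9], double T[9])
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            T[3*i + j] = A[3*j + i];
}
```

This also suggests why skipping n = -n flips the image: it changes the handedness of the basis, so one axis of the billboard's rotation ends up mirrored.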
