Endar

OpenGL Help understanding OGL matrices


Got this code from v0.1 of the Irrlicht engine, and I need some help understanding the matrix parts. First of all, OpenGL matrices are column-major, right? That means that [2][3] would access the 4th element of the 3rd column, instead of the 4th element of the 3rd row. So all that's needed to convert a row-major matrix to a column-major one is a transpose, right? The projection matrix is used to convert from 3D space to 2D space on the screen, right? And the modelview matrix is what's used for the rotations and translations applied when things are drawn in 3D space, right? Apart from that, I need some more info, because I really feel like I know nothing about what they do and how to use them.
//! sets transformation
void CVideoOpenGL::setTransform(E_TRANSFORMATION_STATE state, const core::matrix4& mat)
{
	GLfloat glmat[16];
	Matrizes[state] = mat;

	switch(state)
	{
	case TS_VIEW:
	case TS_WORLD:
		// OpenGL only has a single modelview matrix; separate view and
		// world matrices don't exist, so fake them by combining the two.
		createGLMatrix(glmat, Matrizes[TS_VIEW] * Matrizes[TS_WORLD]);
		glMatrixMode(GL_MODELVIEW);
		glLoadMatrixf(glmat);
		break;
	case TS_PROJECTION:
		createGLMatrix(glmat, mat);

		// flip z to compensate for OpenGL's right-handed coordinate system
		glmat[12] *= -1.0f;

		glMatrixMode(GL_PROJECTION);
		glLoadMatrixf(glmat);
		break;
	}
}
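
For context, createGLMatrix isn't shown in the snippet. As far as I can tell it just copies the matrix4 elements into the flat GLfloat array that glLoadMatrixf wants, something like this (the flat-array access is my assumption about matrix4's layout):

// Rough sketch of createGLMatrix, assuming core::matrix4 keeps its 16
// floats contiguously and in the order OpenGL expects. If the layouts
// differed, this would have to transpose instead of copying straight.
void createGLMatrix(GLfloat gl_matrix[16], const core::matrix4& m)
{
	for (int i = 0; i < 16; ++i)
		gl_matrix[i] = (GLfloat)m[i];
}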


Matrices are so confusing, because there's not only the column-major vs row-major convention, but also the vertex-is-a-row-on-the-left vs vertex-is-a-column-on-the-right convention.

It so happens that OpenGL and DirectX are opposite in BOTH of these conventions, which means that the actual, in-memory layout of the matrix is the same (!)

I prefer to think of what actually happens. The translation part of the matrix is in offsets 12, 13 and 14 of the matrix; offset 15 contains the "1" at the lower-right corner. The vertex (x,y,z,1) gets dotted with offsets 0,4,8,12 to generate the output X coordinate; 1,5,9,13 to generate the Y coordinate, ...

It's easiest for me to think of this as the vertex being a row on the left and the matrix being stored row major. You can also think of it as the vertex being a column on the right, and the matrix being stored column major. The end result is: no conversion should be needed in 99% of the cases.
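
To make that concrete, here's a small sketch (the function name is mine) of transforming a point by a flat 16-float matrix stored the way glLoadMatrixf expects:

// Transform the point (x, y, z, 1) by a 4x4 matrix stored as a flat
// 16-float array. Each output component dots the vertex with a
// stride-4 slice of the array: X uses offsets 0,4,8,12; Y uses
// 1,5,9,13; and so on.
void transformPoint(const float m[16], const float in[3], float out[3])
{
	out[0] = m[0]*in[0] + m[4]*in[1] + m[8] *in[2] + m[12]; // X
	out[1] = m[1]*in[0] + m[5]*in[1] + m[9] *in[2] + m[13]; // Y
	out[2] = m[2]*in[0] + m[6]*in[1] + m[10]*in[2] + m[14]; // Z
	// w would be m[3]*x + m[7]*y + m[11]*z + m[15], which is 1 for a
	// plain rotation/translation matrix, so it's skipped here.
}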

Another consideration with row and column vectors is that they affect the order of operations. For example, with column vectors the sequence scale->rotate->translate would look like this:

v' = T*R*S*v

While with row vectors it would look like this:

v' = v*S*R*T

You'll need to take this into consideration whether you're writing your own math library or using somebody else's.
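
This is also why fixed-function OpenGL calls appear in "reverse" order: OpenGL uses column vectors and post-multiplies the current matrix, so each call ends up closest to the vertex. A quick sketch (the transform values are placeholders):

// scale -> rotate -> translate, i.e. v' = T*R*S*v with column vectors.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(2.0f, 0.0f, 0.0f);       // T: applied to the vertex last
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);   // R
glScalef(0.5f, 0.5f, 0.5f);           // S: applied to the vertex first
// ... draw geometry here ...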

This is IMHO, but I recommend not conceptually intermingling the underlying math with the matrix 'majorness'. The choice of row or column major is an implementation detail, not really a mathematical issue.

Mathematically, we think of matrices as being indexed by row and then by column. In code the indices are usually 0-based; in the literature, 1-based. For example, a 2x2 matrix would be:

[00 01]
[10 11]

I usually use 0-based indexing in examples, as that makes it easier to convert to code. Multiplying this by a column vector on the right:
[00 01][x]   [00x + 01y]
[10 11][y] = [10x + 11y]

With row vectors:

      [00 01]
[x y] [10 11] = [00x + 10y, 01x + 11y]
As you can see, the result is not the same. In short, which convention you choose (row or column vectors) will determine how your matrices are constructed and in what order you multiply them together.

Once you've got that down, you can consider the issue of majorness. When storing a matrix as a 1-dimensional array, you have a choice of whether the elements go by row and then column:

[0 1]
[2 3]

Or column and then row:

[0 2]
[1 3]

Again, this is a programming issue rather than a mathematical one. It will obviously affect how your code is written, but it in no way affects the underlying mathematics.
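
As a concrete sketch of the difference, here's how element (row r, column c) of a 4x4 matrix maps into a flat array under each convention, and why converting between them is just a transpose:

// Row-major: rows are contiguous in memory.
float rowMajorAt(const float m[16], int r, int c) { return m[r*4 + c]; }

// Column-major (what glLoadMatrixf expects): columns are contiguous.
float colMajorAt(const float m[16], int r, int c) { return m[c*4 + r]; }

// Converting between the two is a transpose:
void transpose4x4(const float in[16], float out[16])
{
	for (int r = 0; r < 4; ++r)
		for (int c = 0; c < 4; ++c)
			out[c*4 + r] = in[r*4 + c];
}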

Well, there's a lot to say about matrices, and I've probably just further confused the issue, but ask if you have further questions.

Again, some of this is MHO.

For me, the underlying matrix math isn't really the problem; it's more the function and use of each particular OGL matrix.

[Edited by - Endar on September 9, 2005 12:34:31 AM]

Correct, the projection matrix takes a 3d point and 'projects' it onto a 2d 'screen'. And yes, the modelview matrix is responsible for any of the transforms that occur in the 3d world.

ScreenPoint = ProjectionMatrix * ModelviewMatrix * 3dPoint

In the end, technically, the two matrices combine, and there is only one final matrix that needs to be multiplied with the 3D points to make the screen points. (There are other stages involved too, like the viewport transform, but I'm keeping this in scope.)
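
As an aside, GLU wraps this whole chain up for you: gluProject takes a 3D point plus the current matrices and viewport and hands back window coordinates (the point's coordinates below are placeholders):

// Project a 3D point to window coordinates using the current matrices.
GLdouble model[16], proj[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, viewport);

GLdouble objX = 0.0, objY = 0.0, objZ = 0.0;  // the 3D point to project
GLdouble winX, winY, winZ;
gluProject(objX, objY, objZ, model, proj, viewport, &winX, &winY, &winZ);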

I noticed in your code that there are comments saying the 'world' and 'view' matrices don't exist. That really isn't the case; it's just that they are combined into one "ModelViewMatrix". I'll give an example using a few transforms.

First let's say we have a world coordinate system. You know where (0,0,0) is, and you know which way the x, y, z axes point.

Now let's say you've created an object that has its own coordinate system, like a sphere defined about the origin. Now you want to place this sphere at a specific location (x, y, z). This would involve a translation matrix applied to the sphere's 'local system' to place it in the world coordinate system. All of this goes into making a "Model" matrix.

Finally, you define a camera in space that views this world coordinate system and the sphere. This gives you a picture of the scene from your point of view. Now, OpenGL by default has a 'viewing axis': the X axis goes across the screen from left to right, and the Y axis points up, which leaves the Z axis pointing out of the screen. So the trick is to align these 'viewing axes' with the camera's point of view. You'd align the -Z axis with the direction the camera points and the Y axis with its 'up' vector, then translate so the view originates from the camera's location. All of this goes into a "View" matrix.

You now take the Model and View matrices and multiply them together and get the resulting ModelViewMatrix.
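
In code that looks like this: the view transform (gluLookAt here) is issued first, so the model transform ends up closest to the vertices (camera and sphere positions are placeholders):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// View: align the viewing axes with the camera and move to its location.
gluLookAt(0.0, 2.0, 10.0,    // eye position
          0.0, 0.0, 0.0,     // point being looked at
          0.0, 1.0, 0.0);    // up vector

// Model: place the sphere's local coordinate system in the world.
glTranslatef(3.0f, 0.0f, -5.0f);
// ... draw the sphere ...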

The projection matrix is the second stage, and this defines how the 3D scene from that particular point of view is 'projected' onto the screen. Typically there are two ways of doing this: orthographic and perspective. Orthographic is like an architectural drawing of a 'side' or 'top' view of a building. Perspective models a pinhole camera, which causes objects that are further away to appear smaller and closer ones larger.
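
Setting either one up is only a couple of calls (the exact values are placeholders):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();

// Perspective: 60-degree vertical field of view, near/far clip planes.
gluPerspective(60.0, 4.0/3.0, 0.1, 1000.0);

// Or orthographic, for that 'architectural drawing' look:
// glOrtho(-10.0, 10.0, -10.0, 10.0, 0.1, 1000.0);

glMatrixMode(GL_MODELVIEW);  // switch back before issuing scene transforms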

Hopefully this is what you were looking for, I wasn't sure what sort of specific questions you had in regard to the matrices.
