Orthographic matrix not working

12 comments, last by 21st Century Moose 11 years, 1 month ago

This is how I send it to the shader. I know this part works, since the perspective projection matrix works, but the ortho produces weird results.

As for my previous projection matrix - this is my first projection matrix - so I am not multiplying it by anything before sending it to the shader. Should I be?


// upload matrices to uniform variables in the shader
GL20.glUseProgram(shaderObject.pId);

projectionMatrix.store(matrix44Buffer);
matrix44Buffer.flip();
GL20.glUniformMatrix4(projectionMatrixLocation, false, matrix44Buffer);

viewMatrix.store(matrix44Buffer);
matrix44Buffer.flip();
GL20.glUniformMatrix4(viewMatrixLocation, false, matrix44Buffer);

modelMatrix.store(matrix44Buffer);
matrix44Buffer.flip();
GL20.glUniformMatrix4(modelMatrixLocation, false, matrix44Buffer);

GL20.glUseProgram(0);
 

I managed to figure this out - apparently I was not accessing the matrix elements correctly. I needed to transpose the accessors (m03 instead of m30, for example, for the translation components). All is well now! Thanks everyone.

I noticed you're transforming your matrices as so:

projectionMatrix * viewMatrix * modelMatrix

This is also how I do it, but a lot of people do it the opposite way (and I think glOrtho does too). If that is the case, then the glOrtho() code provided above may be correct, but it just has to be transposed. Here's my camera's ortho code:


void Camera::Ortho(float width, float height, float zNear, float zFar)
	{
		Ortho(0.0f, width, 0.0f, height, zNear, zFar);
	}

	void Camera::Ortho(float left, float right, float top, float bottom, float zNear, float zFar)
	{
		// find the translation vector
		const float tx = - (right + left)/(right - left);
		const float ty = - (top + bottom)/(top - bottom);
		const float tz = - (zFar + zNear)/(zFar - zNear);
		
		// column 1
		projMat.m[ 0] = 2.0f / (right - left);
		projMat.m[ 1] = 0;
		projMat.m[ 2] = 0;
		projMat.m[ 3] = 0;
		
		// column 2
		projMat.m[ 4] = 0;
		projMat.m[ 5] = 2.0f / (top - bottom);
		projMat.m[ 6] = 0;
		projMat.m[ 7] = 0;
		
		// column 3
		projMat.m[ 8] = 0;
		projMat.m[ 9] = 0;
		projMat.m[10] = -2.0f / (zFar - zNear);
		projMat.m[11] = 0;
		
		// column 4
		projMat.m[12] = tx;
		projMat.m[13] = ty;
		projMat.m[14] = tz;
		projMat.m[15] = 1;
		
		mode = PROJECTION_MODE_ORTHOGONAL;
		Update(0.0f); // update the camera
	}

The first method is just a shortcut I created where the upper-left corner of the screen is treated as the origin. The second method is what would emulate glOrtho() from the old OpenGL days, only it should be multiplied first like you have it.

Also, keep in mind that the matrix elements are stored as an array of 16 floats on a column-by-column basis:


m[ 0] m[ 4] m[ 8] m[12]
m[ 1] m[ 5] m[ 9] m[13]
m[ 2] m[ 6] m[10] m[14]
m[ 3] m[ 7] m[11] m[15]

When you feed the matrix as a 4x4 to OpenGL, make sure that the transpose argument is GL_FALSE. That's very important.

When you feed the matrix as a 4x4 to OpenGL, make sure that the transpose argument is GL_FALSE. That's very important.

Not really; please see http://www.opengl.org/archives/resources/faq/technical/transformations.htm

Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it's clearly stated.

Also

Sadly, the use of column-major format in the spec and blue book has resulted in endless confusion in the OpenGL programming community. Column-major notation suggests that matrices are not laid out in memory as a programmer would expect.

OpenGL is perfectly capable of using either row-major or column-major matrices (and all the more so with the programmable pipeline); the only important thing is that you be consistent in your code. That just means getting the multiplication orders correct in both your C/C++ code and your shader code. You don't have to use the same major-ness in your shaders as you use in your C/C++: either transpose before sending (either manually or by using GL_TRUE in your glUniformMatrix call) or flip the multiplication order in your shader code.

So what's actually important is that you know which major-ness each element of your code uses and that you set things up appropriately for that; otherwise the importance of row-major versus column-major is hugely overstated.


This topic is closed to new replies.
