OpenGL: last column of matrix multiplication



It seems OpenGL concatenates matrices like this:

void Matrix::postmult( const Matrix& matrix )
{
    float newMatrix[16];

    const float *a = m_matrix, *b = matrix.m_matrix;

    newMatrix[0]  = a[0] * b[0]  + a[4] * b[1]  + a[8] * b[2]   + a[12] * b[3];
    newMatrix[1]  = a[1] * b[0]  + a[5] * b[1]  + a[9] * b[2]   + a[13] * b[3];
    newMatrix[2]  = a[2] * b[0]  + a[6] * b[1]  + a[10] * b[2]  + a[14] * b[3];
    newMatrix[3]  = a[3] * b[0]  + a[7] * b[1]  + a[11] * b[2]  + a[15] * b[3];

    newMatrix[4]  = a[0] * b[4]  + a[4] * b[5]  + a[8] * b[6]   + a[12] * b[7];
    newMatrix[5]  = a[1] * b[4]  + a[5] * b[5]  + a[9] * b[6]   + a[13] * b[7];
    newMatrix[6]  = a[2] * b[4]  + a[6] * b[5]  + a[10] * b[6]  + a[14] * b[7];
    newMatrix[7]  = a[3] * b[4]  + a[7] * b[5]  + a[11] * b[6]  + a[15] * b[7];

    newMatrix[8]  = a[0] * b[8]  + a[4] * b[9]  + a[8] * b[10]  + a[12] * b[11];
    newMatrix[9]  = a[1] * b[8]  + a[5] * b[9]  + a[9] * b[10]  + a[13] * b[11];
    newMatrix[10] = a[2] * b[8]  + a[6] * b[9]  + a[10] * b[10] + a[14] * b[11];
    newMatrix[11] = a[3] * b[8]  + a[7] * b[9]  + a[11] * b[10] + a[15] * b[11];

    newMatrix[12] = a[0] * b[12] + a[4] * b[13] + a[8] * b[14]  + a[12] * b[15];
    newMatrix[13] = a[1] * b[12] + a[5] * b[13] + a[9] * b[14]  + a[13] * b[15];
    newMatrix[14] = a[2] * b[12] + a[6] * b[13] + a[10] * b[14] + a[14] * b[15];
    newMatrix[15] = a[3] * b[12] + a[7] * b[13] + a[11] * b[14] + a[15] * b[15];

    set( newMatrix );
}

However, this doesn't give me a w component equal to 1 under an orthographic projection.

My directional light matrix (that maps to texture space) ends up looking like a perspective projection. This also gives me wrong results when getting the screen position of 3D objects. To get pixel-perfect orthographic results I must either manually set the transformed vector's w component to 1 or use this multiplication:

void Matrix::postmult2( const Matrix& matrix )  // gives correct results, use this
{
    float newMatrix[16];

    const float *m1 = m_matrix, *m2 = matrix.m_matrix;

    newMatrix[0]  = m1[0]*m2[0] + m1[4]*m2[1] + m1[8]*m2[2];
    newMatrix[1]  = m1[1]*m2[0] + m1[5]*m2[1] + m1[9]*m2[2];
    newMatrix[2]  = m1[2]*m2[0] + m1[6]*m2[1] + m1[10]*m2[2];
    newMatrix[3]  = 0;

    newMatrix[4]  = m1[0]*m2[4] + m1[4]*m2[5] + m1[8]*m2[6];
    newMatrix[5]  = m1[1]*m2[4] + m1[5]*m2[5] + m1[9]*m2[6];
    newMatrix[6]  = m1[2]*m2[4] + m1[6]*m2[5] + m1[10]*m2[6];
    newMatrix[7]  = 0;

    newMatrix[8]  = m1[0]*m2[8] + m1[4]*m2[9] + m1[8]*m2[10];
    newMatrix[9]  = m1[1]*m2[8] + m1[5]*m2[9] + m1[9]*m2[10];
    newMatrix[10] = m1[2]*m2[8] + m1[6]*m2[9] + m1[10]*m2[10];
    newMatrix[11] = 0;

    newMatrix[12] = m1[0]*m2[12] + m1[4]*m2[13] + m1[8]*m2[14] + m1[12];
    newMatrix[13] = m1[1]*m2[12] + m1[5]*m2[13] + m1[9]*m2[14] + m1[13];
    newMatrix[14] = m1[2]*m2[12] + m1[6]*m2[13] + m1[10]*m2[14] + m1[14];
    newMatrix[15] = 1;

    set( newMatrix );
}

Notice that it leaves the last row as (0,0,0,1) and drops the last term from each sum.

Is this because I am constructing my ortho projection incorrectly?

Matrix OrthoProj(float l, float r, float t, float b, float n, float f)
{
    float m[16];

#define M(row,col)  m[col*4+row]
    M(0, 0) = 2 / (r - l);
    M(0, 1) = 0;
    M(0, 2) = 0;
    M(0, 3) = 0;

    M(1, 0) = 0;
    M(1, 1) = 2 / (t - b);
    M(1, 2) = 0;
    M(1, 3) = 0;

    M(2, 0) = 0;
    M(2, 1) = 0;
    M(2, 2) = -1 / (f - n);
    //M(2, 2) = -2 / (f - n);
    M(2, 3) = 0;

    M(3, 0) = -(r + l) / (r - l);
    M(3, 1) = -(t + b) / (t - b);
    M(3, 2) = -n / (f - n);
    //M(3, 2) = -(f + n) / (f - n);
    M(3, 3) = 1;
#undef M

    Matrix mat;
    mat.set(m);

    return mat;
}

Notice that some lines are commented out, because the references I read gave different answers for those entries. One source has:

template <typename T>
void setothographicmat(float l, float r, float t, float b, float n, float f, Matrix4<T> &mat)
{
    mat[0][0] = 2 / (r - l);
    mat[0][1] = 0;
    mat[0][2] = 0;
    mat[0][3] = 0;

    mat[1][0] = 0;
    mat[1][1] = 2 / (t - b);
    mat[1][2] = 0;
    mat[1][3] = 0;

    mat[2][0] = 0;
    mat[2][1] = 0;
    mat[2][2] = -1 / (f - n);
    mat[2][3] = 0;

    mat[3][0] = -(r + l) / (r - l);
    mat[3][1] = -(t + b) / (t - b);
    mat[3][2] = -n / (f - n);
    mat[3][3] = 1;
}

and

http://www.songho.ca/opengl/gl_projectionmatrix.html



I'm not sure about your matrix concatenation or row/column ordering, but my code and the OpenGL Red Book both use the two lines you have commented out:

M(2, 2) = -2 / (f - n);
M(3, 2) = -(f + n) / (f - n);

Concatenated affine transformations always yield another affine transformation (and an ortho projection is one), so the last row stays (0,0,0,1) (or the last column, if row vectors are used), and the w of a transformed vector should stay 1 as well. My bet is a row/column mix-up in the construction: the first link is confusing in this regard, since it mixes the row/column vector convention with row-/column-major memory layout.


Quoting unbird:

"Concatenated affine transformations will result in an affine transformation (and an ortho is one), so the last row stays (0,0,0,1), (Edit: or column if row vectors are used) and w of a transformed vector should stay 1 as well. My bet is some row/column mixup in the construction: The first link is confusing in this regard (mixing row/column vector convention with row/column-major layout)."

+1

http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-4-geometry/conventions-again-row-major-vs-column-major-vector/

I've screwed up every Matrix class I have EVER written from scratch. And every time I say to myself, "OK. This time I'll get the row/column-major stuff right!"

In addition to what NumberXaero said, your matrix is also transposed. Try switching rows and cols.
