Basic matrix question

This topic is 682 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hi Guys,

I have decided to take the plunge and try to de-mystify matrices in my own mind.

I have managed to write a simple calculator to multiply two matrices together to give an output.

So multiplying a simple identity matrix against a translation matrix gives me this (assuming the translation is 5, 6, 7 in x, y, z for learning purposes):

[quote]
1 0 0 0
0 1 0 0
0 0 1 0
5 6 7 1
[/quote]

If I were to now use the x, y, and z co-ordinates for this single vertex (assuming the vertex is originally 0,0,0) would the location of the new vertex always be stored in elements 12, 13, and 14?

If so, when I start playing with rotation matrices etc. and start getting numbers that aren't easy to predict (as they are above), is the new x, y, and z of the vertex always going to be in elements 12, 13, and 14?

Sorry if this sounds overly basic and noobish, but I literally started learning matrices around 20 minutes ago :)

It depends on how the matrix is ordered in memory, but if it is
0 1 2 3
4 5 6 7
8 9 10 11
12 13 14 15

then your position will be at 12,13,14.

The trick about homogeneous matrices in game dev is that you can read the important vectors (position, look-at, right, up) directly from the matrix most of the time.
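To make the layout concrete, here is a minimal pure-Python sketch, assuming the row-major, row-vector convention discussed above; the helper names `make_translation` and `position` are made up for illustration, not from any library:

```python
# A 4x4 transform stored as a flat, row-major list of 16 floats.
# Assumed layout: element (row, col) lives at index row * 4 + col,
# so the translation row occupies indices 12, 13 and 14.

def make_translation(tx, ty, tz):
    """Identity matrix with the translation in the bottom row."""
    return [1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 0.0,
            tx,  ty,  tz,  1.0]

def position(m):
    """Read the position vector straight out of the matrix."""
    return (m[12], m[13], m[14])

t = make_translation(5.0, 6.0, 7.0)
print(position(t))  # -> (5.0, 6.0, 7.0)
```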

But before you break your fingers by reinventing the wheel (a.k.a. writing your own math lib), try a math library like glm (free, header-only, no lib linking necessary!).



If I were to now use the x, y, and z co-ordinates for this single vertex (assuming the vertex is originally 0,0,0) would the location of the new vertex always be stored in elements 12, 13, and 14?
A vertex is a 1x4 matrix, or a 4x1 matrix (depending on which mathematical convention you're using).

Depending on your convention, a transformation matrix multiplied by a vertex looks like:

1x4 * 4x4 = 1x4

4x4 * 4x1 = 4x1

 

i.e. your matrix has 16 elements, but your input vertex and your result only have 4 elements each.
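As a sketch of what that multiplication looks like with the row-vector (1x4 * 4x4) convention from the original post; `vec_mat_mul` is a hypothetical helper name, not a library function:

```python
def vec_mat_mul(v, m):
    """Multiply a 1x4 row vector by a 4x4 matrix (list of rows) -> 1x4."""
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

translation = [[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [5, 6, 7, 1]]

vertex = [0, 0, 0, 1]  # the original (0, 0, 0) vertex, with w = 1
print(vec_mat_mul(vertex, translation))  # -> [5, 6, 7, 1]
```

Note that the input and the result are both 4-element vectors, even though the matrix itself has 16 elements.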

I think I get what you are saying there.

In other words, fewer calculations, because it is a transform matrix?

I was playing around with the full 16 elements as I'll move on to trying out rotations etc next and wanted to make sure I was going about the whole thing correctly.

But (if I understand your post correctly) you can abbreviate some of the calculations somewhat. Right?


In other words, fewer calculations, because it is a transform matrix?

No. You were talking about vertices... which are very different from matrices. I assumed you were talking about transforming a vertex using a matrix (which is what my post is about).


Yep, I was talking about verts as well.

Obviously I am missing something. As I said completely new to matrix calculations.

You should learn about matrix calculation, at least about the special subset of homogeneous 4x4 matrices typically used in 3D game development: the special matrices for rotation, translation and scaling, the inverse, order of multiplication, accessing the position/look-at/up/right vectors of a homogeneous 4x4 matrix, the difference between multiplying a homogeneous 4x4 matrix by an (x, y, z, 0) versus an (x, y, z, 1) vector, etc. This should really be the baseline for coding a 3D game.
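The last point, the difference between w = 0 and w = 1, can be sketched in a few lines of plain Python (no library; `transform` is just an illustrative helper using the row-vector convention from earlier in the thread):

```python
def transform(v, m):
    """Row vector (1x4) times 4x4 matrix (m is a list of 4 rows)."""
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

translate = [[1, 0, 0, 0],
             [0, 1, 0, 0],
             [0, 0, 1, 0],
             [5, 6, 7, 1]]

point     = [1, 2, 3, 1]  # w = 1: positions are affected by translation
direction = [1, 2, 3, 0]  # w = 0: directions ignore translation

print(transform(point, translate))      # -> [6, 8, 10, 1]
print(transform(direction, translate))  # -> [1, 2, 3, 0]
```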

You should use a lib to get a fast and robust implementation; still, you need to learn how and when to apply each type of matrix.

It's definitely worth demystifying further. I suggest you use one of the first chapters of the Frank D. Luna D3D11 book; it covers all the basics and some useful exercises.

Also note that in theory there can be a difference between row-major and column-major (depending on the API and/or settings/HLSL usage).


It's definitely worth demystifying further. I suggest you use one of the first chapters of the Frank D. Luna D3D11 book; it covers all the basics and some useful exercises.
Also note that in theory there can be a difference between row-major and column-major (depending on the API and/or settings/HLSL usage).


Hehe - that was the first book I picked up this afternoon. I have all of his books. I admit I was guilty of glossing over the math section at the start, in the past, to get into the 'cool stuff'. Now older (nearly 41 - geez where did that time go?) and wiser I am returning to get a better understanding of all of the math behind it. The journey has been great so far, it has actually cleared up a lot of things.

@DuckFlock - thanks for the extensive write up too. I'll be reading through that in the morning (bed time now).

Thanks again for your help guys :)


Alright, let's go on. We have learned about vectors and matrices and basic operations between them. We have seen that we can write a system of linear equations (no higher powers of variables, no multiplication of variables) as a single matrix*vector = vector equation:

A * u = b

For this equation to make sense, A, u and b must fulfill some constraints on their dimensions. To recall, we have seen that we can only multiply a matrix in R^(m x n) with a vector in R^n. There is a simple rule to check this: write out the dimensions in order, like this:

(m x n) * (n) = (m x n) * (n x 1) = m x [n n] x 1 = m x 1 = m

You check that the inner parts (the "touching" parts) have the same dimension and remove them. This works for larger multiplications, too:

(m x n) * (n x k) * (k x 1) = m x [n n] x [k k] x 1 = m x 1 = m

Note that we can skip the "x 1" part -- in some sense a vector in R^n is nothing more than a matrix of dimension n x 1, i.e. in R^(n x 1). This is useful for reasoning about dimensions. If you don't get the "x 1" part right now, don't worry about it -- the next section expands on the concepts behind this.
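The dimension rule translates directly into code. Here is a small Python sketch (the `mat_mul` helper is made up for illustration) that checks the inner, "touching" dimensions before multiplying:

```python
def mat_mul(a, b):
    """Multiply an m x n matrix by an n x k matrix (lists of rows),
    checking the inner ("touching") dimensions first, as in the rule above."""
    m, n = len(a), len(a[0])
    n2, k = len(b), len(b[0])
    if n != n2:
        raise ValueError(f"inner dimensions differ: {n} vs {n2}")
    return [[sum(a[i][p] * b[p][j] for p in range(n)) for j in range(k)]
            for i in range(m)]

A = [[2, 1],
     [1, 1]]          # 2 x 2
u = [[1], [2]]        # 2 x 1 column vector
print(mat_mul(A, u))  # -> [[4], [3]]  (2 x 1: the outer dimensions remain)
```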

 

In fact, at this point it is useful to make a clearer distinction between row and column vectors.

 

Row vs column vectors

 

Right now you may ask "what is the difference between row vs column vectors anyways?" Rejoice, we will look at that right now! First and foremost, a warning though: We are talking about math, not programming. Don't confuse this discussion with row-major vs column-major, which defines the matrix element order in computer memory. It has nothing to do with how we write matrices in math. When talking about mathematics we always write matrices in the way I have introduced them. Storing matrices in computers is an implementation detail that often gets mixed in with the concept of row and column vectors, but has nothing to do with it.

 

That being said, what are row and column vectors? To recall; a column vector is written as such:

    | v_1 |
v = | v_2 |
    | v_3 |

A row vector looks like this:

u = | u_1  u_2  u_3 |

It is not hard to see that they describe the same concept; both of these vectors are a list of 3 numbers and nothing more. Now that we know about matrices (which are generalizations of vectors), there is a general context we can embed row and column vectors in.

 

As I have already mentioned, we can interpret vectors as special matrices; a column vector is simply a matrix with one column, while a row vector is a matrix with only one row. In mathematical terms, we can interpret the column vector v given above as a matrix in R^(3 x 1) -- it has 3 rows of 1 column each. In the same way we can interpret the row vector u as a matrix in R^(1 x 3) -- one row, three columns.

 

Notice that there is a consequence to using row vectors instead of column vectors, though: The whole notation switches around! In fact, it does not make sense, given a matrix and row vector

    | b_11 b_12 |
B = | b_21 b_22 |, u = | x y |

to compute the product B * u. Why is that? Just check with the rule given above: B lives in R^(2 x 2), while u lives in R^(1 x 2), not in R^2, because it is a row vector!

Check the rule for B * u: R^(2 x 2) * R^(1 x 2) -> (2 x 2) * (1 x 2). This does not work, because the inner dimensions (2 and 1) do not agree! B * u makes no sense.

 

On the other hand, we can now look at u * B:

                  | b_11 b_12 |
u * B = | x y | * | b_21 b_22 | = | x*b_11 + y*b_21   x * b_12 + y * b_22 |

Consider that for a vector equation to hold all entries on both sides must be equal. By now you should at least have a guess how to write our trusty old equation system from way above,

2*x + y = 4
  x + y = 3

as a product of an equation with row vector times matrix: we want to find

u * B = | 2*x + y  x + y | = | 4  3 |

I will leave that as an exercise.

 

Let's stop talking about row and column vectors for now. There is a distinction and it is relevant with respect to multiplication order.

You may ask yourself right now "What else is there to it? I can write systems as Matrix * column_vector = column_vector and as row_vector * Matrix = row_vector. They both describe the same math!". You're absolutely, positively correct. There are two ways to describe the same mathematical operations and obtain the same results. There are two conventions and both are equally valid. Which one do we use? Unfortunately, Microsoft chose, despite all better knowledge, the "wrong" one. This, paired with memory layout issues, leads to the excellent row/column major vs row/column vector confusion.

 

The canonical way to write vector/matrix math is to use column vectors. It makes working with systems of equations way easier in terms of notation and it is the status quo in mathematics. At the end of the day you can easily switch between both notations, but I will keep using the canonical notation -- and you should, too. No one will prevent you from using row vectors for your calculations, but you will get funny looks (I'm looking at you, Microsoft!).

 

Ok, so how do I solve a system of equations?

 

Recall our system of equations in matrix-vector form:

              | 2 1 |   | x |   | 4 |
A * u = b <=> | 1 1 | * | y | = | 3 |

Formally, we want to find the vector u that solves this equation. There is a handy concept that helps us: inversion.

 

This is a core concept in many problems in math: finding inverse objects. What is an inverse? Let's first look at multiplication in the real numbers. The inverse of a number a (which can't be 0) is the number that, multiplied by a, gives the value 1 -- let's call that number b for now. In equations:

given a, find b such that: a * b = b * a = 1

Well, the unique solution is of course 1/a. We say that 1/a is the inverse of a. A similar concept exists for matrices.

 

Here I will only talk about the inverse of an invertible square matrix in R^(d x d) -- notice that both dimensions must agree. There are more general concepts for non-square matrices that are of no interest to us here. We define the inverse B of a matrix A in the same way as in the case of real numbers:

A * B = B * A = Id,

where Id is shorthand for the identity matrix (the matrix with 1 on the diagonal entries and 0 everywhere else). Similarly to vectors, this equation is satisfied if it is satisfied in every single component of the matrices on both sides. If B satisfies both equations, we call B the inverse of A and write B = A^-1. We can interpret the inverse as the action that negates the effect of A:

                        | 1 0 |   | x |   | x |
A^-1 * A * u = Id * u = | 0 1 | * | y | = | y | = u.

Note that I have written an "if" in the above. Not every matrix has an inverse. This is related to the advanced concept of rank and the determinant of a matrix and I won't go into detail here. All matrices we're usually working with in 3D graphics will be invertible, so you can ignore this in the beginning.
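For 2x2 matrices the inverse can be written down in closed form. Here is a small Python sketch (the `inverse_2x2` helper is illustrative, not from a library) that computes it for our example matrix and verifies A^-1 * A = Id:

```python
def inverse_2x2(a):
    """Closed-form inverse of a 2x2 matrix; fails if the determinant is 0,
    which is exactly the non-invertible case mentioned above."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[ a22 / det, -a12 / det],
            [-a21 / det,  a11 / det]]

A = [[2, 1],
     [1, 1]]
A_inv = inverse_2x2(A)

# A^-1 * A should give the identity matrix:
ident = [[sum(A_inv[i][p] * A[p][j] for p in range(2)) for j in range(2)]
         for i in range(2)]
print(ident)  # -> [[1.0, 0.0], [0.0, 1.0]]
```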

 

Now, there is a formally rather easy way to solve systems of equations: we can manipulate vector equations in the same way we can manipulate normal equations, but the order of operations is important here -- if we multiply by something from the left, we have to multiply from the left on both sides. Look at the following transformation of our system; I hope you can follow it by now:

             A * u = b         | A^-1 * (multiply A^-1 from the left)
<=>  A^-1 * A  * u = A^-1 * b   
<=> (A^-1 * A) * u = A^-1 * b  | A^-1 * A = Id by definition
<=>     Id     * u = A^-1 * b  | Id * u = u (see above)
<=>              u = A^-1 * b

There you have it. To compute the solution u, we simply have to invert A and compute A^-1 * b! There are tons of algorithms (something that can be implemented in a computer!) to calculate the inverse of A. You can do it by hand, too -- Gaussian elimination is taught in many higher-level courses. The concrete way to calculate the inverse of a square matrix is not of too much interest right now -- you can certainly follow if you know that inverse(A) * A gives you the identity matrix.
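Putting it together, solving the example system via u = A^-1 * b can be sketched like this (again plain Python, with `solve_2x2` as a made-up helper name):

```python
def solve_2x2(a, b):
    """Solve A * u = b for a 2x2 matrix A by computing u = A^-1 * b."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("no unique solution")
    inv = [[ a22 / det, -a12 / det],
           [-a21 / det,  a11 / det]]
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

# 2x + y = 4
#  x + y = 3
print(solve_2x2([[2, 1], [1, 1]], [4, 3]))  # -> [1.0, 2.0]
```

Plugging back in: 2*1 + 2 = 4 and 1 + 2 = 3, so the solution checks out.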

 

Ok, so why would I use matrices and vectors?

 

A good question! Mainly, convenience. Instead of keeping track of individual equations we can easily keep track of a whole system of equations with a matrix and a vector for the right hand side. In fact, the most striking advantage is that there are several algorithms to solve systems of linear equations, which makes matrix math very well suited for computer use. I won't go into detail on this here, but I'll give you some keywords if you want to look into this: Gaussian elimination, LU decomposition, Cholesky decomposition, ... There are even iterative algorithms to compute solutions! Computers are really good at solving systems of equations and that is what we're using them for in applications of math most of the time.

 

 

This time I had to go into some technical details -- don't worry if you don't understand everything. You should know what an inverse matrix is and understand that there are some slight notational differences between row and column vectors, which is not related to this row major/column major stuff at all.

 

Again I'm out of time! Next time I want to talk about the nitty gritty details of 3d math - the stuff we're interested in. I will introduce some different classes of transformations and talk about their actions on vectors. Then I will finally tell you why we're using these 4-dimensional "affine" matrix transformations. Hopefully most of the things should click by then.


(unfortunately latex isn't working here, so this will have to do)

The forum does have some latex support with $$ markers:

a 1x4 matrix:
$$\begin{bmatrix} Vx & Vy & Vz & 1 \end{bmatrix}$$
or a 4x1 matrix:
$$\begin{bmatrix} Vx\\ Vy\\ Vz\\ 1 \end{bmatrix}$$



Whoops...I've found the eqn tags, but they produced nothing in the preview window. I simply assumed it didn't work.
 
Let me try:
 
$$\begin{bmatrix} Vx & Vy & Vz & 1 \end{bmatrix}$$
 
Edit: Nope, I can't get it to work. Is there something I'm missing?

 

Edit2: Now it's working...

Edited by duckflock

