DwarvesH

Oh no, not this topic again: LH vs. RH.

22 posts in this topic

So I'm porting my game over from XNA (RH) to SharpDX (LH or RH, but traditionally LH) and have finally arrived at the last phase: the terrain. Once this is done, porting will finally be behind me!

 

And here is where LH vs. RH really causes major changes. For the terrain I needed to do a ton of adjustments, even having to transpose the terrain just for the physics engine.

 

In the past I did not mind switching from RH to LH. I had to use different matrices and swap the Z in positions and normals at mesh load, and things were pretty much done.
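Roughly this kind of conversion at mesh load (a simplified sketch with hypothetical names, not my actual code):

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

struct Vertex { float px, py, pz; float nx, ny, nz; };

// Mirror a mesh across the XY plane when moving between RH and LH.
void FlipMeshHandedness(std::vector<Vertex>& vertices, std::vector<uint32_t>& indices)
{
    for (Vertex& v : vertices) {
        v.pz = -v.pz;   // swap the Z of the position...
        v.nz = -v.nz;   // ...and of the normal
    }
    // Mirroring reverses the winding order, so swap two indices per triangle
    // to keep the faces front-facing.
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
        std::swap(indices[i + 1], indices[i + 2]);
}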

 

But recently I noticed that, logically, LH does not make that much sense. Everything from applying a 2D grid on top of the 3D world, to mapping mathematical matrices to world positions, to drawing mini-maps is a lot more intuitive using RH. Especially the mapping of matrices. In programming you often have an (x, y) matrix that is represented as y rows of x columns. This maps intuitively to right-handed: you put element (0, 0) at your coordinate system's origin and map Y to Z (assuming your world is not centered; even if it is, just subtract an offset).
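As a rough illustration of what I mean (hypothetical names, assuming a Y-up world and a per-axis origin offset for non-centered worlds):

struct WorldPos { float x, y, z; };

// Map grid element (col, row) into a Y-up, right-handed world:
// grid X goes to world X, grid Y (the row) goes to world Z.
WorldPos CellToWorld(int col, int row, float cellSize, float originX, float originZ)
{
    WorldPos p;
    p.x = originX + col * cellSize;
    p.y = 0.0f;                      // height comes from the terrain later
    p.z = originZ + row * cellSize;  // depending on convention this term may need a sign flip
    return p;
}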

 

LH, on the other hand, is more difficult to map, especially since you often need to transform things from one space into another in your head.

 

Are there any good reasons to use LH, other than that DirectX has traditionally used it and that it may make integrating things from the DirectX world easier?

 

I'm really torn between the two. But if I switch back to RH it must be done now.


Switch to RH now.  Ultimately it's just a matrix multiply, so the final vertex transform is identical in both: SomeLeftHandedMatrix * Position or SomeRightHandedMatrix * Position.  It's far more important that you're consistent in your own code.
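For example, with DirectXMath the handedness only decides which helper builds the matrix; the transform itself is the same code path (a rough sketch, not code from your project):

#include <DirectXMath.h>
using namespace DirectX;

XMVECTOR ExampleTransform(bool leftHanded)
{
    const XMVECTOR eye    = XMVectorSet(0.0f, 5.0f, -10.0f, 1.0f);
    const XMVECTOR target = XMVectorZero();
    const XMVECTOR up     = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
    const XMMATRIX world  = XMMatrixTranslation(1.0f, 0.0f, 2.0f);

    // Only these two calls change between the conventions.
    const XMMATRIX view = leftHanded ? XMMatrixLookAtLH(eye, target, up)
                                     : XMMatrixLookAtRH(eye, target, up);
    const XMMATRIX proj = leftHanded
        ? XMMatrixPerspectiveFovLH(XM_PIDIV4, 16.0f / 9.0f, 0.1f, 1000.0f)
        : XMMatrixPerspectiveFovRH(XM_PIDIV4, 16.0f / 9.0f, 0.1f, 1000.0f);

    // The final vertex transform is identical either way.
    const XMVECTOR position = XMVectorSet(0.0f, 1.0f, 0.0f, 1.0f);
    return XMVector4Transform(position, world * view * proj);
}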

 

A write-up of the historical reasons for D3D choosing LH is here: http://www.alexstjohn.com/WP/2013/07/22/the-evolution-of-direct3d/ - summary: it was an arbitrary decision based on personal preference rather than on any technical grounds.


There's nothing in modern (programmable) DirectX/OpenGL that incentivizes you to use one over the other, apart from the libraries you're using (D3DX, XNAMath and DirectXMath all provide LH and RH functions, while GLM unfortunately only provides RH ones).

 

Also, slightly unrelated but always interesting: http://programmers.stackexchange.com/a/88776



the final vertex transform is identical in both: SomeLeftHandedMatrix * Position or SomeRightHandedMatrix * Position.

Actually, that's not correct. As commonly implemented, it would be Position * SomeLeftHandedMatrix and SomeRightHandedMatrix * Position.


There's nothing in modern (programmable) DirectX/OpenGL that incentivizes you to use one over the other, apart from the libraries you're using (D3DX, XNAMath and DirectXMath all provide LH and RH functions, while GLM unfortunately only provides RH ones).

 

Also, slightly unrelated but always interesting: http://programmers.stackexchange.com/a/88776

 

Yes, as I said, the changes were simple. But now with the physics engine, adapting to LH is incredibly hard.

 
Mapping data structures to 3D space will eventually boil down to familiarity.
 
RH seems much more intuitive and familiar. The one thing I might have problems with is having to do some conversion when adding depth-based things or whatever from the DirectX world.
 

 


the final vertex transform is identical in both: SomeLeftHandedMatrix * Position or SomeRightHandedMatrix * Position.

Actually, that's not correct. As commonly implemented, it would be Position * SomeLeftHandedMatrix and SomeRightHandedMatrix * Position.

 

 

I've never done that.

 

I always have world = (bone) * (matrix_from_model_to_physics) * w

 

and then the v * p matrix.

 

I never needed to swap any orders when going from LH to RH.

Edited by DwarvesH

 


I think the key part of what Buckeye said is "as commonly implemented". But no, you should not need to swap matrix multiplication order for handedness, only for majorness, as Mona2000 mentioned. The only real place that handedness matters is in the projection transform, or how our mapping of the 3D vector space is converted to 2D.

Edited by Burnt_Fyr

I think the key part of what Buckeye said is " as commonly implemented". but no, you should not need to swap matrix multiplication order for handedness, only for majorness, as Mona2000 mentioned. The only real place that handedness matters is in the projection transform, or how our mapping of the 3D vector space is converted to 2D.

 

Not even majorness matters. Whether the vector goes on the left or the right-hand side of the matrix depends on whether you use row or column vectors, and nothing else. A column vector cannot go anywhere but on the right-hand side, and a row vector cannot go anywhere but on the left-hand side of a multiplication with a matrix; majorness and handedness are irrelevant.
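A bare-bones sketch of that point, with nothing library-specific assumed:

// Column-vector convention: result = M * v  (v is 4x1).
void MulColumnVector(const float M[4][4], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r) {
        out[r] = 0.0f;
        for (int c = 0; c < 4; ++c)
            out[r] += M[r][c] * v[c];   // rows of M dot the column vector
    }
}

// Row-vector convention: result = v * M  (v is 1x4).
void MulRowVector(const float v[4], const float M[4][4], float out[4])
{
    for (int c = 0; c < 4; ++c) {
        out[c] = 0.0f;
        for (int r = 0; r < 4; ++r)
            out[c] += v[r] * M[r][c];   // the row vector dots the columns of M
    }
}

// Passing MulRowVector the transpose of the matrix used with MulColumnVector
// yields the same four numbers; handedness and storage order never enter into it.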

Edited by Brother Bob

Yeah, but HLSL, GLSL and most (all?) math libs used by games don't have the concept of row or column vectors.


The only real place that handedness matters is in the projection transform, or how our mapping of the 3D vector space is converted to 2D.

 

Sure, that is one key place where it matters. But it is so trivial to fix that it hardly matters at all: you either have it right or wrong, and fixing it takes literally seconds.

 

Here are some things that change from LH to RH that do matter:

1. Mesh loading.

2. Mesh generation. I generate both some meshes and the relative mesh placement procedurally. This code does not multiply everything by a forward vector, so all Z placement is wrong after a change.

3. Mesh processing. I have custom code meant to fix the horrible, broken tangent export in most 3D modeling programs that results in seams. It may or may not need fixing.

4. The entire process of mapping my pretty big maps (64 square kilometers and growing) to the 3D world. Questions like: if I am at this character coordinate, what is the logical coordinate, what cells does it map into, which neighbor is ahead of me and which one is behind, how best to circle cone map the area, etc. The simplest case is answering where cell (0, 0) goes, and where the next one goes (see the sketch after this list). This all changes very slightly and subtly. At the very least you want to update the mapping you present to the human being, and that must be in a familiar map coordinate system, or you risk questions like "why is 0 at the bottom?".

5. Character controller. That code is 300 KiB and changing from LH to RH changes my move direction and mouse camera movement. I know where to look so I can fix it now relatively fast, but the first time I went from RH to LH that was very annoying to fix.

6. Terrain physics. This one I just can't get right in LH. Third party physics engine.

7. And many more.
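To make point 4 a bit more concrete, the kind of mapping I mean (hypothetical names, heavily simplified):

#include <cmath>

struct Cell { int col, row; };

// World -> cell lookup for a Y-up, right-handed world.
Cell WorldToCell(float worldX, float worldZ, float cellSize, float originX, float originZ)
{
    Cell c;
    c.col = static_cast<int>(std::floor((worldX - originX) / cellSize));
    c.row = static_cast<int>(std::floor((worldZ - originZ) / cellSize));
    return c;
}
// Under LH the Z axis points the other way, so the row term (and anything derived
// from "ahead"/"behind") needs its sign or origin flipped.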

 

So I would say that going from LH to RH is 2-4 weeks of work; that's why I weigh the decision carefully.


 


I think the key part of what Buckeye said is " as commonly implemented". but no, you should not need to swap matrix multiplication order for handedness, only for majorness, as Mona2000 mentioned. The only real place that handedness matters is in the projection transform, or how our mapping of the 3D vector space is converted to 2D.

 

Not even majorness matters. What determines whether the vector goes on the left or the right hand side of the matrix depends on whether you use row or column vectors and nothing else. A column vector cannot go anywhere but on the right hand side, and a row vector cannot go anywhere but on the left hand side of a multiplication with a matrix, and majorness or handedness are irrelevant.

 

 

You are correct, technically. As the OP was apparently going from column-vector to column-vector I saw no good in going into detail about handedness, column/row vector,  column/row major, etc. Just wanted to indicate that a general statement that the order of multiplication using "SomeLeftHandedMatrix" is immaterial needs some thought, particularly if someone wanted to generalize it to the order of matrix multiplications - e.g., scale-rotate-translate vs translate-rotate-scale. Sort of off-topic.


So I would say that going from LH to RH is 2-4 weeks of work; that's why I weigh the decision carefully.

 

It's 2-4 weeks now versus a potential lifetime of pain later.  I'd do it now.

 

if someone wanted to generalize it to the order of matrix multiplications - e.g., scale-rotate-translate vs translate-rotate-scale. Sort of off-topic.

 

That's not handedness either.
 


@mhagain: You don't think there's any chance someone will mistake "handedness" for the use of row/column-vector matrices? Hmm.


Doesn't that depend on row-major vs column-major and not LH vs RH?

No, neither. It only depends on whether you choose to use row-vectors or column-vectors.

Yeah, but HLSL, GLSL and most (all?) math libs used by games don't have the concept of row or column vectors.

Many math libraries just assume/force one convention or the other - e.g. an engine will choose row-vectors and build a whole math library around that convention.

 

HLSL/GLSL *do* support/implement both concepts though - they specify that a vector appearing on the left of a matrix is a row-vector and one appearing on the right is a column-vector (i.e. A vec4 is both a mat4x1 and a mat1x4, depending on usage). When a vector isn't being multiplied with a matrix, it's either/none/both.
HLSL/GLSL also support/implement both majornesses, but as above, that's just a RAM storage concern and has zero effect on the maths.

 

However, if you choose to use the column-vector convention, then it's usually more efficient to store your matrix data in column-major order (and vice-versa for row-vectors or row-major matrix storage)... So there is often a correlation between a chosen matrix storage scheme and the order of operations in your matrix concatenations, but it's not the cause.

It's possible to use any mixture of row/column-vectors, row/column-major matrix storage and RH/LH coordinate systems, and the only thing that determines the order that your matrices should be multiplied in, is the way that you've chosen to interpret your vectors. This also determines the way that you construct your matrices too -- a matrix designed to transform a column-vector is the transpose of a matrix designed to transform a row-vector.

 

Also, if one part of your code works with column-major and another part works with row-major, and you share data without performing the appropriate conversion, then the effect of that is a mathematical transpose, which does affect your math and reverse your order of operations... It's not that column/row majorness has changed your order of operations here though -- it's the mathematical transpose that you've performed that has done so.

Edited by Hodgman

Not even majorness matters. What determines whether the vector goes on the left or the right hand side of the matrix depends on whether you use row or column vectors and nothing else.

In computer science, matrices in which the vectors are in the columns are called “column-major” matrices and matrices with the vectors in rows are called “row-major” matrices. Note that this does not refer to memory layouts alone, as the matrices used by OpenGL and Direct3D both have the exact same memory layout, despite being column-major and row-major respectively. 

Although you are talking about the side in which a vector must appear, the original context was the order in which matrices are multiplied with respect to “majorness”, which is an accurate way (within computer science) to make the distinction.
 
 
L. Spiro


In computer science, matrices in which the vectors are in the columns are called “column-major” matrices and matrices with the vectors in rows are called “row-major” matrices. Note that this does not refer to memory layouts alone, as the matrices used by OpenGL and Direct3D both have the exact same memory layout, despite being column-major and row-major respectively. 
D3D/GL support both row-major and column-major storage, and both "row major"/"column major" mathematical element layouts (using the quoted definition), so it doesn't make sense to say that one is row-major and the other is column-major... they're both, both, using both definitions of majorness.

 

Under this definition, what would you call a matrix that has been constructed to transform row-vectors (i.e. has the basis vectors stored in the mathematical rows), but is stored in memory using column-major array ordering?


D3D/GL support both row-major and column-major storage

What do you mean by “storage”? The memory layout? Majorness isn’t defined by memory layout alone.
 

so it doesn't make sense to say that one is row-major and the other is column-major

I’m quoting the specification.
http://www.opengl.org/archives/resources/faq/technical/transformations.htm

The OpenGL Specification and the OpenGL Reference Manual both use column-major notation.

Of course both GLSL and HLSL can use either, but I am referring to the result of using deprecated OpenGL matrix functions, recent GLKit functions, and the D3DXMatrix* set of functions, in addition to the results produced by those functions during multiplication.
 

Under this definition, what would you call a matrix that has been constructed to transform row-vectors (i.e. has the basis vectors stored in the mathematical rows), but is stored in memory using column-major array ordering?

Again, memory layout is not the defining factor. Transposing a matrix switches its “majorness” no matter what, but the actual storage can be as arbitrary as physical RAM index sequence 12 4 8 10 0 3, etc., as long as the routines that work with them are written to pluck out the proper values for the operation they are performing.


Native matrix functions provided by OpenGL 1.0 and Direct3D 9 exemplify this well.
Both have the same layout in RAM but the routines that work with them have been written to produce the result you would expect given the order of multiplication and the “majorness”. That is, assume A and B are exactly the same matrices in RAM (same physical layout and values). In OpenGL’s/GLKit’s math routines, B × A = C whereas in Direct3D’s math routines A × B = C.



Matrix majorness in computer science is specifically used to describe the order of multiplication, not the memory layout.
That is, majorness is the combination of both the memory layout and the routines that work on that memory. OpenGL math routines are designed to produce a B × A = C as a column-major result, but it would still be column-major if you transposed the matrices in physical RAM (changing the storage/memory layout) and also rewrote the matrix multiply routine accordingly. It would still produce the result, B × A = C, and that is (in computer science) the only practical use for the terms “column major” and “row major”.


The Wikipedia article on row-major (etc.) states that it actually is about memory layout, but if that is strictly true then it contradicts itself because it also says OpenGL/OpenGL ES are column-major, even though their matrices (as produced by their own matrix functions) are stored as described in the article’s row-major section in RAM.
 
I submit that in computer science “row major” and “column major” actually refer to the expected result given an order of multiplication (whether A × B = C (row-major) or B × A = C (column-major)).  The memory layout and the routines to work with them work together to produce the expected result, and the frank fact is that when you want to tell another programmer the order in which to multiply his or her matrices you tell him or her the “majorness”.
 
 
L. Spiro Edited by L. Spiro

I’m quoting the specification.
http://www.opengl.org/archives/resources/faq/technical/transformations.htm
The Wikipedia article on row-major (etc.) states that it actually is about memory layout, but if that is strictly true then it contradicts itself because it also says OpenGL/OpenGL ES are column-major, even though their matrices (as produced by their own matrix functions) are stored as described in the article’s row-major section in RAM.

There's no contradiction - the GL spec describes a column-major array indexing scheme -- that element #0 is row0/col0, and element #4 is row0/col1 (x=1,y=0), not row1/col0 (x=0,y=1) as in row-major ordering.
 
Your FAQ link says "OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix", but it fails to mention that these base vectors are columns. The fact that it fails to mention this seems to imply they think there is a consensus that column-vectors are everyone's default choice or something...
i.e. If we label the 4x4 matrix elements as:
0123
4567
89AB
CDEF
The GL spec says that the LoadMatrix function expects those items to be ordered in RAM as 0,4,8,C,1,5,9,D,2,6,A,E,3,7,B,F, and that the translation components will be in 3, 7 and B.
Your link doesn't spell this out, but it does refer to section 2.11.2 of the spec, which does explicitly state all this.
 
So the GL specification for its fixed-function pipeline is based around column-major array notation (store the items in RAM by writing out each column), and it also specifies that it expects you to be using column-vectors (the basis vectors are stored in the mathematical columns of the matrix).
 
It also defines LoadTransposeMatrix (instead of the usual LoadMatrix), which is defined to accept a matrix stored in row-major array notation but still using column vectors (i.e. the translation components are still in 3/7/B, but the storage ordering in RAM is now 0,1,2,3,4,5...). What is this kind of matrix called? It's "column major" according to your definition, as it uses the multiplication ordering required of column-vectors, but in RAM the elements are stored using the computer-science row-major array addressing convention! It's a column-vector matrix stored as row-major.
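In array-indexing terms, switching a 4x4 matrix between the two storage schemes is just a transpose of the 16 floats (trivial sketch):

// Copy a 4x4 matrix from row-major storage (element r*4+c) to column-major
// storage (element c*4+r), or vice versa; the mathematical object is unchanged.
void SwapArrayMajorness(const float in[16], float out[16])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];
}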
 
This is just the spec for fixed-function stuff though. Modern GL/D3D don't have math libraries, requiring you to make your own choice about conventions.
On the GLSL/HLSL side where matrices are used, you're free to choose column-vectors or row-vectors, and you're free to choose the RAM storage scheme, e.g. with column_major or row_major keywords (these keywords only affect the RAM storage scheme, they don't affect the maths at all).
You can use the two common choices of column-vector matrices stored in column-major, or row-vector matrices stored in row-major, but can also use the less common column-vector matrices stored in row-major and row-vector matrices stored in column-major!
 

Matrix majorness in computer science is specifically used to describe the order of multiplication, not the memory layout.

In my experience, it only describes the memory layout -- arrays are much more of a common computer science topic than 4x4 matrices are. Every programmer or language designer needs to deal with column-vs-row array storage conventions, but not everyone has to deal with transformation matrices. Transformation matrices just inherit this problem because they happen to be a 2-dimensional matrix.
 
Moreover, the issue of whether you store your basis vectors in the rows or columns of your matrix is a mathematical problem -- it exists when doing math on paper, completely independent of computer-science issues. If one mathematician on a project is storing basis vectors in rows and treating his vectors as row-vectors, while the rest are treating their vectors as column-vectors and storing basis vectors in columns, you're gonna have a bad time.
 
When we start using matrices in computer-land, we inherit both of these problems. Do we store our basis vectors in rows or columns (do we treat vectors as row-vectors or column-vectors), and how do we address our 2D arrays?

I submit that in computer science “row major” and “column major” actually refer to the expected result given an order of multiplication

You just quoted a Wikipedia article that disagrees, so, citation needed?

 

In maths though, where all the computer complications don't apply, I've seen "row-vector matrices" and "column-vector matrices" called row-major/column-major, post-fix/pre-fix, post-concatenate/pre-concatenate... I'm not sure what the standard mathematical jargon for those two styles is.

But, these guys pre-date the transistor, so if they're calling their matrices designed for the row-vector convention "row-major", then it's a bit of math jargon, not comp-sci jargon... which is confusing, because in comp-sci row/column-major usually refer to 2D array indexing schemes.

Edited by Hodgman

In the book chapters I gave you (for OpenGL ES) you can see that I originally described row-major and column-major as being how the data is stored in RAM (which aligns with everything you have said here).

While writing the next chapter I happened upon this.

 

The description for GLKMatrix4MakeTranslation() leaves no room for error, but I tested it by creating a matrix with it and then stopping it in the debugger to see the actual RAM layout.  It’s “row major” according to the Wikipedia link, and matches the exact memory layout of both my engine and Direct3D.

Yet when I called GLKMatrix4Multiply(), I get the correct C only if I call it as GLKMatrix4Multiply( B, A ), whereas in Direct3D and in my engine I have to call MatMul( A, B ) to get the same C.

 

 

So there is definitely a discrepancy, and it is not just about how it is laid out in RAM.

 

 

Even though GLKit is free to deviate from the OpenGL specification, it proves that RAM layout is not what decides A × B vs. B × A.

 

 

Additionally, the OpenGL API may be internally transposing it before storing it. Because OpenGL uses column-major notation, they may have made a public API that takes data that looks column-major in RAM but internally transposes it.

I have yet to confirm this, but I may over the weekend by using MHS and looking at the actual RAM inside the OpenGL DLL.

 

However in the case of GLKit on iOS, I viewed the RAM in a debugger already and verified that it writes to physical RAM in row-major (Direct3D style) order.

It’s verified that the RAM matches Direct3D’s to the byte yet uses post-multiplication vs. Direct3D’s pre-multiplication.

 

 

 

It is a fairly confusing topic, but I have confirmed for-sure through GLKit on iOS that the memory layout is not related to calling it “row major” or “column-major”.  The memory layout is the same, but GLKit is designed to access the matrix in a transposed way.

 

Frankly I am no longer sure what to put in my book because Wikipedia and my past self claim it is up to how it is laid out in RAM whereas GLKit contradicts that.

It’s an iOS book so…

 

 

L. Spiro


The description for GLKMatrix4MakeTranslation() leaves no room for error, but I tested it by creating a matrix with it and then stopping it in the debugger to see the actual RAM layout.  It’s “row major” according to the Wikipedia link, and matches the exact memory layout of both my engine and Direct3D.

No matter which conventions we choose, on paper, rows are always rows and columns always columns. Below (A) on the left is a column-vector matrix ("column major" in maths), while (B) on the right is a row-vector matrix ("row major" in maths).

(A)  (B)
100x 1000
010y 0100
001z 0010
0001 xyz1

Those are two different well defined mathematical objects. One is a matrix designed for pre-concatenation, the other designed for post-concatenation (or one for transforming column-vectors, the other for transforming row-vectors).

However, if we write (A) into RAM using column-major array indexing, we get the data stream of 100001000010xyz1.
And if we write (B) into RAM using row-major array indexing, we get the same data stream of 100001000010xyz1!

 

Mathematical matrix multiplication always works the same way, no matter which conventions you use -- it's defined on rows and columns, which aren't changed by convention.

 

The issue is that the GLK functions are interpreting that RAM data stream using column-major array indexing, so they interpret that data as the mathematical object underneath (A).

D3DX on the other hand interprets that data stream using row-major array indexing, so they end up using the mathematical object drawn underneath (B).

Both functions received the same raw bytes and both performed the same well-defined mathematical operation of a matrix multiply, but they've decoded those bytes differently, ending up with different mathematical objects, meaning the results they produce are predictably different!
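A quick sketch of that round trip, writing the two example matrices above into flat arrays:

#include <cassert>
#include <cstring>

// (A): column-vector translation matrix, written out in column-major order.
// (B): row-vector translation matrix (the transpose of A), written out in row-major order.
void SameBytesDemo(float x, float y, float z)
{
    const float A[4][4] = { {1,0,0,x}, {0,1,0,y}, {0,0,1,z}, {0,0,0,1} };
    const float B[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {x,y,z,1} };

    float streamA[16], streamB[16];
    for (int c = 0; c < 4; ++c)          // walk A column by column
        for (int r = 0; r < 4; ++r)
            streamA[c * 4 + r] = A[r][c];
    for (int r = 0; r < 4; ++r)          // walk B row by row
        for (int c = 0; c < 4; ++c)
            streamB[r * 4 + c] = B[r][c];

    // Identical bytes in RAM, two different mathematical objects.
    assert(std::memcmp(streamA, streamB, sizeof(streamA)) == 0);
}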

 

 


It is a fairly confusing topic, but I have confirmed for-sure through GLKit on iOS that the memory layout is not related to calling it “row major” or “column-major”.  The memory layout is the same, but GLKit is designed to access the matrix in a transposed way.

GLK is interpreting everything using column-major array indexing, but it's also constructing its matrices around the column-vector convention.

D3DX is interpreting everything using row-major array indexing, but it's also constructing its matrices around the row-vector convention.

The memory layout of each is the transpose of the other, but the data used by each is also the transpose of the other -- both of these cancel out so that when you look at the raw bytes, they appear the same! The maths that they each end up doing on those bytes is still the opposite of each other though, because they're using opposite mathematical conventions.

 

P.S. I've been meaning to give you feedback on your book stuff for ages.

Edited by Hodgman

I would opt for RH and RM. I'm doing this in my software rasterizer, so I can choose it myself. Sometimes these kinds of decisions are hard to make, but about these two I am quite sure.

 

PS: Could somebody tell me how OpenGL and DirectX store the kind of matrix I would probably be using in my soft rasterizer:

 

xdir_x, xdir_y, xdir_z

ydir_x, ydir_y, ydir_z

zdir_x, zdir_y, zdir_z

 

?

 

xdir is the local X vector in model space, same with ydir and zdir; the matrix describes the orientation of the model in global space.

 

Should I really add the position to it:

 

xdir_x, xdir_y, xdir_z

ydir_x, ydir_y, ydir_z

zdir_x, zdir_y, zdir_z

pos_x, pos_y, pos_z
 
or is it better to handle it separately?

If I use such a matrix for the camera orientation too, in the camera case it would explicitly be:
 
camera orientation:
 
1, 0, 0
0,1,0
0,0,-1
 
camera position:
 
0,0,1000
 
(I mean the camera is at +1000 on the Z axis, facing toward 0,0,0.)
 
Do I just need to multiply the model matrix by this camera matrix?

Should I use some matrix for the projection to 2D too? What would it look like?

 


I think the key part of what Buckeye said is " as commonly implemented". but no, you should not need to swap matrix multiplication order for handedness, only for majorness, as Mona2000 mentioned. The only real place that handedness matters is in the projection transform, or how our mapping of the 3D vector space is converted to 2D

 

Not even majorness matters. What determines whether the vector goes on the left or the right hand side of the matrix depends on whether you use row or column vectors and nothing else. A column vector cannot go anywhere but on the right hand side, and a row vector cannot go anywhere but on the left hand side of a multiplication with a matrix, and majorness or handedness are irrelevant.

 

Touché, I was just so excited that I could one-up Buck's post that I used the wrong vocabulary. Mathematically you are correct, of course, but depending on how the multiplication is implemented in code, the majorness could affect the product, which I think is what L. Spiro and Hodgman discuss above.

Wow, some discussion! I did not know about the row/column-major distinction and multiplication order, but it makes sense. So there were no votes for LH... I started the port to RH and progress is fast, because I switched from RH to LH only a few months ago :). The good news is that my happiness level generally went up when working with RH. I'll never touch LH again unless forced.
