Right handed coordinate system... why?

This topic is 4101 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

There must be an explanation for this... however, it completely eludes me. I mean, left to right is positive; that just feels right. That's how most people read. Down to up is positive. That feels right too. Up... yay! It's going up! I'm adding energy (or whatever) to something and it goes up into the sky. But backwards being positive? That totally doesn't make sense to me. Someone explain, please.

There is more than one right-handed coordinate system; a right-handed coordinate system is one where the cross product of the positive X and Y axis vectors forms the positive Z axis vector when you "visualize" the cross product with your right hand. A left-handed system is one where the same applies, but with your left hand. Armed with that knowledge, you should be able to conceive of many different right-handed coordinate frames.

You can construct a right-handed coordinate frame where "backwards" is negative, but that will break the semantic sense you've assigned to at least one of the other axes.

You need a left-handed system to get what you want.
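
The hand rule above can be checked in code. The following is a minimal sketch (plain Python, no graphics library; the axis values are illustrative) that classifies a frame by the sign of the scalar triple product dot(cross(right, up), forward): positive means the three directions form a right-handed set, negative a left-handed one.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def handedness(right, up, forward):
    """Sign of the scalar triple product decides handedness."""
    return "right" if dot(cross(right, up), forward) > 0 else "left"

# +X right, +Y up, forward pointing out of the screen: right-handed.
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # right
# Same right/up, but forward into the screen: left-handed.
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, -1)))  # left
```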

Aye, I was asking this because I read that XNA will use right-handed coordinates (which work like the example I wrote above) by default. Anyway, the answer is that Z is the cross product of X and Y, then, right?

Right-handed makes more sense when you think about the video hardware. You can make +X point right, +Z point away from the screen, and +Y point down the screen. Moving down the screen actually increases the VRAM address, so +Y == +VRAM address.

Skizz

Guest Anonymous Poster
Quote:
Original post by Redburn
I mean, left to right is positive; that just feels right. That's how most people read. Down to up is positive.


No, up to down is positive; that's how most people read. You start at the top of the page and go down.

It's not so much a matter of left -> right or down -> up, but of which way 'up' is at all. All Cartesian coordinate systems (worth mentioning) have increasing magnitude as you travel away from the origin. Also, everybody knows what x and y do in the plane, on a page. The left/right discrepancy arises when we try to work out what 'up' is in three dimensions:

If we hold our page facing us, like a TV screen, the old y coordinate points upwards, towards the sky, but if we put it flat on the table, y (which used to be 'up') now points away from us. The new z coordinate goes where it must, in either case. It is this act of swapping y and z that toggles handedness.
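
The swap described above can be checked numerically: exchanging two axes of a basis negates the determinant of the basis matrix, which is exactly what a handedness flip means. A minimal sketch (plain Python; the basis names are illustrative):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of row tuples."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

screen_basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # x right, y up, z out
table_basis  = [(1, 0, 0), (0, 0, 1), (0, 1, 0)]   # y and z exchanged

print(det3(screen_basis))  # 1  -> same handedness as the reference
print(det3(table_basis))   # -1 -> handedness flipped
```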

Regards
Admiral

Quote:
Original post by Redburn

I mean, left to right is positive; that just feels right. That's how most people read.
Someone explain, please.


Off topic... but actually, "most people" read right to left, considering the population that reads Asian-based languages :).

Also, I thought you could specify matrices in DX as either right-handed or left-handed (e.g. D3DXMatrixLookAtRH or D3DXMatrixLookAtLH), giving you the option to define your own coordinate system.

I find it amusing that Max is Z-up (our designers use it) and Maya is Y-up (our animators use it). That's a lot of fun.

Take a piece of paper, draw x and y axes on it, and put it on the table. The "natural" direction for z is up from the table, and you've got a right-handed system.

Take the *same* piece of paper and stick it on your monitor. Now the "natural" direction for z is into the monitor, and you've got a left-handed system.

Right-handed coordinates make a lot of sense when Z is up. (Think of a piece of graph paper with 2D Cartesian coords on it which is now being augmented with a third dimension.) This is how most people do math and physics.

When you spin it around so Z points into the screen and Y is up, it's just weird. When you're OpenGL and you screw up the matrices so that they don't even match the coordinate system, it becomes confusing and non-intuitive.

So the conventional way right-handed coordinates are placed on the screen is kind of odd, but OpenGL goes and makes things much worse.

Quote:
Original post by Promit
When you're OpenGL and you screw up the matrices so that they don't even match the coordinate system, it becomes confusing and non-intuitive.
What exactly do you mean by OpenGL matrices not matching the coordinate system? Just curious...

I have always been bothered that Y is traditionally used as up. The most intuitive coordinate system (to me) for a 3D space assumes that the "default" view is top-down, where (0, 0) is at the bottom left. This means in 2D you have a standard quadrant-1 view, with positive Z as up.

A top-down view just seems right for visualizing the world. When Y is up, it seems as if you are thinking in side view... but in 3D, which "side" is it?


Again we have this fundamental discussion of different coordinate systems. I find it rather limiting not to acknowledge the existence of different systems.

I have to admit that I am a z-up person, and it suits me well since all the CAD systems are like that too.

Occasionally I consider the case where it doesn't make sense to fix things into one coordinate system. Any location on a planet can have a different local coordinate system; otherwise it becomes just too weird.

It isn't right, it isn't wrong ... it is just different.

Cheers !

Quote:
Original post by jyk
Quote:
Original post by Promit
When you're OpenGL and you screw up the matrices so that they don't even match the coordinate system, it becomes confusing and non-intuitive.
What exactly do you mean by OpenGL matrices not matching the coordinate system? Just curious...
The most intuitive way to visualize a transformation matrix is to consider each row/column as one axis of the object's local coordinate system. The first column (in the case of OGL) is the local right vector, the second is the local up vector, the third is the local look vector, and the fourth is of course the world space position. You can compose a desired transformation very easily using this logic, without having to muck about with all sorts of formulas.

In the case of the identity matrix, that means that right is (1, 0, 0), up is (0, 1, 0), and look is (0, 0, 1). Makes sense.

The problem is introduced because in OpenGL, the default transform has things looking down negative Z. That is, the identity matrix specifies that the object is configured so that right is (1, 0, 0), up is (0, 1, 0), and look is (0, 0, -1). The entire coordinate system has been deliberately reflected around the XY plane. You end up reflecting everything back every time you want to construct a transformation matrix from the vectors defining the object's coordinate space. Maybe I'm overreacting, but that really gets on my nerves. (And I think you can actually repair this damage with a specially configured perspective matrix, but I forget.)
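
The column convention described above can be sketched numerically. This is a minimal illustration in plain Python (tuples only, no graphics API; the function name and flag are hypothetical): build a 4x4 transform whose columns are the right, up, look, and position vectors, and show that adopting the look-down-negative-Z default amounts to negating the look column.

```python
def compose(right, up, look, pos, looks_down_neg_z=False):
    """Build a 4x4 transform from column vectors; returned as row tuples."""
    if looks_down_neg_z:
        look = tuple(-c for c in look)  # reflect about the XY plane
    cols = [right + (0,), up + (0,), look + (0,), pos + (1,)]
    # Transpose the columns into rows for easy printing/inspection.
    return [tuple(col[r] for col in cols) for r in range(4)]

# Object at the origin, axis-aligned, intending to "look" along +Z:
m = compose((1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0),
            looks_down_neg_z=True)
for row in m:
    print(row)
# The third column reads (0, 0, -1, 0): the look vector got reflected.
```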

Quote:
Original post by Promit
Maybe I'm overreacting, but that really gets on my nerves. (And I think you can actually repair this damage with a specially configured perspective matrix, but I forget.)
Ah, I see. I just wasn't sure what you meant. Personally when using OpenGL I treat +z as forward for the purposes of my application, and just apply a 180-degree flip as the very last step before computing the view matrix and sending it off to OpenGL. This is wrapped up in a camera class, so it's not something I think about often.

Also, given that you can upload your own projection and view matrices it may be that you can just as easily 'configure' OpenGL to be left handed, as you mentioned. I haven't actually tried this, but it would be a useful thing to confirm (or otherwise).
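
Whether such a flip suffices can be checked with plain matrix arithmetic (no OpenGL calls; this is just the algebra being described): folding a scale of -1 on Z into the matrix chain reverses which world direction ends up in front of the camera.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of row tuples) by a 4-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# Scale Z by -1: the kind of reflection you could bake into a
# projection or view matrix to switch handedness.
flip_z = [(1, 0, 0, 0),
          (0, 1, 0, 0),
          (0, 0, -1, 0),
          (0, 0, 0, 1)]

# A point 5 units down -Z (in front of a default OpenGL camera):
p = (0, 0, -5, 1)
print(mat_vec(flip_z, p))  # (0, 0, 5, 1): now 5 units down +Z
```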

Quote:
Original post by Redburn
That totally doesn't make sense to me.


It doesn't make sense because it's not a decision of logic; it's just a convention.

Just as nature couldn't decide between left-handed and right-handed people, or why script runs left to right, right to left, top to bottom, bottom to top, or diagonally. Or why there is matter and antimatter (though antimatter tends to be short-lived on Earth).

What you need to realize is that in the end it doesn't make a difference: just be aware of the differences, plan your transformation matrices accordingly, and don't assume that everybody "naturally" thinks the same way you do. As long as it's documented, it's not a big deal.

LeGreg

I think that Skizz has hit the nail on the head.

There are two predominant methods of defining things.

One is in regard to "what one expects". All the posts but Skizz's define the coordinate system using the "what I expect" method.

The other is in regard to actual implementation details. VRAM addressing, as Skizz mentions, is only one example where it can be advantageous to prefer one over another, presuming that the implementation has a preferred axis. Another could be vertex transformations: if Z is the first coordinate ready for pre- and/or post-transform work, then you probably want your 1D, 2D, and 3D methodology to all favor the use of this fact.

Quote:
Original post by Rockoon1
The other is in regard to actual implementation details. VRAM addressing, as Skizz mentions, is only one example where it can be advantageous to prefer one over another, presuming that the implementation has a preferred axis. Another could be vertex transformations: if Z is the first coordinate ready for pre- and/or post-transform work, then you probably want your 1D, 2D, and 3D methodology to all favor the use of this fact.
Problem is, none of that is even vaguely relevant to how graphics hardware actually works.

Quote:
Original post by Promit
Problem is, none of that is even vaguely relevant to how graphics hardware actually works.


Problem is, you are assuming.

The OP did not state anything about hardware, nor anything about graphics for that matter. He is quite non-specific in the original post, and in his follow-up post he mentioned XNA, which is a complete API for game production, so the major considerations aren't just graphics- or video-hardware related.

Clearly Microsoft needs to also consider the CPU end of things and we don't know what their conclusions were. We do know that they chose a right-handed coordinate system, and we also know that specific coordinate systems can be advantageous under some circumstances. Those circumstances, when present, are not arbitrary, while your pet favorite coordinate system *IS* arbitrary.

Coordinate systems aren't just for graphics.

But what the hell do I know?

(edited to add an 's')

Quote:
Original post by Rockoon1
The OP did not state anything about hardware, nor anything about graphics for that matter.
Neither did I; it was suggested as a possible explanation by another poster. I'm merely pointing out that hardware has nothing to do with coordinate systems.
Quote:
Clearly Microsoft needs to also consider the CPU end of things and we don't know what their conclusions were.
It doesn't matter what end you look at. The coordinate system and the hardware have nothing to do with each other.

Quote:
Original post by Promit
Quote:
Original post by Rockoon1
The OP did not state anything about hardware, nor anything about graphics for that matter.
Neither did I; it was suggested as a possible explanation by another poster. I'm merely pointing out that hardware has nothing to do with coordinate systems.
Quote:
Clearly Microsoft needs to also consider the CPU end of things and we don't know what their conclusions were.
It doesn't matter what end you look at. The coordinate system and the hardware have nothing to do with each other.


You did in fact mention both graphics and hardware in your reply to me.


Anyway, while the hardware that game programmers target may not have any preference among coordinate systems, as soon as we actually pick one we are committed to complex hardware issues that affect the optimal order of operations in the code that uses it. And if that code happens to be a library of routines, then the order of operations is set in stone once the library is compiled.

As a very simple example, a library routine might be responsible for throwing game objects into a 3D array of cells...

cell[x][y][z]

...but if it's more or less a 2D game (gameplay mostly happens on the surface of a world), then it's suddenly folly to use a coordinate convention where Y is up. If Y is up and the array of cells is defined as cell[x][y][z], then you are going to be thrashing the L1 cache *bigtime*.

But you can use cell[y][x][z], right? Wrong! Remember, it's a library! It's already compiled, set in stone, and nobody wants to set a weird coordinate swizzle like that in stone.
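
The cache-locality claim can be made concrete. This sketch (plain Python, with a hypothetical tiny world size) computes row-major (C-order) flattened offsets for cell[x][y][z], showing that the last index walks consecutive memory while the middle index strides, which is the access pattern that hurts when the "middle" axis is the one your gameplay rarely varies:

```python
W = 4  # hypothetical world dimension, small for illustration

def flat_index(x, y, z, shape=(W, W, W)):
    """Row-major (C-order) flattened offset of cell[x][y][z]."""
    _, ny, nz = shape
    return (x * ny + y) * nz + z

# Walking along z (last index): consecutive memory addresses.
print([flat_index(0, 0, z) for z in range(W)])  # [0, 1, 2, 3]
# Walking along y (middle index): stride of W between accesses.
print([flat_index(0, y, 0) for y in range(W)])  # [0, 4, 8, 12]
```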

I think you're missing the point - Promit is correct: none of that has anything to do with how graphics hardware works.

In particular, you can use right handed, or left handed coordinates or whatever you like in shader land (which is what everything is now), since the hardware really doesn't need to know anymore (you provide the relevant programs).

Regarding your multidimensional array example: memory layout is completely independent of your coordinate system selection. You can store things however you like regardless of your selection. Even if you are storing volumetric data (which again, could be in any memory layout), it can easily be transformed from object->world space as desired. They're really two totally separate choices.

Quote:
Original post by AndyTX
I think you're missing the point - Promit is correct: none of that has anything to do with how graphics hardware works.



I think you both are missing the point.

While the hardware doesn't prefer a particular coordinate system, it does prefer particular algorithm implementations.

The choice of coordinate system is in fact a choice among the permutations of all coordinate-related algorithms. These algorithms range from the simplest (perhaps projection) to the most complex (perhaps piecewise subdivision of polygon soup).

The permutations of an algorithm that takes 3 inputs:

f(x,y,z)
f(x,z,y)
f(y,x,z)
f(y,z,x)
f(z,x,y)
f(z,y,x)

In cases where only one or a few of these are optimal due to hardware issues (I've shown a case where this is true), it behooves a project designer to maintain coherence by making a top-level choice that naturally favors the optimal permutation(s).

In effect, the hardware has influenced the choice of coordinate system... the same hardware that "doesn't care" about coordinate systems.

Quote:
Original post by AndyTX
Regarding your multidimensional array example: memory layout is completely independent of your coordinate system selection.
....snip...
They're really two totally separate choices.


This isn't quite right. Making them independent is a design choice. They can be dependent or independent; you can choose to make them a separate issue, or not.

If you are designing a library, making them a separate choice increases the complexity of the library, internally or externally: either the library needs to be told by the caller which swizzle will be optimal, or the caller needs to be told by the library which swizzle is optimal.

If you can make a coordinate-system choice that avoids swizzles entirely while maintaining optimality in your library, why wouldn't you?

I'm still not convinced that it matters. The choice of "up, down, left, right, etc." only becomes relevant when rasterizing to the screen (before that, it makes no difference - *all* choices are simply memory layout related). By the time you're rasterizing, there is no advantage in one coordinate basis or another - you'll have to specify it (usually in matrix form) at some point, and one 4x4 matrix is the same as the next to the GPU.

Given your "cell" example, you're simply talking about a 3D array, in which case the same rules apply for choosing your memory access pattern. Your library need not know which direction is "up" - that only matters once you're constructing a view matrix, etc. At that point - as mentioned - anything is as cheap as anything else.

[Edited by - AndyTX on September 19, 2006 3:25:24 PM]

