avion85

Custom coordinate system in d3d?


Hi everyone, I have a problem building a simple collision detection program in D3D9. As I understand it, D3D uses a left-handed coordinate system, and has built-in support (D3DXMatrixPerspectiveRH, D3DXMatrixOrthoRH, and some others) in case some of us prefer the right-handed system. Both systems are very unnatural to me. I can't imagine throwing a ball up in the air along the Y axis in my game. What I want is: +x right, +y depth, +z up.

So I am looking for the most elegant solution to this problem. The best option, I think, would be a preprocessor instruction that turns the world around for me at the beginning of the program. Otherwise I have to define my own (x,y,z)->(x,z,y) linear operator. This is not a problem, but I have to call it before I do ANYTHING, including creating my objects (where I set the vertices manually) and any calculations. This seems expensive to me in terms of calculation time. Is this true? Any ideas how to solve this problem?

Quote:

Both systems are very unnatural to me.

Then you are out of luck or misspeaking. All (3D) coordinate spaces are either left- or right-handed.

Quote:

I can't imagine throwing a ball up in the air along the Y axis in my game. What I want is: +x right, +y depth, +z up.

Case in point, this is a right-handed coordinate system.

Quote:

So I am looking for the most elegant solution to this problem. The best option, I think, would be a preprocessor instruction that turns the world around for me at the beginning of the program.

There's no "world" to "turn around." The only kind of state the API tracks related to this are the world, view and projection matrices, which it simply hands off to the card for fixed-function transformation of the geometry you submit, or you own transformation as done in a shader.

Quote:

Otherwise i have to define my own (x,y,z)->(x,z,y) linear operator.

You can easily construct a matrix that does this.

Quote:

This is not a problem, but I have to call it before I do ANYTHING, including creating my objects (where I set the vertices manually) and any calculations. This seems expensive to me in terms of calculation time. Is this true?

Once you express your transform as a matrix you can trivially concatenate it with any of the other matrices D3D wants you to set, whichever is most appropriate for your needs (for example: is your geometry already in this coordinate space, or do you want to bring it there at runtime versus at content-production time?). Your desired coordinate system is just a 90-degree rotation of the canonical D3D system about the X axis (CCW looking along the positive axis).

There is a solution, but it is not easy to implement if you don't have a good grasp of matrix concatenation.

I had the exact same problem as you, basically because I didn't want to be dependent on the DirectX or OpenGL coordinate system, and also because I wanted to keep using the coordinate system used by most content creation packages, which is right-handed (+x right, +y depth, +z up).

The solution is to create your own projection matrix in which "depth" is computed from the Y axis instead of the Z axis, which is what both the OpenGL and DirectX utility functions do.

I don't have the code at hand, but if I remember correctly, the resulting matrix was equivalent to taking the right-handed projection matrix produced by D3DXMatrixPerspectiveRH and then swapping matrix rows 2 & 3.

You will probably need some other tweaks, but I guarantee it works, because I made it work down to the shader level (that is, in the shader computations "Y" was depth). So if you're using shaders, or a 3D engine that uses shaders, you will probably need to modify the shaders to use Y instead of Z in some of the computations (fog depth comes to mind).

There is an easier solution, but it does not always work: simply rotate the camera up 90 degrees. This is a cheap fix if you're doing something very simple, but underneath you're still using the old coordinate system, so when you have to deal with complex transforms for shadows, lights, etc., it becomes a nightmare. I personally prefer the "total solution" described above.

I would love to see more people realize how important this is, because this coordinate system is simply the best: it's natural, it makes integrating terrain engines much easier, it lets you use content assets from 3D editors without any axis conversion (and for animation assets, that is a lifesaver!), and it makes you independent of the rendering API's coordinate system, all by using a clever matrix trick at the end of the pipeline!

Quote:

I would love to see more people realize how important this is, because this coordinate system is simply the best: it's natural, it makes integrating terrain engines much easier, it lets you use content assets from 3D editors without any axis conversion (and for animation assets, that is a lifesaver!)

I always found it fun (no sarcasm) to re-derive and illustrate the process by which one can transform transformations (which is necessary when swizzling animations, typically stored as transformations, from one coordinate space to another).

Quote:
Original post by avion85
Hi everyone, I have a problem building a simple collision detection program in D3D9.
As I understand it, D3D uses a left-handed coordinate system, and has built-in support (D3DXMatrixPerspectiveRH, D3DXMatrixOrthoRH, and some others) in case some of us prefer the right-handed system.

Both systems are very unnatural to me. I can't imagine throwing a ball up in the air along the Y axis in my game. What I want is: +x right, +y depth, +z up.

So I am looking for the most elegant solution to this problem. The best option, I think, would be a preprocessor instruction that turns the world around for me at the beginning of the program. Otherwise I have to define my own (x,y,z)->(x,z,y) linear operator. This is not a problem, but I have to call it before I do ANYTHING, including creating my objects (where I set the vertices manually) and any calculations. This seems expensive to me in terms of calculation time. Is this true?

Any ideas how to solve this problem?
Just to reinforce what's already been said, in most cases, setting up your simulation so that +z is up in world space is simply a matter of setting up your view transform appropriately. Using the projection transform functions provided by DirectX only means that +y will be up in *camera* space; in world space, 'up' can be anything you want.

Like jyk said, it doesn't matter where the Z axis points: if your engine sends data so that Z points up (e.g. objects running on the Z=0 plane and a camera looking down the Y axis), then that's what you're going to get. Same with OpenGL.

Quote:
Original post by jpetrie
Quote:

Both systems are very unnatural to me.

Then you are out of luck or misspeaking. All (3D) coordinate spaces are either left- or right-handed.


--Yes, I understand that now. This line confused me: "though left-handed and right-handed coordinates are the most common systems, there is a variety of other coordinate systems used in 3D software. For example, it is not unusual for 3D modeling applications to use a coordinate system in which the y-axis points toward or away from the viewer, and the z-axis points up."
from here:
http://msdn.microsoft.com/en-us/library/bb204853.aspx
Anyway, as I understand it now, those two systems are unique because they can't be rotated into one another in any way; every other system is a rotation of one of them.



Quote:
Original post by jpetrie
Once you express your transform as a matrix you can trivially concatenate it with any of the other matrices D3D wants you to set, whichever is most appropriate for your needs (for example: is your geometry already in this coordinate space, or do you want to bring it there at runtime versus at content-production time?). Your desired coordinate system is just a 90-degree rotation of the canonical D3D system about the X axis (CCW looking along the positive axis).


--And I will do just that, along with these:
D3DXMatrixLookAtRH, D3DXMatrixPerspectiveFovRH, and maybe others.


Quote:
Original post by jyk
Just to reinforce what's already been said, in most cases, setting up your simulation so that +z is up in world space is simply a matter of setting up your view transform appropriately. Using the projection transform functions provided by DirectX only means that +y will be up in *camera* space; in world space, 'up' can be anything you want.

--Right, so the view transform pertains to world space, and the projection transform pertains to camera space. Got it!

Quote:
Original post by vicviper
I would love to see more people realize how important this is, because this coordinate system is simply the best: it's natural, it makes integrating terrain engines much easier, it lets you use content assets from 3D editors without any axis conversion (and for animation assets, that is a lifesaver!), and it makes you independent of the rendering API's coordinate system, all by using a clever matrix trick at the end of the pipeline!


--I agree 100%. After some experience in 3ds Max I've learned to think of Z as up :) Everything else is counter-intuitive. Granted, I could live with doing everything in another coordinate system, but this way it's a bit easier and I get to learn something new.

Thanks everyone for the replies. I'll post some of the relevant code when I implement it in my project; others may find it useful.

I suppose which way makes the most sense depends somewhat on how you look at it.

For example, all my stuff is generally 2D top-down, with +y being south and +x being east in my world. If I were to move into making a 3D game, I'd most likely want to make +z up, since that means my existing x and y axes can stay the same.

On the other hand, I can imagine that someone who worked mainly on side-scrollers, with x being left/right and y being up/down, would want to make the z axis the 2D horizontal axis so that they can continue thinking of y as up/down.

Fortunately, with the transformation system it's trivial to implement basically any system.

For anyone interested,

D3DXMatrixPerspectiveFovRH
D3DXMatrixLookAtRH
solved the problem completely for me! Now the geometry generation and everything else is in the right coordinate system.
