adriano_usp

Tutorial: The math behind the view transformation

Recommended Posts

I have noticed that a lot of people don't understand very well how a view transformation is made. Some tutorials say that the camera position is transformed to the world coordinate system's origin, which can confuse people if they don't know the math behind the view transformation. For this reason, I decided to write a simple tutorial explaining how a view matrix is created. Take a look at: www.adrianojmr.ubbi.com.br/viewtransf.doc

Since the document has vector equations, it is recommended that you have the MS Equation Editor installed in Word.

I hope it helps.

Adriano Ribeiro

[Edited by - adriano_usp on February 8, 2005 4:50:44 PM]

I have read your document, and now I understand better that the world and the camera each have their own coordinate system. But I stopped at this sentence:
"Since any vector can be expressed in relation to different coordinate systems." I don't know what you mean. Why would a vector have a relation to different coordinate systems?

Quote:
why would a vector have a relation to different coordinate systems?


Mathematically, a 3D vector can be defined (expressed) as a linear combination of three other linearly independent vectors. Those three vectors define a coordinate system. Since we can choose infinitely many groups of three independent vectors, we can generate infinitely many coordinate systems, and the same vector will have different components in each of them.

For example, the vector T could be expressed in relation to the A, B, C, ... coordinate systems like this:
T = (Tax)ax + (Tay)ay + (Taz)az
T = (Tbx)bx + (Tby)by + (Tbz)bz
T = (Tcx)cx + (Tcy)cy + (Tcz)cz
...

where:
ax, ay and az are the unit vectors that define the A coordinate system, and
Tax, Tay and Taz are the components of T when it is expressed in A. The same reasoning applies to the other coordinate systems.

In that tutorial, I expressed the same vertex in relation to the world coordinate system (W) and in relation to the camera coordinate system (C), which gives me two different expressions for the same vector. Why did I do that? Because from those two expressions I can find a mathematical relation that transforms vertex coordinates from one coordinate system to the other.
Well, that mathematical relation is exactly the view matrix.

(Unfortunately, I can't put an arrow over the characters here to represent vectors, which can make the explanation a bit harder to follow.)
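To make that relation concrete, here is a minimal C++ sketch (the function name WorldToCamera is just illustrative, it is not from the tutorial or from D3DX) that expresses a world-space point in the camera coordinate system using dot products with the camera's orthonormal axes. This is exactly the operation the view matrix performs on every vertex:

#include <d3dx9math.h>

// Express a world-space point P in the camera coordinate system defined by
// the orthonormal axes xAxis, yAxis, zAxis and the camera position pEye
// (all given in world coordinates).
D3DXVECTOR3 WorldToCamera( const D3DXVECTOR3& P,
                           const D3DXVECTOR3& pEye,
                           const D3DXVECTOR3& xAxis,
                           const D3DXVECTOR3& yAxis,
                           const D3DXVECTOR3& zAxis )
{
    // Vector from the camera position to the point, still in world coordinates.
    D3DXVECTOR3 v = P - pEye;

    // Because the camera axes are orthonormal, each camera-space component
    // is simply the dot product of v with the corresponding axis.
    return D3DXVECTOR3( D3DXVec3Dot( &v, &xAxis ),
                        D3DXVec3Dot( &v, &yAxis ),
                        D3DXVec3Dot( &v, &zAxis ) );
}

The view matrix packs those three dot products (plus the translation by -pEye) into a single 4x4 matrix, so Direct3D can apply the same change of coordinate system to every vertex with one matrix multiplication.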


That tutorial requires some basic knowledge of linear algebra and analytic geometry. Do you know the math behind the dot product, cross product, matrix transpose, matrix inverse, matrix product, etc.?

If you don't understand something or have other doubts, please ask again. I'll be happy to answer.

First of all, let me ask about this:

The camera coordinate system's vLook, vUp and vRight must represent the z-axis, y-axis and x-axis, right? These axes of the camera coordinate system can be different from the axes of the 3D world, right? Because the camera may be rotated, am I correct?

Quote:
First of all, let me ask about this:

The camera coordinate system's vLook, vUp and vRight must represent the z-axis, y-axis and x-axis, right? These axes of the camera coordinate system can be different from the axes of the 3D world, right? Because the camera may be rotated, am I correct?


Almost that [smile].

For the camera:
z-axis = Normalize(pAt - pEye)
x-axis = Normalize(pUp x z-axis) <- this is a cross product
y-axis = (z-axis x x-axis) <- this is a cross product

pEye is the camera position (the camera's eye).
pAt is the camera target.
pUp is an orientation vector for the camera.

These vectors are expressed in relation to the world coordinate system. Don't worry about these terms... just keep in mind that you can treat the camera as an object. You only need to set the camera's position, its target (where it points) and its orientation. Direct3D does the rest (it generates the axes of the camera coordinate system for you).
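If you are curious, here is a rough sketch of that internal step, following the axis formulas above and the left-handed look-at matrix layout documented in the SDK (the function name BuildLookAtLH is just illustrative, not a D3DX function):

// Rough sketch of what D3DXMatrixLookAtLH does internally.
void BuildLookAtLH( D3DXMATRIX* pOut,
                    const D3DXVECTOR3& pEye,
                    const D3DXVECTOR3& pAt,
                    const D3DXVECTOR3& pUp )
{
    D3DXVECTOR3 zAxis = pAt - pEye;               // z-axis = Normalize(pAt - pEye)
    D3DXVec3Normalize( &zAxis, &zAxis );

    D3DXVECTOR3 xAxis;                            // x-axis = Normalize(pUp x z-axis)
    D3DXVec3Cross( &xAxis, &pUp, &zAxis );
    D3DXVec3Normalize( &xAxis, &xAxis );

    D3DXVECTOR3 yAxis;                            // y-axis = z-axis x x-axis
    D3DXVec3Cross( &yAxis, &zAxis, &xAxis );

    // The 3x3 part of the view matrix holds the camera axes as columns;
    // the last row moves the camera position (eye) to the origin.
    D3DXMatrixIdentity( pOut );
    pOut->_11 = xAxis.x;  pOut->_12 = yAxis.x;  pOut->_13 = zAxis.x;
    pOut->_21 = xAxis.y;  pOut->_22 = yAxis.y;  pOut->_23 = zAxis.y;
    pOut->_31 = xAxis.z;  pOut->_32 = yAxis.z;  pOut->_33 = zAxis.z;
    pOut->_41 = -D3DXVec3Dot( &xAxis, &pEye );
    pOut->_42 = -D3DXVec3Dot( &yAxis, &pEye );
    pOut->_43 = -D3DXVec3Dot( &zAxis, &pEye );
}

You never need to write this yourself; D3DXMatrixLookAtLH/RH returns the finished view matrix. The sketch only shows that the camera axes exist inside that step and nowhere else.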

Answering the other question:
Yes, the axes of the camera coordinate system are generally different from the axes of the world coordinate system. But in practice we only work with vectors expressed in the world coordinate system. As I said, don't worry about the camera coordinate system.

Consider, for example, this code sample:

D3DXVECTOR3 pEye( 0.0f, 0.0f,-10.0f );  // camera position: 10 units behind the origin
D3DXVECTOR3 pAt( 0.0f, 0.0f, 0.0f );    // camera target: the world origin
D3DXVECTOR3 pUp( 0.0f, 1.0f, 0.0f );    // camera orientation: "up" is +Y

D3DXMATRIX matR;
D3DXMatrixRotationY( &matR, 0.01f*timeGetTime());  // rotation around Y, angle grows with time
D3DXVec3TransformCoord( &pEye, &pEye, &matR);      // rotate the camera position around the origin

D3DXMATRIX matView;
D3DXMatrixLookAtLH( &matView, &pEye, &pAt, &pUp);  // build the view matrix from pEye, pAt, pUp
pd3dDevice->SetTransform( D3DTS_VIEW, &matView );  // hand it to Direct3D

Here I am rotating the position of the camera (pEye) around the origin. Notice that I treat the camera as an object, and all the vectors in that code sample are expressed in relation to the world coordinate system. You never see the axes of the camera coordinate system; they are generated by Direct3D (it is an internal operation).

[Edited by - adriano_usp on February 9, 2005 1:27:47 AM]

I want to ask a question:
Is vUp not the same as the y-axis?

z-axis = Normalize(pAt - pEye)
x-axis = Normalize(pUp x z-axis) <- this is a cross product
y-axis = (z-axis x x-axis) <- this is a cross product

Quote:
I want to ask a question:
Is vUp not the same as the y-axis?

z-axis = Normalize(pAt - pEye)
x-axis = Normalize(pUp x z-axis) <- this is a cross product
y-axis = (z-axis x x-axis) <- this is a cross product


No, because the angle between pUp and the z-axis can be different from 90 degrees.
That procedure always generates an orthogonal coordinate system for the camera. That is, the angles between the camera axes are always 90 degrees.

For example:

For these vectors:
D3DXVECTOR3 pEye( 0.0f, 10.0f, -10.0f );
D3DXVECTOR3 pAt( 0.0f, 0.0f, 0.0f );
D3DXVECTOR3 pUp( 0.0f, 3.0f, 0.0f );

D3D would create these axes:
z-axis = Normalize(pAt - pEye) = ( 0.0f , -0.707f , 0.707f )
x-axis = Normalize(pUp x z-axis) = ( 1.0f , 0.0f , 0.0f )
y-axis = (z-axis x x-axis) = ( 0.0f , 0.707f , 0.707f )

Note that pUp is different from the y-axis.
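If you want to check this yourself, a quick test along these lines (just a sketch) reproduces those numbers with the D3DX helpers:

// Quick check of the numbers above.
D3DXVECTOR3 pEye( 0.0f, 10.0f, -10.0f );
D3DXVECTOR3 pAt ( 0.0f,  0.0f,   0.0f );
D3DXVECTOR3 pUp ( 0.0f,  3.0f,   0.0f );

D3DXVECTOR3 zAxis = pAt - pEye;
D3DXVec3Normalize( &zAxis, &zAxis );       // ( 0.0, -0.707, 0.707 )

D3DXVECTOR3 xAxis;
D3DXVec3Cross( &xAxis, &pUp, &zAxis );
D3DXVec3Normalize( &xAxis, &xAxis );       // ( 1.0, 0.0, 0.0 )

D3DXVECTOR3 yAxis;
D3DXVec3Cross( &yAxis, &zAxis, &xAxis );   // ( 0.0, 0.707, 0.707 )

// yAxis is not the same vector as pUp = (0, 3, 0).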

As you say, vUp is not the same as the y-axis, so I have the following questions:

1. When we move the camera straight forward/backward, we just move the camera along the z-axis, right?

2. When we move the camera up/down, we just move the camera along the y-axis, right?

3. When we move the camera left or right (strafe), we just move the camera along the x-axis, right?

4. When we subtract two vectors with D3DXVec3Subtract, does the order matter? That is, is v1 - v2 the same as v2 - v1? The D3DXVec3Subtract function takes two vector parameters; which one is "from" and which one is "to"? In your code the z-axis is subtract(vAt, vEye), but I'm not sure which one is "from". If vEye is "from", that looks correct; otherwise it doesn't seem right...

5. Considering the up vector passed to D3DXMatrixLookAtLH: is that up vector relative to the camera, and therefore possibly not the same value as the y-axis?

Those are my questions. Please answer them, and thank you very much.

I am really so stupid ~_~a

Please tell me the answers to the above questions. Also, I think I almost understand what you say, but I have one more small question. When we normalize a vector such as vRight (the x-axis), we multiply vRight by a constant speed, and after that we add vRight to vEye and vLookAt. This method only adds a "unit" speed to the camera's coordinates; it doesn't really add that value along the axis. For example:
if the unit right vector is (2, 2, 0), the speed is 3.0f, and vEye is (1, 2, 3), then after the calculation the new vEye is (7, 8, 3). But (7, 8, 3) was only moved by a "unit" speed, not by the constant speed. Why?

My English is not good; I hope you can understand what I am saying.

GDMichael, keep in mind that you DON'T generate the axes for the camera. Direct3D DOES that for you. You give it three vectors (pEye, pAt and pUp) by passing them to the D3DXMatrixLookAtLH/RH function, and Direct3D returns a view matrix to you. That's all.

All you need to do to move the camera is specify its position (pEye), its target (pAt) and its orientation (pUp).

Again, the camera's axes are an internal operation performed by Direct3D every time you call the D3DXMatrixLookAtLH/RH function.

Well, now answering your questions:

Quote:
1. When we move the camera straight forward/backward, we just move the camera along the z-axis, right?

Yes, but it is not good to think of it that way. To move the camera on a straight path, you need to specify a direction vector for the path. The camera's z-axis is generated afterwards from that direction vector and follows it.

Quote:
2. When we move the camera up/down, we just move the camera along the y-axis, right?

If the movement is relative to world space, the answer is: not necessarily. If the camera's eye and target are not in the same horizontal plane, the y-axis will not point straight up.
But if the movement is relative to the camera's own orientation, the answer is yes.

Quote:
3. When we move the camera left or right (strafe), we just move the camera along the x-axis, right?

The same reasoning as the previous answer (see the sketch just below).
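To make that concrete, here is a small sketch of camera-relative movement (the helper name MoveCameraRelative is my own, it is not a D3DX function). It recomputes the camera's direction vectors from pEye, pAt and pUp and then shifts both the eye and the target:

// Sketch: move the camera along its own axes. pEye/pAt/pUp are the same
// vectors you would pass to D3DXMatrixLookAtLH afterwards.
void MoveCameraRelative( D3DXVECTOR3& pEye, D3DXVECTOR3& pAt,
                         const D3DXVECTOR3& pUp,
                         float forward, float strafe, float lift )
{
    D3DXVECTOR3 zDir = pAt - pEye;                 // "forward"
    D3DXVec3Normalize( &zDir, &zDir );

    D3DXVECTOR3 xDir;                              // "right"
    D3DXVec3Cross( &xDir, &pUp, &zDir );
    D3DXVec3Normalize( &xDir, &xDir );

    D3DXVECTOR3 yDir;                              // "up" (already unit length)
    D3DXVec3Cross( &yDir, &zDir, &xDir );

    // Move eye and target together so the viewing direction does not change.
    D3DXVECTOR3 offset = forward * zDir + strafe * xDir + lift * yDir;
    pEye += offset;
    pAt  += offset;
}

Because zDir, xDir and yDir are normalized, a step of, say, 3.0f really moves the camera exactly 3 world units in that direction.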

Quote:
4. When we subtract two vectors with D3DXVec3Subtract, does the order matter? That is, is v1 - v2 the same as v2 - v1? The D3DXVec3Subtract function takes two vector parameters; which one is "from" and which one is "to"? In your code the z-axis is subtract(vAt, vEye), but I'm not sure which one is "from". If vEye is "from", that looks correct; otherwise it doesn't seem right...

Well, v1 - v2 = v2 - v1 only if v1 = v2. Of course the D3DXVec3Subtract function has an order:

D3DXVec3Subtract( &v3, &v2, &v1 ) -> v3 = v2 - v1 (a simple test will show you this)

Again, subtract(vAt, vEye) is an internal operation performed by D3D to create the z-axis. You DON'T do that yourself.

Quote:
5. Considering the up vector passed to D3DXMatrixLookAtLH: is that up vector relative to the camera, and therefore possibly not the same value as the y-axis?

The up vector is an input to the D3DXMatrixLookAtLH function. It is NOT the camera's y-axis. Please read my previous post again.



Quote:
Please tell me the answers to the above questions. Also, I think I almost understand what you say, but I have one more small question. When we normalize a vector such as vRight (the x-axis), we multiply vRight by a constant speed, and after that we add vRight to vEye and vLookAt. This method only adds a "unit" speed to the camera's coordinates; it doesn't really add that value along the axis. For example:
if the unit right vector is (2, 2, 0), the speed is 3.0f, and vEye is (1, 2, 3), then after the calculation the new vEye is (7, 8, 3). But (7, 8, 3) was only moved by a "unit" speed, not by the constant speed. Why?

My English is not good; I hope you can understand what I am saying.


Sorry... I understood nothing...


But I have some suggestions [smile]:

- Search Google for tutorials on vector algebra, the cross product, the dot product, etc. If you have a vector algebra book, even better. Spend some time studying the math concepts behind all this.
- Always consult the DirectX SDK documentation. It has the syntax of all the D3D functions and much more; there you can see exactly which inputs and outputs the D3D functions have.

Good luck!

Regarding questions 1 to 3: the x-axis, y-axis and z-axis I am talking about belong to the camera coordinate system, not the world coordinate system. Is that right? Because when we move up/down, we use a cross product to generate a VECTOR, and we add this VECTOR to vEye and vLookAt, right? So this VECTOR is the y-axis, do you agree? In the same way, when we move right/left, we just add the x-axis VECTOR from the camera coordinate system to vEye and vLookAt, right? Do you agree with all of the above?

Regarding question 4: I just want to know whether, when we use the D3DXVec3Subtract function, Subtract(vAt, vEye) is or is not equal to Subtract(vEye, vAt). As I understand it, vAt - vEye != vEye - vAt, because the directions are not the same; therefore vEye - vAt should be correct, because the direction goes from vEye to vAt. Do you agree? Since the subtract function's documentation does not say which parameter is "from" and which is the target, I asked whether the parameters have an order...

Regarding the last question, using the method from the book: when we move the camera up, we multiply the y-axis vector by a constant value and add it to vEye and vLook. Consider the following code:

vUp.x *= speed;
vUp.y *= speed;
vUp.z *= speed; // where speed is 10.0f

But after these operations, vUp is not extended by exactly 10 units, because vUp.x, vUp.y and vUp.z may not form a unit vector, so the change may be larger than 10. When we add vUp to vEye or vLook, we may not really move them by the speed of 10.0f. That's why I asked: this method does not add an exact distance to them. Do you agree, or do you understand what I am saying?

Wait... wait... [totally] Don't be nervous, OK?! [smile]
Maybe we are talking about different vectors, and that is why the doubts never get solved! [smile] Let us begin again:

Which vectors are the vEye, vLookAt and vUp that you mentioned in your last post? Are they the axes of the camera, or the inputs of the D3DXMatrixLookAtLH function?

Please answer that while I read your last post again [smile].

hi there!

I'm not quite following these transformations, but I tend to understand better when shown an example :)

So, could you take a look at this:

I have a player spaceship that stores an orientation matrix, which is updated by mouse and keyboard input, and I want to make a chase camera.

In order to build a flexible camera class, I want the camera to be updated based on the spaceship's orientation in the world.

What I've got so far (but it doesn't quite work):

D3DXMATRIX V; // <- my spaceship's orientation in world coords (including right, up, look, pos vectors)

//camera
D3DXVECTOR3 vShipDirection( V._31, V._32, V._33 );
D3DXVECTOR3 pEye = D3DXVECTOR3( V._31, V._32, V._33 ) + (f_distance * vShipDirection); //position the camera f_distance units behind the ship
D3DXVECTOR3 pLookAt = D3DXVECTOR3( V._41, V._42, V._43 ); //look at my ship
D3DXVECTOR3 vUp = D3DXVECTOR3( V._21, V._22, V._23 ); //my ship's up vector

D3DXMatrixLookAtLH( &viewMatrix, &pEye, &pLookAt, &vUp );

This works as long as the spaceship is at the origin: when I rotate the ship on all axes, the camera also rotates and stays f_distance behind the ship. But when I start moving the ship around, it doesn't make any sense...

Hoping for a little explanation of what might be wrong ;)
Thanks in advance

Morten Helvig

[Edited by - mhelvig on February 18, 2005 2:37:25 PM]

Quote:
D3DXMATRIX V; // <- my spaceship's orientation in world coords (including right, up, look, pos vectors)

//camera
D3DXVECTOR3 vShipDirection( V._31, V._32, V._33 );
D3DXVECTOR3 pEye = D3DXVECTOR3( V._31, V._32, V._33 ) + (f_distance * vShipDirection); //position the camera f_distance units behind the ship
D3DXVECTOR3 pLookAt = D3DXVECTOR3( V._41, V._42, V._43 ); //look at my ship
D3DXVECTOR3 vUp = D3DXVECTOR3( V._21, V._22, V._23 ); //my ship's up vector

D3DXMatrixLookAtLH( &viewMatrix, &pEye, &pLookAt, &vUp );

I didn't understand your code very well, but I know what you want.
If you already have the spaceship's coordinate system (shipX, shipY, shipZ), then you could do something like this:

Suppose the spaceship position is shipPos.

The position of the camera can be defined as:

cameraPos = shipPos - f_distance * shipZ

Note: if the camera is behind the spaceship, then f_distance should always be positive.

So you could get the view matrix with:

D3DXMatrixLookAtLH(&viewMatrix, &cameraPos, &shipPos, &shipY);
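Putting it together, here is a small sketch (assuming a Direct3D 9 device and that, as in your code, the ship's world matrix V stores the right, up, look and position vectors in rows 1 to 4; UpdateChaseCamera is just an illustrative name):

// Sketch of the chase camera, assuming the ship matrix V stores
// right/up/look/position in rows 1..4 as in the code above.
void UpdateChaseCamera( IDirect3DDevice9* pd3dDevice,
                        const D3DXMATRIX& V, float f_distance )
{
    D3DXVECTOR3 shipY  ( V._21, V._22, V._23 );   // ship's up axis
    D3DXVECTOR3 shipZ  ( V._31, V._32, V._33 );   // ship's look axis
    D3DXVECTOR3 shipPos( V._41, V._42, V._43 );   // ship's position

    // Place the camera f_distance units behind the ship, along -shipZ.
    D3DXVECTOR3 cameraPos = shipPos - f_distance * shipZ;

    D3DXMATRIX matView;
    D3DXMatrixLookAtLH( &matView, &cameraPos, &shipPos, &shipY );
    pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
}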


Recently I posted a thread showing a camera function that might make your life easier [smile]. Check the thread "D3DSmartCamera: a new flexible camera for you".

Nice!!!

You should definitely be a weightlifter, because you just lifted a couple of tons off my shoulders :)
Actually the D3DSmartCamera behaved the same as the old code in my project (sorry about the bad pseudocode I wrote in my previous post), but yours was easier to understand. So I found out that it was not the camera that was behaving weirdly: I was doing some incorrect calculations right before drawing (and then setting the miscalculated D3DTS_WORLD transform).

So now it's working perfectly.

Thanks a million!


Morten
