# How to transform a camera in a Maya-like way?


## Recommended Posts

Hi, for my main thesis I'm writing a program with a 3D user interface (C# / Managed DirectX), and I would like to implement camera controls similar to what Alias did in Maya: the user rotates, translates, and zooms the camera around the aim / look-at point.

What I have found out so far is that the camera coordinate system is often labeled UVN (where N is the axis orthogonal to the view plane, pointing into the screen, V is the up vector, and U is their cross product, pointing right). To be more specific about the camera in Maya:

Rotation: The camera's eye vector / position is rotated around the look-at point. The screen axes (x and y) represent the axes in camera space (u and v), so apparently I have to convert the screen coordinates into camera space. I assume that the up vector in camera space is rotated too.

Translation: Both the eye vector and the look-at vector are translated parallel to the view plane. That might be the most important behavior I'd like to have in my own implementation. If the camera instead transformed along the axes of the world coordinate system, weird effects could occur: for example, when the whole coordinate system is already rotated 90 degrees around the y-axis, rotating around the x-axis feels as if the z-axis is rotated instead of the x-axis, since, as seen through the camera, the x-axis points into the screen. Thus, a correct camera translation has to be parallel to the view plane.

Zooming: The distance between the eye and the look-at point decreases. That means the camera's position is shifted along the eye vector towards the look-at point (or away from it, respectively).

But where can I find hints on how to implement such a camera? Or does anyone already HAVE an implementation (C++ would be fine too)? I'd be grateful for any help I can get; it's a bit too tough for me [embarrass].

[Edited by - data2 on June 20, 2005 10:56:28 AM]
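The three operations described above (Maya calls them tumble, track, and dolly) can be sketched roughly like this. This is only a sketch under my own assumptions: the `Vec3` and `OrbitCamera` types and all function names are mine, not from any DirectX API, and the orbit uses Rodrigues' rotation formula around an arbitrary axis:

```cpp
#include <cmath>

// Minimal 3D vector helpers (hypothetical names; any math library works).
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }

struct OrbitCamera {
    Vec3 eye, target, up;

    // Track (pan): shift eye AND target along the camera's right/up axes,
    // so the motion is always parallel to the view plane.
    void pan(float dx, float dy) {
        Vec3 forward = normalize(target - eye);
        Vec3 right   = normalize(cross(up, forward));   // left-handed, like D3D
        Vec3 realUp  = cross(forward, right);
        Vec3 offset  = right * dx + realUp * dy;
        eye = eye + offset;
        target = target + offset;
    }

    // Dolly (zoom): move the eye along the view vector toward/away from target.
    void dolly(float amount) {
        Vec3 forward = normalize(target - eye);
        eye = eye + forward * amount;
    }

    // Tumble (orbit): rotate the eye about the target around the given axis,
    // rotating the up vector along with it (Rodrigues' rotation formula).
    void orbit(Vec3 axis, float angle) {
        Vec3 v = eye - target;
        axis = normalize(axis);
        float c = std::cos(angle), s = std::sin(angle);
        Vec3 rotated = v * c + cross(axis, v) * s + axis * (dot(axis, v) * (1 - c));
        eye = target + rotated;
        up  = up * c + cross(axis, up) * s + axis * (dot(axis, up) * (1 - c));
    }
};
```

Mapping mouse deltas to `pan`/`dolly`/`orbit` arguments (and choosing the orbit axes from the camera's current u/v axes) is then the remaining UI work.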

##### Share on other sites
Well, what DirectX does for me is compute the view matrix from the eye, look-at, and up vectors (all in world space). Their computation is much the same as what I found on Google:

Quote:
    zaxis = normal(cameraTarget - cameraPosition)
    xaxis = normal(cross(cameraUpVector, zaxis))
    yaxis = cross(zaxis, xaxis)

     xaxis.x                     yaxis.x                     zaxis.x                     0
     xaxis.y                     yaxis.y                     zaxis.y                     0
     xaxis.z                     yaxis.z                     zaxis.z                     0
    -dot(xaxis, cameraPosition) -dot(yaxis, cameraPosition) -dot(zaxis, cameraPosition) 1
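For reference, that quoted formula transcribes directly into code. This is only a sketch of the same math (the left-handed look-at construction D3DXMatrixLookAtLH performs); the `Vec3` type and helper names are my own:

```cpp
#include <cmath>

// Minimal helpers, named after the functions in the quoted formula.
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normal(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Builds the row-major left-handed view matrix, laid out as in the quote.
void lookAtLH(Vec3 eye, Vec3 target, Vec3 up, float m[4][4]) {
    Vec3 zaxis = normal(sub(target, eye));
    Vec3 xaxis = normal(cross(up, zaxis));
    Vec3 yaxis = cross(zaxis, xaxis);
    float out[4][4] = {
        { xaxis.x,          yaxis.x,          zaxis.x,          0 },
        { xaxis.y,          yaxis.y,          zaxis.y,          0 },
        { xaxis.z,          yaxis.z,          zaxis.z,          0 },
        { -dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1 },
    };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) m[r][c] = out[r][c];
}
```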

But what I need is to transform those three vectors when the user moves with the mouse. And that is where I need your help...

##### Share on other sites
Quote:
 Original post by data2
But what I need is to transform those three vectors when the user moves with the mouse. And that is where I need your help...

If I understand what you need correctly, the camera object's local forward/up/right vectors are already computed for you when you use the look_at() function to build the camera's transformation matrix. They make up the columns (or, for DirectX, rows) of this very transformation matrix (the order is: local right vector, local up vector, local forward vector, and finally the translation from the origin/parent).

##### Share on other sites
Well, yes, LookAtLH does indeed build a view matrix. That's not the point. What I need is to transform the three vectors whenever the user interacts with the scene, and I don't really know how to do that.

In another forum they told me to introduce a second matrix that holds the transformations and is multiplied onto the initial view matrix before passing it to the device. That means that for a translation I would simply translate that second matrix (the x/y/z of the translation vector being in camera space), and the camera is then translated parallel to the view plane. I implemented it, and it seems to work, at least for the translations.
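That second-matrix approach can be sketched as a plain 4x4 multiply, assuming Direct3D's row-vector convention (v * M) so the extra translation is post-multiplied onto the view matrix; the function names here are my own:

```cpp
#include <cstring>

// out = a * b for row-major 4x4 matrices; 'out' may alias an input.
void mul4x4(const float a[4][4], const float b[4][4], float out[4][4]) {
    float r[4][4] = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) r[i][j] += a[i][k] * b[k][j];
    std::memcpy(out, r, sizeof r);
}

// dx/dy/dz are in CAMERA space (screen right, screen up, into the screen),
// so the pan always happens parallel to the view plane.
void panView(float view[4][4], float dx, float dy, float dz) {
    float t[4][4] = {
        {   1,   0,   0, 0 },
        {   0,   1,   0, 0 },
        {   0,   0,   1, 0 },
        { -dx, -dy, -dz, 1 },   // moving the camera right shifts the world left
    };
    mul4x4(view, t, view);
}
```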

The zooming (moving the camera's position closer to the look-at point or farther away) I wanted to do like this: I take the vector from the camera position to the look-at point, scale it, and add it to the camera position. Afterwards, I recalculate the initial view matrix.
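That zoom step is a one-liner per component. A minimal sketch, with the `Vec3` type and function name as my own assumptions:

```cpp
// Scale the eye->lookat vector and add it to the eye position; the caller then
// rebuilds the view matrix from the new eye. A factor in (0, 1) zooms in,
// a negative factor zooms out. Note factor = 1 puts the eye exactly on the
// look-at point, so real code should clamp it.
struct Vec3 { float x, y, z; };

Vec3 zoomEye(Vec3 eye, Vec3 lookat, float factor) {
    return { eye.x + (lookat.x - eye.x) * factor,
             eye.y + (lookat.y - eye.y) * factor,
             eye.z + (lookat.z - eye.z) * factor };
}
```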

Concerning the rotation, I haven't gotten anywhere yet. I found something about trackball rotations on Google but haven't taken a closer look yet.

##### Share on other sites
Hmm, let me try again. Suppose you have the position of your camera defined by the vector Eye, and the point you want to aim the camera at defined by the vector Target.

Now, you can use these values to calculate the local forward vector, and use this vector to change the location of your camera and simulate zooming.

In a similar manner, you can use this forward vector to obtain the local right and up vectors, and use these to modify the location of your camera and simulate rotations. For example, by repeatedly adding the right vector to the camera location and re-orienting the camera after every translation so it always faces the Target, your camera effectively moves counter-clockwise around the target point, along a more or less coarse approximation of a circle drawn on the plane defined by the camera's local forward and right vectors. (The coarseness will depend on the size of a single 'step', and thus effectively on the frame rate.)

(You can test it in practice: stand in front of some object and keep taking a step to your right, then rotating yourself so you always look at that object after each step you make. You should do a full circle around the object, well, as long as you don't run into a wall as you go :s)

It's not the most technical approach (unless the size of the update step is infinitely small, the camera will be moved little by little farther away from the object of interest), but for something as simple as a user-controlled point of view it tends to work OK, and it's simple to do.
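The step-then-re-aim idea above can be sketched as follows. This is my own sketch, not code from the thread; the names are assumptions, and it additionally snaps the eye back to the original orbit radius after each step, which removes the slow outward drift mentioned above:

```cpp
#include <cmath>

// Minimal helpers (hypothetical names).
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
static Vec3 normalize(Vec3 v) { return scale(v, 1.0f / length(v)); }

// One orbit step: move along the local right vector, then re-aim at Target.
Vec3 orbitStep(Vec3 eye, Vec3 target, Vec3 up, float step) {
    Vec3 forward = normalize(sub(target, eye));
    Vec3 right   = normalize(cross(up, forward));   // left-handed, as in D3D
    float radius = length(sub(target, eye));
    Vec3 moved   = add(eye, scale(right, step));
    // Snap back to the original radius so the circle does not slowly grow.
    return add(target, scale(normalize(sub(moved, target)), radius));
}
```

With the radius snap, the approximation stays on the circle regardless of step size; without it, you get exactly the gradual outward drift described above.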
