Yet another question on camera transformation

Hi, I am new here. Recently I have been trying out RenderMonkey from ATI to begin my shader learning. When playing around with the sample projects, I was quite fascinated by the camera transformation (rotation) mechanism in its preview window when responding to cursor dragging. I am wondering what the underlying mechanics/theory behind it actually is, because I found that it is apparently not just a simple sequential x-rotation, y-rotation based on Euler angles, which would cause the whole scene to only rotate around the center of the screen when the mouse is dragged horizontally while the camera is pitched up 90 degrees (I really hope you guys get what I mean, since English is not my first language :-( ). The camera transformation I am observing in RenderMonkey is quite different. Can anyone explain it, or just give me a hint, so that I can get a clearer idea of what is actually happening when I drag the mouse in the preview window? Thanks in advance.
To make my question clearer to you guys, in case I didn't express it well enough:

Let's say the object targeted by the camera is located at (0,0,0) (the origin), and the camera is transformed freely in such a way that it always stays on an imaginary sphere of constant radius centered at the origin.

And now comes the tricky part:

In all the models I knew previously, whenever a mouse drag gives the camera a displacement in the x and y directions, one of the two directions only makes the object rotate about the object's own local axis, not the eye (global) axis (imagine an inclined UFO that only spins about its own y-axis).
What I desire, and what RenderMonkey achieves, is that both the x and y displacements signalled by the mouse dragging transform the camera appropriately, so that the object always seems to rotate about the horizontal or vertical viewing-space axes.
If I understand you correctly, then the effect could be achieved by rotating the camera as usual and re-locating it so that the object of interest is targeted again. The latter part means that the line-of-view of the camera is
V(k) := C + k * v , k >= 0
where C is the camera's location (a point vector), v is its viewing direction (a direction vector), and k is a running parameter.

Rotating the camera alters v, of course. You want the camera to be a defined distance d > 0 from the target, so that
k * | v | == d
holds (most often | v | is chosen to be 1, so that k == d).

So all you have to do is ensure that the camera's line-of-view hits the target located at T,
C + d * v == T
solve this for C,
C = T - d * v
and finally set the camera's location to that point.

In your case T is chosen to be the origin 0. Hopefully that is the stuff you're looking for...
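In code, that relocation step could look roughly like this (a minimal sketch assuming a simple camera struct; I use GLM here for the vector math, but any vector library will do):

#include <glm/glm.hpp>

struct Camera {
    glm::vec3 position;  // C, the camera's location
    glm::vec3 viewDir;   // v, kept normalized so that k == d
};

// After the camera has been rotated (viewDir has changed), place it so
// that the line-of-view still hits the target T at distance d.
void retarget(Camera& cam, const glm::vec3& target, float d)
{
    cam.viewDir  = glm::normalize(cam.viewDir);
    cam.position = target - d * cam.viewDir;   // C = T - d * v
}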
Thanks! Your reply helps me a lot.


But would the following alternative, which I just came up with, also work?
I could store the current UP and RIGHT vectors with respect to the camera, and whenever an x,y displacement is obtained from the mouse, simply translate C (the camera position) in the directions of RIGHT and UP by amounts depending on the x and y displacements respectively. Then I would normalize OC (the position vector of the camera relative to the target) so that its magnitude remains d, and finally update the UP and RIGHT vectors.
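Something like this is what I have in mind (just a rough C++ sketch with GLM; the names are made up):

#include <glm/glm.hpp>

// dx, dy: mouse displacement, already scaled to a suitable amount.
void panAndReproject(glm::vec3& C, glm::vec3& right, glm::vec3& up,
                     const glm::vec3& target, float d, float dx, float dy)
{
    C += dx * right + dy * up;                        // translate along RIGHT and UP
    C  = target + d * glm::normalize(C - target);     // keep |OC| == d
    glm::vec3 viewDir = glm::normalize(target - C);   // new line of view
    right = glm::normalize(glm::cross(viewDir, up));  // re-orthogonalize RIGHT
    up    = glm::cross(right, viewDir);               // update UP
}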


Quote:Original post by Jasonnfls
I could store the current UP and RIGHT vectors with respect to the camera, and whenever an x,y displacement is obtained from the mouse, simply translate C (the camera position) in the directions of RIGHT and UP by amounts depending on the x and y displacements respectively. Then I would normalize OC (the position vector of the camera relative to the target) so that its magnitude remains d, and finally update the UP and RIGHT vectors.

That's actually not the same. Usually rotation means that the same linear motion distance of the pointer rotates the camera by the same _angle_. Your method does so if and only if the smallest motion (e.g. one pixel) causes a redraw. Otherwise the rotation angle grows less than proportionally with the motion distance accumulated until the next redraw. Nevertheless you can use your method if the motion is small, since it then works like an approximation.

IMHO it would be better to convert the motion into an axis/angle pair, convert that to a rotation matrix or quaternion (or whatever you prefer), and rotate the camera by that.
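For example, the motion could be mapped to one rotation about the view-space UP axis (horizontal part) and one about the view-space RIGHT axis (vertical part). A sketch with GLM quaternions follows; the sensitivity constant and the names are my assumptions and certainly not RenderMonkey's actual code:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// dx, dy: pointer displacement in pixels; sensitivity: radians per pixel.
// camRight and camUp are the camera's current axes, assumed normalized.
glm::quat motionToRotation(float dx, float dy,
                           const glm::vec3& camRight, const glm::vec3& camUp,
                           float sensitivity)
{
    glm::quat yaw   = glm::angleAxis(dx * sensitivity, camUp);
    glm::quat pitch = glm::angleAxis(dy * sensitivity, camRight);
    return yaw * pitch;  // combined incremental rotation
}

// Applying it: rotate the view direction, then re-locate the camera as shown above:
//   cam.viewDir  = motionToRotation(dx, dy, right, up, sens) * cam.viewDir;
//   cam.position = target - d * cam.viewDir;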
Quote:Original post by haegarr
then the effect could be achieved by rotating the camera as usual ..


But could you please elaborate a bit more on the rotation? In what manner should it be done? Still Euler-angle based, or an incremental rotation?

Thank you so much, really!

I prefer quaternions for storing and computing rotations. They have the advantage of higher numerical stability and simpler interpolation math. The entire transformation state is stored in instances of AffineMap, a class that wraps a 4x4 homogeneous matrix and can be altered in defined ways only (hence "affine"). One such way is to multiply the AffineMap by a quaternion.

For interactive user input of rotations, you can use a pivot/angle pair (from which a quaternion can be computed easily) if the plane of rotation is known, or two vectors spanning a plane with an angle in-between (from which a pivot/angle pair can be computed easily), or one of the "virtual trackball" solutions available on the net (which can also be understood as pivot/angle based input methods).

Euler angles can be understood as 3 rotations with pre-defined pivots (namely the cardinal axes). But for generality it is IMHO better to ignore Euler angles and store the current orientation as a quaternion (or perhaps a matrix on which orthogonalization is run frequently). Then think of an appropriate way to map user input to a pivot/angle rotation, compute a quaternion from it, and multiply it onto the current orientation representation to yield its successor.
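Stripped down to the essentials, that could look like the following (AffineMap mentioned above is not a standard class, so this sketch just keeps the orientation in a plain GLM quaternion):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct Orientation {
    glm::quat q = glm::quat(1.0f, 0.0f, 0.0f, 0.0f);  // identity (w, x, y, z)

    // pivot: rotation axis (normalized), angle in radians.
    void rotate(const glm::vec3& pivot, float angle)
    {
        // multiply the incremental rotation onto the current orientation;
        // renormalizing keeps the quaternion numerically stable
        q = glm::normalize(glm::angleAxis(angle, pivot) * q);
    }

    glm::mat4 toMatrix() const { return glm::mat4_cast(q); }  // for the view/model matrix
};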

The exact way of mapping user input to a rotation depends on your needs. RenderMonkey definitely has its own way, but off the top of my head I can think of 3 trackball methods, the circular handle method, and 2 view-plane oriented methods; others may exist as well. I suggest you first look up virtual trackballs on the net. AFAIK they were introduced by Gavin Bell and Thant Tessman in 1988. A variant is the ArcBall. Look e.g. at the Graphics FAQ, item 5.02, for more information.
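For reference, the classic virtual-trackball mapping works roughly like this: project the window coordinates of the drag's start and end points onto a sphere (with a hyperbolic sheet outside of it, as in the commonly used variant), then build a pivot/angle rotation from the two projected points. This is just a sketch, not RenderMonkey's implementation:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <cmath>

// x, y: window coordinates already mapped to roughly [-1, 1]; z points toward the viewer.
static glm::vec3 projectToSphere(float x, float y)
{
    float d2 = x * x + y * y;
    float z  = (d2 < 0.5f) ? std::sqrt(1.0f - d2)   // inside: on the unit sphere
                           : 0.5f / std::sqrt(d2);  // outside: on the hyperbola
    return glm::normalize(glm::vec3(x, y, z));
}

// Rotation that carries the drag's start point onto its end point.
glm::quat trackballRotation(const glm::vec2& from, const glm::vec2& to)
{
    glm::vec3 a    = projectToSphere(from.x, from.y);
    glm::vec3 b    = projectToSphere(to.x, to.y);
    glm::vec3 axis = glm::cross(a, b);               // the pivot
    if (glm::length(axis) < 1e-6f)
        return glm::quat(1.0f, 0.0f, 0.0f, 0.0f);    // degenerate drag: no rotation
    float angle = std::acos(glm::clamp(glm::dot(a, b), -1.0f, 1.0f));
    return glm::angleAxis(angle, glm::normalize(axis));
}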

