Camera Pan Problem

I'm designing a quick camera class that tries to emulate the functionality of the camera in 3ds Max. I ran into a bit of a stumbling block, however, when I went to implement a mouse-based panning system to move the camera along its x and y axes.

My first thought was to take the mouse location, subtract it from the last update's mouse position, multiply that cursor translation by a constant, and finally move the camera along its local axes by the result. This did not behave as I wanted, because if I zoomed in or out from the object, the panning would slow down or speed up. Basically, I needed the object I was panning around to follow the mouse, as it does in 3ds Max.

My next plan was to take four points on the camera's far plane and project them into world space:

----p3
----*----
|-------|
*p1-----*p2
|-------|
----*----
----p4

Subtracting point p1 from p2 at the near or far plane could give me the world distance across the camera's current FOV, and the same could be done with points p3 and p4 for the vertical distance. Then I could get the percentage of the screen that the mouse has moved across in this update and multiply it by the two values I calculated from the points. This worked just as poorly as the previous attempt, however.

Upon experimenting in 3ds Max and with a box I had at my desk, I noticed that when panning in 3ds Max the perspective on the objects does not change as one would expect if the camera were truly panning across in a straight line. So I come to this point at last: what is 3ds Max actually doing when it performs panning with the middle button, in a perspective viewport?
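
To make the first (constant-factor) approach concrete, here is roughly the kind of thing I mean; a minimal sketch assuming an XNA-style Vector3/Matrix API, with illustrative names:

// Sketch only: pan by a fixed amount per pixel of mouse movement.
// This is the version whose apparent speed changes as you zoom in or out.
void PanByConstant(ref Vector3 camPosition, ref Vector3 camTarget,
                   Matrix camRotation, Vector2 mouseDelta, float panSpeed)
{
    Vector3 translation = camRotation.Right * (-mouseDelta.X * panSpeed)
                        + camRotation.Up * (mouseDelta.Y * panSpeed);
    camPosition += translation;
    camTarget += translation;
}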
I went back to confirm what I said above about 3ds Max, and I found I was wrong: it does use a perspective projection when panning; I was just too close to notice before.

But what I am after is for the objects on screen to follow the location of the mouse exactly as the camera is panned.

A perspective projection can be thought of as two parallel planes, with one being smaller, correct?

basically:

--far----
-\-----/-
--\---/--
---\-/---
--near---

So when I project the location of my mouse into world space, with the goal of moving the camera by the distance between two world-space points, I have to choose where along that projection to take the depth. Basically, the distances get larger at the back of the projection...

Perhaps if I use the (0, 0, 0) world origin as the intersection I would get a more accurate result?
I don't have access to 3ds Max so I'm having a hard time visualizing what you want to do. Does the camera orbit around the object at a fixed distance, or does the view pan left/right/up/down as if the camera was rotating in place?
The camera orbits around the focal point, and the pan moves the camera's target [focal point].

I think I worked out what was causing the object not to keep up with the mouse: when I project the screen coordinates, I need to project them to the focal point's depth, so that the camera is moved by the translation that most closely matches the object at the focal point (which is (0, 0, 0) at first).

Problem is, I am not really sure how to get the point in space that lies at the focal point's depth by projecting screen coordinates to world coordinates.

I could get the camera's distance from the focal point, figure out what percentage of the camera's near-to-far plane range that is, and then calculate two points using that information, I guess... Anyone got any ideas as to where I am making a horrible mistake here? :p I am really just running into this head-first without too much know-how.
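
One way to get that point, sketched with an XNA-style Viewport.Unproject (assuming the current viewport, view, and projection matrices are at hand):

// Sketch: unproject the mouse at the near and far planes, then walk along that
// ray until it reaches the camera-facing plane through the focal point.
Vector3 UnprojectAtFocalDepth(Viewport viewport, Matrix view, Matrix projection,
                              Vector2 mouse, Vector3 camPosition, Vector3 camTarget)
{
    Vector3 nearPoint = viewport.Unproject(new Vector3(mouse, 0f), projection, view, Matrix.Identity);
    Vector3 farPoint = viewport.Unproject(new Vector3(mouse, 1f), projection, view, Matrix.Identity);
    Vector3 rayDir = Vector3.Normalize(farPoint - nearPoint);

    Vector3 viewDir = Vector3.Normalize(camTarget - camPosition);
    float t = Vector3.Dot(camTarget - nearPoint, viewDir) / Vector3.Dot(rayDir, viewDir);
    return nearPoint + rayDir * t;   // point under the mouse at the focal point's depth
}

Doing this for the previous and current mouse positions and subtracting the two results should give a world-space translation that keeps the point under the cursor fixed while panning.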

[Edited by - JacksonCougar on November 8, 2009 1:37:15 AM]
Quote:Original post by JacksonCougar
Problem is, I am not really sure how to get the point in space that lies at the focal point's depth by projecting screen coordinates to world coordinates.

This one at least is straightforward to do. You want to gluUnProject twice and cast a ray, then intersect it with something (the y=0 plane, or your list of objects, for instance). More details here.
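
The same idea in the thread's XNA-style C# (gluUnProject being the OpenGL equivalent), as a sketch that intersects the mouse ray with the y = 0 plane:

// Sketch: build a ray from the mouse position and intersect it with the y = 0 plane.
Vector3? MouseRayOnGroundPlane(Viewport viewport, Matrix view, Matrix projection, Vector2 mouse)
{
    Vector3 nearPoint = viewport.Unproject(new Vector3(mouse, 0f), projection, view, Matrix.Identity);
    Vector3 farPoint = viewport.Unproject(new Vector3(mouse, 1f), projection, view, Matrix.Identity);
    Vector3 dir = Vector3.Normalize(farPoint - nearPoint);

    if (Math.Abs(dir.Y) < 1e-6f)
        return null;                  // ray is (nearly) parallel to the plane

    float t = -nearPoint.Y / dir.Y;   // solve nearPoint.Y + t * dir.Y = 0
    if (t < 0f)
        return null;                  // intersection is behind the camera
    return nearPoint + dir * t;
}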

I'm really sleep-deprived here. I think I finally understood what you describe. So you select some object with the mouse, and when you move the mouse, the object moves, and the camera pans around to keep the object on screen? This makes everything I wrote pretty much redundant (good thing we have a delete function here). Unfortunately I'm not quite sure how to tackle this, either. The cheapest would be to always have the camera look at the object, but that's quite a cop-out.

[Edited by - lightbringer on November 8, 2009 1:07:57 AM]
I got a much better result by doing this:

float distanceFromFocal = (camPosition - camTarget).Length();
float totalCameraRange = 300f;
float percentDist = distanceFromFocal / totalCameraRange;

and then this:

Vector3 HorizontalTranslation = Vector3.Multiply((horizontalLeftFar - horizontalRightFar), percentDist);

Basically, I find the maximum world distance across the camera's FOV, then multiply it by the percentage of the distance the focal point is away from the camera, using vectors unprojected from the screen coordinates to calculate the world distance across the FOV.
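
Putting those pieces together, the whole pan step would look roughly like this. A sketch under the same XNA-style assumptions: leftFar/rightFar/topFar/bottomFar are points unprojected to the far plane at the edges of the view, and farPlaneDistance plays the role of totalCameraRange (the 300f) above.

// Sketch of the full pan step. mouseDelta is the cursor movement in pixels
// since the last update.
void Pan(ref Vector3 camPosition, ref Vector3 camTarget, Matrix camRotation,
         Vector3 leftFar, Vector3 rightFar, Vector3 topFar, Vector3 bottomFar,
         Vector2 mouseDelta, Viewport viewport, float farPlaneDistance)
{
    // How far along the camera's range the focal point sits.
    float percentDist = (camPosition - camTarget).Length() / farPlaneDistance;

    // World-space width and height of the view at the focal point's depth.
    float worldWidth = (rightFar - leftFar).Length() * percentDist;
    float worldHeight = (topFar - bottomFar).Length() * percentDist;

    // Pixel delta -> fraction of the screen -> world units along the camera's axes.
    Vector3 translation = camRotation.Right * (-mouseDelta.X / viewport.Width * worldWidth)
                        + camRotation.Up * (mouseDelta.Y / viewport.Height * worldHeight);

    camPosition += translation;
    camTarget += translation;
}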

Edit: although selecting the object is not strictly necessary, it would be the best way to set the focal point before panning around the object. However, all the translations affect the camera and not the object :]

I know I can be very hard to follow when I am trying to work a problem out at the same time as asking for help :| Sorry 'bout that ;p
Update: How do you find the opposite rotation of a rotation matrix?

I am rotating my camera around its target in an orbital style, and I need to realign the camera's axes...

So, for example, if I rotate the camera upwards by 30 degrees around the target, the camera's local rotation would need to rotate downwards 30 degrees to maintain its looking direction. Is there some easy way to take the inverse of a matrix or quaternion? (I am not sure if inverse is the word I am looking for :p)
What I do for an orbital camera is: I set its position to that of the target, then I create a new quaternion orientation based on stored pitch and yaw values. I then convert that to a rotation matrix, extract the facing vector (z-axis), and move the camera to the required zoom distance along that vector. I'm sure there are some ways to optimize this, but at least it works. I do invert rotations using quaternions, but I'm not sure how the math would work out with just matrices.
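
A sketch of that update, assuming XNA-style types and yaw/pitch stored in radians:

// Sketch: orient from stored yaw/pitch, then back the camera away from the
// target along its facing (z) axis by the zoom distance.
Vector3 OrbitPosition(Vector3 target, float yaw, float pitch, float zoomDistance)
{
    Quaternion orientation = Quaternion.CreateFromYawPitchRoll(yaw, pitch, 0f);
    Matrix rotation = Matrix.CreateFromQuaternion(orientation);
    return target + rotation.Backward * zoomDistance;
}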

There are some ideas here for inverting rotation matrices.

Edit: there is also a method of inverting the orientation based on transposing the orientation part of the local transform matrix.
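
As a minimal sketch of those two inverses (assuming a pure rotation matrix and a normalized quaternion):

// For a pure rotation matrix the inverse is its transpose;
// for a unit quaternion the inverse is its conjugate.
Matrix InvertRotation(Matrix rotation)
{
    return Matrix.Transpose(rotation);
}

Quaternion InvertOrientation(Quaternion orientation)
{
    return Quaternion.Conjugate(orientation);
}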

[Edited by - lightbringer on November 8, 2009 5:55:35 PM]

