# Implementing lag for a 3rd person camera


## Recommended Posts

I've implemented a simple 3rd person camera. Right now, the camera moves exactly like the player object moves. I want the camera to lag slightly like in many 3rd person games. Here's the pseudo code I had in mind (runs every frame):
```
const lag = 0.25;  // quarter second delay

if moved
    moved = false;
    save current orientation;
    interp = true;

if interp
    timer += timeDelta;
    if timer > lag
        timer = 0;
        interp = false;
    t = transform timer from [0, lag] to [0, 1]
    interpolate by t from saved to current (which keeps changing!)
```

`moved` is set to true when the object moves. I implemented this but I'm not getting the results I want. Before I post my code, am I thinking about this correctly? Is there a simpler way to do this? Any help is greatly appreciated. [Edited by - Gage64 on July 9, 2008 7:06:51 AM]
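For reference, here is the pseudo code as a runnable Python sketch (all names are illustrative; a single yaw angle with linear interpolation stands in for the orientation slerp):

```python
# Runnable sketch of the pseudo code above (names are illustrative, not
# from any real API). A single yaw angle with linear interpolation
# stands in for the orientation slerp; LAG mirrors the quarter-second delay.

LAG = 0.25  # seconds for the camera to catch up

class CameraLag:
    def __init__(self):
        self.timer = 0.0
        self.interp = False
        self.saved = 0.0  # camera yaw saved when movement is detected
        self.cam = 0.0    # current camera yaw

    def update(self, time_delta, obj_yaw, moved):
        if moved:
            self.saved = self.cam  # "save current orientation"
            self.interp = True
        if self.interp:
            self.timer += time_delta
            if self.timer > LAG:
                self.timer = 0.0
                self.interp = False
                self.cam = obj_yaw
            else:
                t = self.timer / LAG  # map [0, LAG] onto [0, 1]
                # interpolate from saved toward the (possibly moving) target
                self.cam = self.saved + (obj_yaw - self.saved) * t
        else:
            self.cam = obj_yaw
        return self.cam
```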

##### Share on other sites
Is this lag on the rotation of camera to face what the player object is facing, or lag on moving the camera as the player object moves away?

##### Share on other sites
Quote:
 Original post by Naxos
 Is this lag on the rotation of camera to face what the player object is facing, or lag on moving the camera as the player object moves away?

The former. I will try to implement lag for the movement after it works for the rotation (although I think the idea should basically be the same?).

##### Share on other sites
If I'm reading your pseudo-code correctly, then the behavior currently looks like it would wait for a fourth of a second, then begin to move.

I'm not really sure what 't = transform timer from [0, lag] to [0,1]' means, but I'm assuming it's just a time-based view movement.

What sticks out in my mind is that the camera doesn't immediately move. Perhaps that's what you want, but when I think of most lagged third-person camera movement, the camera will begin to move immediately, but the velocity of the turn is slower than the player's.

So the player can turn X radians per second, and the camera can turn CAM_LAG*X radians per second. Where CAM_LAG would be some fraction between 0 and 1.

Or just set a velocity for the camera regardless of how fast the player can turn.

And if the player can turn quite fast, then you could say, if the angle between the camera view and the player vector is greater than some limit (for example: 90 degrees), then increase the camera turn velocity.

Or the velocity could be based directly on the difference between the view vector and the player vector.

CamRotationVelocity = FEEL_GOOD_NUMBER * PlayerVector.getAngleBetween(CameraVector)

Which would be very elastic, I think. Angle grows, thus velocity grows, and vice-versa.
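That elastic idea can be sketched in Python (FEEL_GOOD_NUMBER is the post's own placeholder; everything else is hypothetical), again using a single yaw angle for simplicity:

```python
# Camera turn speed proportional to the angle still to cover, so a large
# gap turns fast and a small gap turns slowly (spring-like / elastic).
FEEL_GOOD_NUMBER = 4.0  # hypothetical tuning constant

def step_camera(cam_yaw, player_yaw, dt):
    velocity = FEEL_GOOD_NUMBER * (player_yaw - cam_yaw)  # degrees per second
    return cam_yaw + velocity * dt
```

As long as FEEL_GOOD_NUMBER * dt stays below 1 per frame, the camera approaches the player's yaw without overshooting; larger values make it snappier.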

Might not be the way you would like to go about it though... Would this work for you?

##### Share on other sites
Yes there is.
Just store the last n camera positions and calculate the mean.
Use this mean as your current position when rendering the camera.

This will make the camera lag somewhat behind the object.

To make this time consistent you need to make n dependent on your framerate.

e.g.:
fps = 50
n = 20
If your fps rises to 100, you need to double n to n = 40, because you are effectively storing twice as many points in the same time.

A better solution is to keep n fixed and only store the position of your camera every 20 ms (at fps = 50, that's 1000 ms / 50 = 20 ms).
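A sketch of the moving-average idea (Python; all names hypothetical), keeping the last n recorded positions and rendering the camera at their mean:

```python
from collections import deque

class AveragedCamera:
    """Render the camera at the mean of the last n recorded positions;
    the mean trails (lags) behind the object as it moves."""
    def __init__(self, n):
        self.history = deque(maxlen=n)  # old entries fall off automatically

    def record(self, pos):
        # call at a fixed interval (e.g. every 20 ms) so the lag stays
        # frame-rate independent, as suggested above
        self.history.append(pos)

    def render_position(self):
        return sum(self.history) / len(self.history)  # 1-D for brevity
```

For a 3-D camera you would average each component (or store vectors and divide by len).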

##### Share on other sites
Quote:
 Original post by Naxos
 If I'm reading your pseudo-code correctly, then the behavior currently looks like it would wait for a fourth of a second, then begin to move.

The intention is that the camera should start moving immediately, but it should take 0.25 seconds for it to catch up. Are you saying that that's not how it works?

Quote:
 I'm not really sure what 't = transform timer from [0, lag] to [0,1]' means, but I'm assuming it's just a time-based view movement.

t is the interpolation parameter given to the slerp function, so it should vary from 0 to 1, instead of from 0 to delay.

Quote:
 ...the camera will begin to move immediately, but the velocity of the turn is slower than the players.

That's what I'm trying to do. Actually I'm not sure how your proposed method is different from what I'm doing now (it feels different, but I can't quite explain to myself how).

##### Share on other sites
Quote:
 Original post by Basiror
 Yes there is. Just store the last n camera positions and calculate the mean. Use this mean as your current position when rendering the camera. This will make the camera lag somewhat behind the object.

That actually sounds more complicated than what I'm doing. I'm not sure how to implement this but I will think about it.

Thank you for the suggestion.

##### Share on other sites
I think you're right. I must've misread or misunderstood the code.

There is something about it though, that I can't quite put my finger on.

What behavior is it exhibiting which is different from what you wanted/expected?

Edit:
Also, 'interpolate by t from saved to current (which keeps changing!)'
What orientation is 'saved' and what is 'current'? When are they updated?

I wouldn't mind a peek at the code, too.

##### Share on other sites
How about having an expected position for the camera that is fixed to the character, and an actual position that chases the expected position with finite velocity [or even acceleration], with the focus of the camera fixed. The player starts to move, and the camera stays fixated on the character, but smoothly attempts to chase where the camera *should* be. This way you don't need any previous camera positions, nothing gets saved, and you don't need to keep a bunch of old information around. This also lets you take advantage of all the stuff you can do with steering, so that your camera doesn't clip through your terrain or go inside other characters.

Very simple example:
```
camera velocity = final position - camera position
bound camera velocity between a minimum speed and a maximum speed
camera delta = camera velocity * dt
if (camera delta length < distance between camera position and final position)
    camera position += camera velocity * dt
else
    camera position = final position  /* so it doesn't overshoot it and bounce around */
```
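Here is a runnable 1-D rendition of that example (Python; MIN_SPEED and MAX_SPEED are hypothetical tuning values):

```python
MIN_SPEED = 1.0   # hypothetical speed bounds, units per second
MAX_SPEED = 10.0

def chase(cam_pos, final_pos, dt):
    direction = 1.0 if final_pos >= cam_pos else -1.0
    distance = abs(final_pos - cam_pos)
    # bound the chase speed between a minimum and a maximum
    speed = max(MIN_SPEED, min(MAX_SPEED, distance))
    delta = speed * dt
    if delta < distance:
        return cam_pos + direction * delta
    return final_pos  # snap to the target so it doesn't overshoot and bounce
```

Because only the current and target positions are needed each frame, no history has to be stored.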

##### Share on other sites
@Drigovas: Thanks, I will reread your description several times until I understand it. [smile]

@Naxos: Here is the relevant code (C#, MDX):

```
// Variables
Mesh obj;
Matrix objOrient;
Vector3 objPosition;
Vector3 cameraOffset;
Matrix camOrient;
Vector3 camPosition;
float timer;
bool moved = false;
bool interp = false;
Quaternion savedOrient;

// This is in MoveObject
if (keys.KeyState(Keys.Right))
{
    Matrix rot = Matrix.RotationY(DegToRad(timeDelta * rotateSpeed));
    objOrient *= rot;
    moved = true;
}
// Similar handling for other keys
// ...

// All this is in the update function which is called every frame
MoveObject(timeDelta);

const float delay = 0.25f;

if (moved)
{
    moved = false;
    savedOrient = Quaternion.RotationMatrix(camOrient);
    interp = true;
}

if (interp)
{
    timer += timeDelta;
    if (timer > delay)
    {
        timer = 0.0f;
        camOrient = objOrient;
        interp = false;
    }
    else
    {
        float t = timer / delay;
        Quaternion target = Quaternion.RotationMatrix(objOrient);
        Quaternion q = Quaternion.Slerp(savedOrient, target, t);
        camOrient = Matrix.RotationQuaternion(q);
    }
}
else
{
    camOrient = objOrient;
}
```

Quote:
 Original post by Naxos
 There is something about it though, that I can't quite put my finger on.

That's exactly how I feel. [smile]

Quote:
 What behavior is it exhibiting which is different from what you wanted/expected ?

Hard to explain. Say I want to rotate right. If I just tap the right arrow key, it seems to work fine (the object rotates and the camera lags behind slightly). Not great, but fine (hard to explain why it's not great...). But if I continue to press it, it's like the camera lags behind, quickly catches up, starts lagging again, quickly catches up, etc. The rotation feels very choppy.

Also, changing the value of delay doesn't change this behavior.

##### Share on other sites
I think I may have figured out why it's behaving that way.

Consider this scenario:

Camera rotation is constrained along the vertical axis (be that Y or Z for you) for this example.

delay = 0.24 (for easier division :) most of the numbers I choose from now on will be for convenience)

Say the player turns right 24 degrees at time 0.
The savedOrient is 0 degrees, the objOrient is 24.

At time 0.03:
- camOrient is updated with Slerp(0,24, .03 / .24)
- so camOrient is now at 3 degrees

At time 0.06:
- camOrient is updated with Slerp(0,24, .06 / .24)
- so camOrient is now at 6 degrees

At time 0.09:
- camOrient is updated with Slerp(0,24, .09 / .24)
- so camOrient is now at 9 degrees

Just before time 0.12, objOrient is updated to be 48 degrees.
So now 'savedOrient' = 9 degrees! and 'target' = 48 (it is from objOrient)

Thus!

At time 0.12:
- camOrient is updated with Slerp( 9, 48, .12 / .24)
- so camOrient is now at 0.5 * (48-9) + 9 = 28.5 (assuming linear interp)

At time 0.15:
- camOrient is updated with Slerp( 9, 48, .15 / .24)
- so camOrient is now at 33.375

At time 0.18:
- camOrient is updated with Slerp( 9, 48, .18 / .24)
- so camOrient is now at 38.25

Before time 0.21, it moves to the right again!
So objOrient(& target) = 72, and savedOrient = 38.25

At time 0.21:
- camOrient is updated with Slerp( 38.25, 72, .21 / .24)
- so camOrient is now at 67.78125

At time 0.24:
- camOrient is updated with Slerp( 38.25, 72, .24 / .24)
- so camOrient is now at 72 ( it just snaps right to the end )

And now that timer has reset to 0, the cycle starts all over again, going slowly, then trying to catch up to the objOrient as it's moving away. The way it is now, it's racing to get to it within .24 delay deadline, even when the objOrient changes.
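The walkthrough above can be reproduced numerically (Python sketch; plain linear interpolation of a single angle stands in for slerp, as the walkthrough assumes):

```python
def slerp1d(a, b, t):
    # linear interpolation of one yaw angle, standing in for slerp
    return a + (b - a) * t

delay = 0.24
step_009 = slerp1d(0, 24, 0.09 / delay)  # 9.0 degrees at time 0.09
# the player turns again, so savedOrient = 9 and target = 48
step_012 = slerp1d(9, 48, 0.12 / delay)  # 28.5 degrees at time 0.12
# one 0.03 s frame covered 19.5 degrees where the previous frames covered 3:
# that sudden catch-up is the choppiness being described
```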

##### Share on other sites
Thank you very much for taking the time to do this analysis. It made me realize that if the player starts rotating again, I have to reset the timer somehow, but I'm not sure how. I tried making it smaller (by dividing it), resetting it to 0, and I also tried changing the first conditional to:

if (moved && !interp)

But it didn't make any significant difference. Sometimes the camera would only start rotating when the object stops rotating, which obviously looks terrible.

I think I'm going about this the wrong way. I will consider some of the alternatives suggested and see if I can implement them. If anyone has any other suggestions, I would love to hear them.

Thanks again to everyone for their help.

##### Share on other sites
I can't really say whether this will work as you would like, but I would try something simple, like setting timer = 0 when the objOrient is updated in MoveObject(timeDelta).

Let me know how it goes :)

##### Share on other sites
Quote:
 Original post by Naxos
 setting timer = 0 when the objOrient is updated in MoveObject(timeDelta).

If I do that, the camera doesn't rotate but jitters slightly, and when the object stops rotating it quickly rotates to catch up.

##### Share on other sites
Here's an excellent series of tutorials showing how to implement a third person camera in C++ (D3D or OpenGL) or XNA, using a spring system for lag (which I suspect is the "proper" way).

http://www.dhpoware.com/demos/index.html

##### Share on other sites
Quote:
 Original post by JMab
 Here's an excellent series of tutorials showing how to implement a third person camera in C++ (D3D or OpenGL) or XNA, using a spring system for lag (which I suspect is the "proper" way). http://www.dhpoware.com/demos/index.html

Thanks, I'll take a look.

##### Share on other sites
Edit: Changed condition of if statement...
Edit2: Changed contents of if block (sorry!)

I think that perhaps the jitter is caused by the timer being reset every frame as you're holding down the key, therefore, I came up with this little idea...

Thank you for indulging me :) I want to see this working too

Not much changed I don't think

Don't know the ins and outs of C#, so please forgive any syntax errors.
```
/// ... Snip! ... ///

// All this is in the update function which is called every frame
MoveObject(timeDelta);

const float delay = 0.25f;

/////////////// Addition! ////////////
const float TIME_THRESHOLD = 0.0175f; // Feel-good number

if (moved)
{
    /////////////// Addition! ////////////
    if (!interp || (interp && timer > TIME_THRESHOLD))
    {
        savedOrient = Quaternion.RotationMatrix(camOrient);
        moved = false;
        timer = 0;
    }
    interp = true;
}

/// ... Snip! ... ///
```

##### Share on other sites
I added that snippet and it doesn't work... The camera sort of jumps around and doesn't even behave consistently.

Actually, I just noticed that if I reset timer to 0, either where you listed or inside MoveObject(), then it almost works, but again the behavior is not consistent and it occasionally jumps around.

Even when it does "work", it doesn't look as good as I had hoped. Can't explain why exactly. When I compare it with this, for example, the movement there looks much smoother, but I can't explain exactly what's different.

It makes me feel that even if I do get this to work using this method, it will not look very good and I should try a different approach.

##### Share on other sites
Quote:
 Original post by JMab
 Here's an excellent series of tutorials showing how to implement a third person camera in C++ (D3D or OpenGL) or XNA, using a spring system for lag (which I suspect is the "proper" way). http://www.dhpoware.com/demos/index.html

Seconded on the spring system; I've seen it done with spectacular results when simulating a rider's view inside a vehicle on rough terrain.

Alternatively, you could look into some flavor of a PID controller. They're common in robotics and mechanics, are easy enough to code, and you can control the overshoot/settling time/frequency of oscillation through three (or fewer) parameters, though you'll probably just care about settling time, shoving overshoot/frequency to nil. The nice thing is that they react well to changing targets, so moving the camera before it's settled is no big deal.
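A minimal PID sketch in Python (the gains and names are hypothetical tuning values; with ki = 0 it degenerates to the PD control a camera usually wants):

```python
class PIDController:
    """Drive a value toward a (possibly moving) target.
    kp/ki/kd are hypothetical gains; tune them for settling time."""
    def __init__(self, kp=8.0, ki=0.0, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, current, target, dt):
        error = target - current
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # correction velocity from the three PID terms
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return current + output * dt
```

Moving the target mid-flight is fine; the controller simply starts correcting toward the new target, which is exactly the "changing targets" property mentioned above.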

There's probably a ready-made snippet somewhere on the web, apart from the pseudocode at the bottom of the article...

##### Share on other sites
I've tried a slightly different approach:

```
protected override void Update(float timeDelta)
{
    MoveObject(timeDelta);

    const float lag = 0.5f;

    // Get the camera orientation
    Quaternion q1 = Quaternion.RotationMatrix(camOrient);
    // Get the object's orientation
    Quaternion q2 = Quaternion.RotationMatrix(objOrient);

    float t = timeDelta / lag;
    Quaternion q;
    if (CmpQuats(q1, q2, 0.0003f))
    {
        // If it's very close, just use the object's orientation
        q = q2;
    }
    else
    {
        q = Quaternion.Slerp(q1, q2, t);
    }

    // Set the camera orientation
    camOrient = Matrix.RotationQuaternion(q);

    // Interpolate the position
    camPosition = Vector3.Lerp(camPosition, objPosition, t);
}
```

This works pretty well, except that if I rotate the object for a long period, the camera hangs and starts jumping back and forth. Does anyone see anything wrong with this code?

##### Share on other sites
I recently wrote a 3rd person camera for the project I'm working on, and we made extensive use of "smooth values". They're basically small objects that contain a current value, a target value, a smoothing parameter (aka the time constant), and an exponential interpolation function. Every frame you update the smooth values with the delta time, and they interpolate the values smoothly towards their targets. The benefits of these objects are that they're simple, very smooth, relatively fast[1], and never overshoot the targets since Ae^(-kt) + B has a horizontal asymptote at B. Each update, the math looks something like this:

```
float Velocity = this->Target - this->Current;
this->Current = this->Target - Velocity * exp(-DeltaTime / this->SmoothParam);
```

I use these smooth value objects for pitch and yaw however it should be trivial to adapt for quaternions.

[1] Instead of using exp() itself, you can get away with the first three or four terms of the Taylor series.

##### Share on other sites
What is the exact meaning of timeDelta? Doesn't it denote the time elapsed since the last frame update? If so, is it guaranteed anywhere that timeDelta doesn't exceed lag? Due to
t = timeDelta / lag
and no visible restriction on t, I assume the intended interpolation may accidentally behave as an extrapolation.

Still under the above assumption, it seems to me a fundamental problem to make the "follow me" behavior dependent on the frame rate. For a high framerate, timeDelta would be low, and as a consequence t would be close to 0: the camera wouldn't be rotated much. For a low framerate, timeDelta would be high (as said, perhaps even greater than lag), and the orientation would be closer to the target's (or behave "chaotically" when t exceeds 1).
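The frame-rate dependence is easy to check numerically with a small sketch (Python; all names hypothetical, linear interpolation standing in for slerp):

```python
def per_frame_slerp(cam, target, dt, lag):
    # linear stand-in for Slerp(cam, target, dt / lag), applied once per frame
    return cam + (target - cam) * (dt / lag)

def simulate(fps, seconds=1.0, target=90.0, lag=0.5):
    cam, dt = 0.0, 1.0 / fps
    for _ in range(int(seconds * fps)):
        cam = per_frame_slerp(cam, target, dt, lag)
    return cam

# After the same second of simulated time, a 30 fps run and a 120 fps run
# leave the camera at measurably different angles.
```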

IMHO, the approaches of Basiror and Drigovas have the advantage to be able to work unrelated to the framerate.

EDIT: This post is not related to Zipster's above, but to Gage64's.

[Edited by - haegarr on July 10, 2008 7:09:09 AM]

##### Share on other sites
Quote:
 Original post by haegarr
 What is the exact meaning of timeDelta? Doesn't it denote the time elapsed since the last frame update? If so, is it guaranteed anywhere that timeDelta doesn't exceed lag? Due to t = timeDelta / lag and no visible restriction on t, I assume the intended interpolation may accidentally behave as an extrapolation.

```
if (t > 1.0f)
    t = 1.0f;
```

Doesn't make any difference. If I replace timeDelta with a small constant, the rotation feels jittery and the problem doesn't go away.

I seem to remember that slerp always takes the shortest path from one orientation to the other, but if you give it two orientations that are 180 degrees apart, the behavior is undefined. Could this have something to do with this?

##### Share on other sites
Quote:
Original post by Gage64
Quote:
 Original post by haegarr
 What is the exact meaning of timeDelta? Doesn't it denote the time elapsed since the last frame update? If so, is it guaranteed anywhere that timeDelta doesn't exceed lag? Due to t = timeDelta / lag and no visible restriction on t, I assume the intended interpolation may accidentally behave as an extrapolation.

```
if (t > 1.0f)
    t = 1.0f;
```

Doesn't make any difference. If I replace timeDelta with a small constant, the rotation feels jittery and the problem doesn't go away.

I seem to remember that slerp always takes the shortest path from one orientation to the other, but if you give it two orientations that are 180 degrees apart, the behavior is undefined. Could this have something to do with this?

Why not just slerp the movement? I used this for a camera in a 3rd person game and it worked great.

```
percent = magic number which feels right in-game
current = current camera position
target = character position + rigid camera offset
effective = slerp(target, current, percent * time-delta)
set camera pos to effective
```

Time delta works to smooth out movement so that a short frame moves less and a longer frame moves more. It's usually in seconds so that figures are comprehensible before being multiplied by time-delta (which may be 0.02 for a 20 ms frame).

```
// This moderates the player's movement so that a short frame moves less
// (smaller multiplier) and a longer frame moves more to 'catch up'
player.pos += player.direction * (player.speed * time_delta)
```