Implementing lag for a 3rd person camera

Started by
26 comments, last by Gage64 15 years, 9 months ago
I've implemented a simple 3rd person camera. Right now, the camera moves exactly like the player object moves. I want the camera to lag slightly like in many 3rd person games. Here's the pseudo code I had in mind (runs every frame):
const lag = 0.25;  // quarter second delay

if moved
    moved = false;
    save current orientation;
    interp = true;
  
if interp
    timer += timeDelta;
    if timer > lag
        timer = 0;
        interp = false;
    t = transform timer from [0, lag] to [0, 1]
    interpolate by t from saved to current (which keeps changing!)
moved is set to true when the object moves. I implemented this but I'm not getting the results I want. Before I post my code - am I thinking about this correctly? Is there a simpler way to do this? Any help is greatly appreciated. [Edited by - Gage64 on July 9, 2008 7:06:51 AM]
Is this lag on the rotation of camera to face what the player object is facing, or lag on moving the camera as the player object moves away?
Quote:Original post by Naxos
Is this lag on the rotation of camera to face what the player object is facing, or lag on moving the camera as the player object moves away?


The former. I will try to implement lag for the movement after it works for the rotation (although I think the idea should basically be the same?).
If I'm reading your pseudo-code correctly, the current behavior looks like it would wait for a fourth of a second and then begin to move.

I'm not really sure what 't = transform timer from [0, lag] to [0,1]' means, but I'm assuming it's just a time-based view movement.


What sticks out in my mind is that the camera doesn't immediately move. Perhaps that's what you want, but when I think of most lagged third-person camera movement, the camera will begin to move immediately, but the velocity of the turn is slower than the player's.

So the player can turn X radians per second, and the camera can turn CAM_LAG*X radians per second, where CAM_LAG would be some fraction between 0 and 1.

Or just set a velocity for the camera regardless of how fast the player can turn.

And if the player can turn quite fast, then you could say, if the angle between the camera view and the player vector is greater than some limit (for example: 90 degrees), then increase the camera turn velocity.

Or the velocity could be based directly on the difference between the view vector and the player vector.

CamRotationVelocity = FEEL_GOOD_NUMBER * PlayerVector.getAngleBetween(CameraVector)


Which would be very elastic, I think: the angle grows, so the velocity grows, and vice versa.
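Something like this rough, untested sketch (MDX-style Matrix/Quaternion/Vector3 from Microsoft.DirectX; FEEL_GOOD_NUMBER and the +Z "forward" axis are just placeholders you'd tune or swap for your own conventions):

// Rotate the camera orientation toward the player orientation with an angular
// velocity proportional to the angle between their view directions.
const float FEEL_GOOD_NUMBER = 4.0f;   // placeholder, tune to taste

Matrix LagCameraToward(Matrix camOrient, Matrix playerOrient, float timeDelta)
{
    // View directions of the camera and the player (assuming +Z is "forward").
    Vector3 camFwd    = Vector3.TransformNormal(new Vector3(0, 0, 1), camOrient);
    Vector3 playerFwd = Vector3.TransformNormal(new Vector3(0, 0, 1), playerOrient);

    // Angle between the two view directions.
    float cosAngle = Vector3.Dot(Vector3.Normalize(camFwd), Vector3.Normalize(playerFwd));
    float angle = (float)Math.Acos(Math.Max(-1.0f, Math.Min(1.0f, cosAngle)));
    if (angle < 0.0001f)
        return playerOrient;                   // already (nearly) aligned

    // Angular velocity grows with the angle, which is what makes it elastic.
    float turnThisFrame = FEEL_GOOD_NUMBER * angle * timeDelta;
    if (turnThisFrame >= angle)
        return playerOrient;                   // would overshoot, so just snap

    // Rotate a fraction of the way toward the player's orientation.
    float t = turnThisFrame / angle;
    Quaternion from = Quaternion.RotationMatrix(camOrient);
    Quaternion to   = Quaternion.RotationMatrix(playerOrient);
    return Matrix.RotationQuaternion(Quaternion.Slerp(from, to, t));
}

Note that t works out to FEEL_GOOD_NUMBER * timeDelta, so the camera covers a fixed fraction of the remaining angle each frame, which is where the elastic feel comes from.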


Might not be the way you would like to go about it though... Would this work for you?
Yes there is.
Just store the last n camera positions and calculate the mean.
Use this mean as your current position when rendering the camera.

This will make the camera lag somewhat behind the object.

To make this time consistent you need to make n dependent on your framerate.

e.g.:
fps = 50
n = 20
If your fps rises to 100 you need to double n to n = 40, because you are effectively storing twice as many points in the same amount of time.

A better solution is to keep n fixed and only store the camera position at a fixed time interval (for example every 20 ms, i.e. 1000 ms / 50 at 50 fps).
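Here's a rough, untested sketch of that idea (Queue<T> comes from System.Collections.Generic, Vector3 is the MDX type; N and SampleInterval are just example numbers):

const int N = 20;                     // number of stored samples
const float SampleInterval = 0.02f;   // store a sample every 20 ms

Queue<Vector3> samples = new Queue<Vector3>();
float sampleTimer = 0.0f;

Vector3 SmoothedCameraPosition(Vector3 desiredPosition, float timeDelta)
{
    // Record the desired (un-lagged) position at a fixed rate, so the amount
    // of lag doesn't depend on the framerate.
    sampleTimer += timeDelta;
    while (sampleTimer >= SampleInterval)
    {
        sampleTimer -= SampleInterval;
        samples.Enqueue(desiredPosition);
        if (samples.Count > N)
            samples.Dequeue();        // drop the oldest sample
    }

    if (samples.Count == 0)
        return desiredPosition;

    // The mean of the stored positions is what the camera actually uses.
    Vector3 mean = new Vector3(0, 0, 0);
    foreach (Vector3 p in samples)
    {
        mean.X += p.X;
        mean.Y += p.Y;
        mean.Z += p.Z;
    }
    float inv = 1.0f / samples.Count;
    mean.X *= inv;
    mean.Y *= inv;
    mean.Z *= inv;
    return mean;
}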
http://www.8ung.at/basiror/theironcross.html
Quote:Original post by Naxos
If I'm reading your pseudo-code correctly, the current behavior looks like it would wait for a fourth of a second and then begin to move.


The intention is that the camera should start moving immediately, but it should take 0.25 seconds for it to catch up. Are you saying that that's not how it works?

Quote:I'm not really sure what 't = transform timer from [0, lag] to [0,1]' means, but I'm assuming it's just a time-based view movement.


t is the interpolation parameter given to the slerp function, so it should vary from 0 to 1, instead of from 0 to delay.
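In my code it's just a division, since timer runs from 0 to delay:

// timer is in [0, delay], so t ends up in [0, 1] for Slerp
float t = timer / delay;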

Quote:...the camera will begin to move immediately, but the velocity of the turn is slower than the player's.


That's what I'm trying to do. Actually I'm not sure how your proposed method is different than what I'm doing now (it feels different, but I can't quite explain to myself how).
Quote:Original post by Basiror
Yes there is.
Just store the last n camera positions and calculate the mean.
Use this mean as your current position when rendering the camera.

This will make the camera lag somewhat behind the object.


That actually sounds more complicated than what I'm doing. I'm not sure how to implement this but I will think about it.

Thank you for the suggestion.
I think you're right. I must've misread or misunderstood the code.

There is something about it though, that I can't quite put my finger on.

What behavior is it exhibiting that is different from what you wanted/expected?

Edit:
Also, 'interpolate by t from saved to current (which keeps changing!)'
What orientation is 'saved' and what is 'current'? When are they updated?

I wouldn't mind a peek at the code, too.
How about having an expected position for the camera that is fixed to the character, and an actual position for the camera that chases the expected position with finite velocity [or even acceleration], with the focus of the camera fixed. The player starts to move, and the camera stays fixated on the character, but smoothly attempts to chase where the camera *should* be. This way you do not need any previous camera positions: nothing gets saved, and you don't need to keep a bunch of old information around. It also lets you take advantage of all the stuff you can do with steering, so your camera doesn't clip through your terrain or go inside other characters.

Very simple example:
camera velocity = final position - camera position;
bound camera velocity between a minimum speed and a maximum speed;
camera delta = camera velocity * dt;
if (camera delta length < distance between camera position and final position)
    camera position += camera velocity * dt;
else
    camera position = final position; /* so it doesn't overshoot it and bounce around */
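In C# with the MDX Vector3 that might look roughly like this (untested; MinSpeed/MaxSpeed are made-up placeholder bounds):

// Move the camera toward where it *should* be, with speed proportional to the
// remaining distance but clamped between a minimum and a maximum.
const float MinSpeed = 0.5f;     // world units per second (placeholder)
const float MaxSpeed = 20.0f;    // world units per second (placeholder)

Vector3 ChaseCamera(Vector3 camPosition, Vector3 finalPosition, float dt)
{
    Vector3 toTarget = finalPosition - camPosition;
    float distance = toTarget.Length();
    if (distance < 0.0001f)
        return finalPosition;             // already there

    float speed = Math.Max(MinSpeed, Math.Min(MaxSpeed, distance));
    float step = speed * dt;
    if (step >= distance)
        return finalPosition;             // don't overshoot and bounce around

    // Move a fraction of the remaining way toward the target.
    float s = step / distance;
    return new Vector3(camPosition.X + toTarget.X * s,
                       camPosition.Y + toTarget.Y * s,
                       camPosition.Z + toTarget.Z * s);
}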
@Drigovas: Thanks, I will reread your description several times until I understand it. [smile]

@Naxos: Here is the relevant code (C#, MDX):

// Variables
Mesh obj;
Matrix objOrient;
Vector3 objPosition;
Vector3 cameraOffset;
Matrix camOrient;
Vector3 camPosition;
float timer;
bool moved = false;
bool interp = false;
Quaternion savedOrient;

// This is in MoveObject
if (keys.KeyState(Keys.Right))
{
    Matrix rot = Matrix.RotationY(DegToRad(timeDelta * rotateSpeed));
    objOrient *= rot;
    moved = true;
}
// Similar handling for other keys
// ...

// All this is in the update function which is called every frame
MoveObject(timeDelta);

const float delay = 0.25f;

if (moved)
{
    moved = false;
    savedOrient = Quaternion.RotationMatrix(camOrient);
    interp = true;
}

if (interp)
{
    timer += timeDelta;
    if (timer > delay)
    {
        timer = 0.0f;
        camOrient = objOrient;
        interp = false;
    }
    else
    {
        float t = timer / delay;
        Quaternion target = Quaternion.RotationMatrix(objOrient);
        Quaternion q = Quaternion.Slerp(savedOrient, target, t);
        camOrient = Matrix.RotationQuaternion(q);
    }
}
else
{
    camOrient = objOrient;
}


Quote:Original post by Naxos
There is something about it though, that I can't quite put my finger on.


That's exactly how I feel. [smile]

Quote:What behavior is it exhibiting that is different from what you wanted/expected?


Hard to explain. Say I want to rotate right. If I just tap the right arrow key, it seems to work fine (the object rotates and the camera lags behind slightly). Not great, but fine (hard to explain why it's not great...). But if I continue to press it, it's like the camera lags behind, quickly catches up, starts lagging again, quickly catches up, etc. The rotation feels very choppy.

Also, changing the value of delay doesn't change this behavior.

