hymerman

Smooth framerate-independent chase camera


Hi, I want my camera to chase a moving object, but I want to do it in a smooth way. I work out the desired position (the position the camera tends to) each frame, and move the camera some fraction of the distance towards this. That works alright when the framerate is steady, but as the framerate changes the camera gets closer to or further away from the chased object, since obviously the fraction is applied more or less frequently. Moving a fraction of the distance each *frame* is obviously wrong, it needs to be something like "move a fraction of the distance each *second*", but how would I split this over multiple frames that could each be different durations? I can't just multiply the fraction by the frame duration since A) a particularly long frame could yield a fraction > 1 and B) it would be like applying interest at irregular intervals, the end result would be different due to the compound nature of it. So, please help out, how do I do this? Thanks :)
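(For reference, the per-frame approach described in the question might look roughly like the sketch below; the names and the single float axis are purely illustrative, not from any particular codebase.)

// Hypothetical sketch of the per-frame approach described above, using a
// single float axis for brevity. Because the blend is applied once per
// frame, the effective smoothing speed depends on the frame rate.
float cameraPos = 0.0f;    // current camera position
float desiredPos = 10.0f;  // position the camera tends towards

void UpdateCameraOncePerFrame()
{
    const float fraction = 0.1f;  // fraction of the remaining distance, per *frame*
    cameraPos += (desiredPos - cameraPos) * fraction;
}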

Here is one way of doing it that gives you a constant delta time between updates (which keeps the calculations simple).
First, split rendering and updating apart, so that updating runs at a fixed rate (30 or 60 times a second, for instance) and rendering runs as many times as possible in between.
That is, updating always has the higher priority of the two.
As a side note, make sure you still get a decent frame rate on your minimum-spec system; if not, it may be a sign that updating is taking all the time and leaving no room for rendering (remember, updating > rendering).
Back to the question. You also need to keep two or more update samples (see below), most likely the current one and the previous one.
Then, when you render the camera, you interpolate between those samples using the time elapsed within the current frame, producing only a temporary "frame position"; the actual simulated position is left untouched.
Simple linear interpolation (other schemes may require more than two samples) gives a decent result in most cases, depending on your camera's trajectory, of course.
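(A minimal sketch of that render-time interpolation, with illustrative names and a single float axis; how the samples are stored is up to you.)

// The camera state is only advanced in fixed-step updates; at render
// time we blend the two most recent samples instead of re-simulating.
float prevPos = 0.0f;  // camera position at the previous fixed update
float currPos = 0.0f;  // camera position at the most recent fixed update

// alpha in [0, 1]: how far render time has progressed between the two updates
float InterpolatedRenderPosition(float alpha)
{
    return prevPos + (currPos - prevPos) * alpha;
}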

This way you also avoid running the full update every frame (meaning more fps), and at the same time you keep small numerical errors (floating-point inaccuracy) from accumulating every frame.
They will still accumulate, since it is still floating-point math, but on a much smaller scale, tied to the fixed update rate rather than the frame rate.
The latter is actually the primary reason for doing this, not the first. The first is just a happy side effect :)

[Edited by - nife87 on July 3, 2008 4:19:25 AM]

Surely that's effectively fixing the framerate? No, that's not a workable solution - I can't ever guarantee that the frame rate of updates will remain constant, and the whole idea doesn't really sit well with me.

There must be some way this can be done with a variable frame-rate, I refuse to believe that this hasn't been solved in any game, ever...

Allow your camera's update method to handle a variable framerate (i.e. give it the time elapsed since the last update), and use an exponential decay as the interpolation function. You'll get the benefits of both smooth movement and robust variable framerate handling.
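(A minimal sketch of what that could look like; the function name and the rate constant lambda are my own naming, with dt given in seconds.)

#include <cmath>

// Framerate-independent smoothing step using exponential decay.
// 'lambda' controls how quickly the camera closes in on the target:
// after 1/lambda seconds roughly 63% of the remaining distance has been
// covered, no matter how that time was split up into frames.
float SmoothTowards(float cameraPos, float targetPos, float dt, float lambda)
{
    const float blend = 1.0f - std::exp(-lambda * dt);  // always in [0, 1)
    return cameraPos + (targetPos - cameraPos) * blend;
}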

Quote:
Original post by hymerman
Surely that's effectively fixing the framerate? No, that's not a workable solution - I can't ever guarantee that the frame rate of updates will remain constant, and the whole idea doesn't really sit well with me.


I think you misunderstood. I do not fix the framerate at all, I just give updating a higher priority than rendering.
Try to look at it this way (assuming Time() returns the elapsed time in milliseconds as a double).


const double UpdateInterval = 1000.0 / 60.0;  // fixed timestep in milliseconds
const double EpochTime = Time();              // time at which the loop starts
double update_time = 0.0;                     // simulated time consumed so far

while (1)
{
    // Run as many fixed-step updates as needed to catch up with real time.
    while (Time() - EpochTime - update_time >= UpdateInterval)
    {
        UpdateFunction(update_time);
        update_time += UpdateInterval;
    }

    // Render once, interpolating between the last two update samples.
    RenderAndInterpolateFunction();
}


Here, the update may run several times per rendered frame, and the update timestep always remains constant.
No matter how many times the rendering takes place, the updates will always happen at these times (with UpdateInterval = 1000/60 ≈ 16.67 ms):
EpochTime + 0.0
EpochTime + 16.67
EpochTime + 33.33
EpochTime + 50.0

And so forth. Well, strictly speaking only the rate remains constant; the increments are not guaranteed to be exactly UpdateInterval since we are dealing with floating-point math, but this can easily be converted to integer math instead.

There might be input synchronization problems, and this is probably not a complete solution, but that is the basic idea. I will dig into it later and check it again.

It sounds like your current process is something like this:

weight = 0.99
camera_position = weight*camera_position + (1-weight)*target_position


This produces a negative exponential dropoff with respect to the number of frames processed. Knowing this, you can compensate for uneven framerates by calculating the weight like this:

weight = (1-k) ^ dt


where k is the fractional change in distance in one second, and dt is the number of seconds since the last frame.
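(In code, that could look something like the sketch below; the function name is just for illustration.)

#include <cmath>

// 'k' is the fraction of the remaining distance to close per second,
// 'dt' is the time in seconds since the last frame.
float FrameRateIndependentStep(float cameraPos, float targetPos, float k, float dt)
{
    const float weight = std::pow(1.0f - k, dt);  // retention factor for this frame
    return weight * cameraPos + (1.0f - weight) * targetPos;
}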

Thanks dwahler, I think that's exactly what I was after :)

nife87: ah, I did misunderstand. Ok, that's better than locking the framerate, but it's still not really suitable for me, sorry :) I prefer just being able to let all systems do their update once a frame. If you go defining priorities of all systems in the game, I can see the code getting hairy, when it should be very, very simple.

Quote:
Original post by hymerman
nife87: ah, I did misunderstand. Ok, that's better than locking the framerate, but it's still not really suitable for me, sorry :) I prefer just being able to let all systems do their update once a frame. If you go defining priorities of all systems in the game, I can see the code getting hairy, when it should be very, very simple.


All systems? I only treat updating and rendering as "systems" in this manner :)
The only issue I see is with input, where each keystroke (or other input event) should carry a timestamp so it can be processed in the correct update, rather than the usual brute-force method of processing all pending keystrokes every update.
You only need to order subsystems within the update function as usual (AI before physics, or whatever you need), so there is no disadvantage there.

The main issue with your delta-time method is that, because of the floating-point issues, it becomes much less accurate over time than a constant delta-time method, no matter which integration algorithm you choose (hopefully not plain Euler).

Of course, you should choose whichever you prefer, but you should at least be aware of the rounding errors that will occur.

[Edited by - nife87 on July 3, 2008 1:10:24 PM]
