Logic to graphics interpolation


I'm having some issues obtaining a smooth game experience. I'll explain my current situation and I hope someone has suggestions for improvement. Our game logic runs at a fixed rate of 50 FPS. Our graphics run at a variable rate. There is currently no threading, so our main loop looks something like this in pseudo code:
float timeLeft = 0;
while (true)
{
    // Update game logic
    timeLeft += timeSinceLastFrame;
    while (timeLeft >= 1/50)
    {
        logic.previous = logic.current;
        logic.current = updateLogic();
        timeLeft -= 1/50;
    }
    // Update graphics
    float alpha = timeLeft / (1/50);
    graphics.position = logic.previous.position + ((logic.current.position - logic.previous.position) * alpha);
    graphics.orientation = slerp(alpha, logic.previous.orientation, logic.current.orientation);
    render();
}
I believe this should give a pretty smooth experience, but it's still a bit jerky. I mainly suspect the jerkiness is caused by an irregular graphics FPS. Any ideas to get the game smooth? I'll also settle for links to papers that attack this problem (though most I've found seem very basic and flawed). There's also data like velocity available if needed, but I'd prefer to hear from someone with personal experience before setting off on a path of incorporating that.

Couple of possibilities - timeLeft is creeping due to floating point inaccuracy; this could cause some logical glitches in your loop structure and introduce stuttering.

The other thing that comes to mind is if you get lucky enough to actually hit 0 timeLeft when exiting the while loop. For example, suppose our frame time is 0.04 seconds (25 FPS). If we get lucky with floating point inaccuracy, then timeLeft will be 0 after 2 iterations of the inner while loop. This will cause alpha to be 0, which means you will use the last frame's position instead of the current frame's position. Since your world simulation wants to run at 50 FPS, this will cause a stutter.

Of course it is also possible that your logic can't reach 50 FPS. Suppose the game logic runs at 40 FPS due to a maxed-out CPU. Your frame time is then 0.025 seconds. The inner while loop will run once, and carry over a "spillage" of 0.005 seconds in timeLeft. Every 4 frames (0.1 seconds) that spillage adds up to a full 0.02-second tick, so you will get an extra tick of game world simulation, which means again you will experience a stutter.


I would do some debug logging and trace exactly how much time is elapsing between frames, what your true FPS rate is for the game logic, and so on.

float timeLeft = 0.225;
float alpha = timeLeft / (1/50);
std::cout << alpha << std::endl;


Produces:
1.#INF


1 (int) / 50 (int) = 0.
alpha = timeLeft / 0;

Quote:
Original post by ApochPiQ
Couple of possibilities - timeLeft is creeping due to floating point inaccuracy; this could cause some logical glitches in your loop structure and introduce stuttering.

Given the range of the values, this should be really minor.
I'll give it a try, but I don't think it'll matter much.

Quote:
Original post by ApochPiQ
The other thing that comes to mind is if you get lucky enough to actually hit 0 timeLeft when exiting the while loop. For example, suppose our frame time is 0.04 seconds (25 FPS). If we get lucky with floating point inaccuracy, then timeLeft will be 0 after 2 iterations of the inner while loop. This will cause alpha to be 0, which means you will use the last frame's position instead of the current frame's position. Since your world simulation wants to run at 50 FPS, this will cause a stutter.

I believe the behavior is correct if the value is 0: alpha = 0 means the graphics use exactly the previous logic frame's position.
Basically, the graphics always run one logic frame or less behind.

Quote:
Original post by ApochPiQ
Of course it is also possible that your logic can't reach 50 FPS. Suppose the game logic runs at 40 FPS due to a maxed-out CPU. Your frame time is then 0.025 seconds. The inner while loop will run once, and carry over a "spillage" of 0.005 seconds in timeLeft. Every 4 frames (0.1 seconds) that spillage adds up to a full 0.02-second tick, so you will get an extra tick of game world simulation, which means again you will experience a stutter.


I would do some debug logging and trace exactly how much time is elapsing between frames, what your true FPS rate is for the game logic, and so on.

I think that when it can't reach 50 FPS, it'll slow down more and more and eventually crash.
I'll do some research in this direction though.

Quote:
Original post by Antheus
float timeLeft = 0.225;
float alpha = timeLeft / (1/50);
std::cout << alpha << std::endl;


Produces:
Quote:
1.#INF


1 (int) / 50 (int) = 0.
alpha = timeLeft / 0;


It's pseudo code. In our actual code that value is pre-calculated using floats.
I wish the issue was something as easy as this :)
I appreciate the effort though.

If your graphics frame time is just slightly different from your logic step (20 ms at 50 fps), you will get low-frequency 'beating'. For example, let's consider a graphics frame time of 25 ms (40 fps). Every 4th graphics frame, two logic updates will run instead of one, so the motion jumps a double step, resulting in a jerky effect at 10 fps.

Or consider a graphics rate of 60 fps (17 ms). This time, every 5 game logic frames you will run a graphics frame without updating the logic ... again, resulting in a jerky 'stop' effect every 100 ms.

Of course if your graphics frame rate can dip horribly then you're in trouble, but that shouldn't be the case if you're using hardware acceleration unless your game is very complicated.

The answer is to do one of:
- Fix logic rate to graphics rate (at a fixed FPS). That means you need to estimate the minimum graphical framerate and run your timestep at that rate all the time. The advantage is it's easy; the disadvantage is that it restricts the accuracy of your simulation based on the graphics, which can be a problem.
- Fix graphics to logic rate: similar to the above, but when the logic is computationally expensive and takes longer.
- Draw an interpolated state. This is better if you can do it, because it ensures a smooth progression of graphics state. But it is not possible in all engines, and it means that your graphical display needs to interpolate between the last two complete frames ... so it can be up to a complete frame behind.

First of all, thank you all for your input.

Quote:
Original post by Bob Janova
If your graphics frame time is just slightly different from your logic step (20 ms at 50 fps), you will get low-frequency 'beating'. For example, let's consider a graphics frame time of 25 ms (40 fps). Every 4th graphics frame, two logic updates will run instead of one, so the motion jumps a double step, resulting in a jerky effect at 10 fps.

Or consider a graphics rate of 60 fps (17 ms). This time, every 5 game logic frames you will run a graphics frame without updating the logic ... again, resulting in a jerky 'stop' effect every 100 ms.

Of course if your graphics frame rate can dip horribly then you're in trouble, but that shouldn't be the case if you're using hardware acceleration unless your game is very complicated.

This is what the interpolation should be solving :(

Quote:
Original post by Bob Janova
The answer is to do one of:
- Fix logic rate to graphics rate (at a fixed FPS). That means you need to estimate the minimum graphical framerate and run your timestep at that rate all the time. The advantage is it's easy; the disadvantage is that it restricts the accuracy of your simulation based on the graphics, which can be a problem.

Fixed FPS is a bit tricky when targeting different hardware. Additionally, due to different monitor refresh rates, the ideal FPS differs per machine.
I also can't run the game logic at a variable rate; the physics engine surely won't like that.
Quote:
Original post by Bob Janova
- Fix graphics to logic rate: similar to the above, but when the logic is computationally expensive and takes longer.

Same as above I guess.
Quote:
Original post by Bob Janova
- Draw an interpolated state. This is better if you can do it, because it ensures a smooth progression of graphics state. But it is not possible in all engines, and it means that your graphical display needs to interpolate between the last two complete frames ... so it can be up to a complete frame behind.

That's what it's currently doing. However the result doesn't appear to be completely smooth.
I'm looking for alternative algorithms people have had success with.
