opinions on frame based vs time based movement

interesting, i just started getting this problem too (it had been working all day until right now. weird)

WOW this is weird, it just stopped being jerky (all i did was rebuild it)

[edited by - unliterate on May 30, 2004 2:11:45 AM]
has anyone solved this yet? does anyone know why we are getting jerky movement? this is not cool... could it be because i'm using SDL_GetTicks()? (what are you guys using for timing?).. maybe i should look into using a more high-performance timing system that's cross platform (maybe use #if's to use QueryPerformanceCounter on windows and something else on linux?)... thanks for any help!!
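for example, a minimal sketch of that #if idea might look like the following (QueryPerformanceCounter on windows is what "Query" refers to above; clock_gettime on the linux side is my assumption):

#ifdef _WIN32
#include <windows.h>
#else
#include <time.h>
#endif

// Returns a monotonically increasing time in seconds.
double get_time_seconds()
{
#ifdef _WIN32
    static LARGE_INTEGER freq = { 0 };
    LARGE_INTEGER now;
    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return (double)now.QuadPart / (double)freq.QuadPart;
#else
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
#endif
}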
FTA, my 2D futuristic action MMORPG
try using this class and see if it fixes it
you don't have to use this in your final product
you can just test it out
if you get the same results you know SDL_GetTicks is ok

class CTimer
{
protected:
    double  m_Frequency;
    __int64 m_StartClock;

    float m_FrameTime;
    float m_FrameStart;
    float m_FrameEnd;

    float m_FpsCount;
    float m_FpsUpdate;
    float m_Fps;

public:
    float  GetFrameTime() { return m_FrameTime; }
    float  GetFps()       { return m_Fps; }
    double GetTime();

    void Init();
    void Update();
};


#include <windows.h>
#include "timer.h"

double CTimer::GetTime()
{
    __int64 EndClock;

    QueryPerformanceCounter((LARGE_INTEGER*)&EndClock);

    return (double)(EndClock - m_StartClock) * m_Frequency;
}

void CTimer::Init()
{
    __int64 rate;

    // Get the performance frequency
    QueryPerformanceFrequency((LARGE_INTEGER*)&rate);

    // Invert it so we can multiply instead of divide
    m_Frequency = 1.0 / (double)rate;

    // Get the start time
    QueryPerformanceCounter((LARGE_INTEGER*)&m_StartClock);

    m_FrameTime  = 0.0f;
    m_FrameStart = (float)GetTime();
    m_FrameEnd   = 0.0f;

    m_FpsCount   = 0.0f;
    m_FpsUpdate  = 0.0f;
    m_Fps        = 0.0f;
}

void CTimer::Update()
{
    // Update the timing
    m_FrameEnd   = (float)GetTime();
    m_FrameTime  = m_FrameEnd - m_FrameStart;
    m_FrameStart = m_FrameEnd;

    // Increase the Fps counter
    m_FpsCount++;

    // Update the Fps once per second
    if( (m_FrameStart - m_FpsUpdate) > 1.0f )
    {
        m_FpsUpdate = m_FrameStart;
        m_Fps       = m_FpsCount;
        m_FpsCount  = 0.0f;
    }
}
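to drive it, something like this should work (update() and draw() are placeholders for your own code, not part of the class):

CTimer timer;
timer.Init();

while (running)
{
    timer.Update();
    float dt = timer.GetFrameTime();  // seconds since the previous frame

    update(dt);  // scale movement by dt for time-based motion
    draw();
}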
I don't really like relying on frames per second to give me a guess as to how fast my objects should be moving in time, nor do I like relying on ticks, because that's an inaccurate amount of time when you're getting massively high FPS. I really haven't found a way around it being jerky, but what you *can* do is make the program perform at a constant 60 FPS (assuming all your frames finish faster than 1/60 of a second). So once you determine the frame interval, you can just sleep() the extra few milliseconds until 1/60 of a second has passed since the last frame was drawn. Make sense? The human eye can't see more than around 60 FPS anyway.
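A minimal sketch of that cap, using SDL_Delay in place of the sleep() mentioned above (update() and draw() are placeholders for your own logic and rendering):

#include <SDL.h>

void update();  // your per-frame game logic (placeholder)
void draw();    // your rendering (placeholder)

const Uint32 FRAME_MS = 1000 / 60;  // ~16 ms per frame at 60 FPS

void run_capped(bool &running)
{
    while (running)
    {
        Uint32 frame_start = SDL_GetTicks();

        update();
        draw();

        // Sleep off whatever is left of this frame's 1/60th of a second.
        Uint32 elapsed = SDL_GetTicks() - frame_start;
        if (elapsed < FRAME_MS)
            SDL_Delay(FRAME_MS - elapsed);
    }
}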
My fellow Americans I have just signed legislation that outlaws Russia forever. Bombing will commence in five minutes.
The easiest way to do time-based movement is what steveth45 was trying to describe (although he erroneously labeled it as frame-based and his code is a mess).

Basically, you have a timer set to your target/maximum framerate such as 100fps. In your main loop you check each frame how many ticks have passed since the previous frame. Run the game logic/physics code that many times. If that number was greater than zero, display. Repeat.

// time and oldtime are in logic ticks; get_time() is assumed to return
// the current time at the timer's rate (e.g. 100 ticks per second).
Uint32 time = get_time(), oldtime, d;

while (running)
{
    oldtime = time;
    time = get_time();

    if (time > oldtime)
    {
        d = time - oldtime;     // ticks elapsed since the last frame
        while (d--)
            run_game_logic();   // one fixed-size logic step per tick
        display();              // only redraw when the logic advanced
    }
}


"massively high FPS" is meaningless when your monitor can only display <100. The ideal FPS is the same as the monitor''s refresh rate. Unfortunately it varies on PC (unlike consoles) so you can''t just always use 60...
The jerkiness might be from a low-resolution timer. You could try averaging the last few time deltas.

[edit] Short example I haven't tested but I think should work. Try it if you're feeling bold.

// Circular buffer holding the last 8 frame deltas, plus a running total.
Uint32 delta_table[8], *delta_pos = delta_table, delta_total = 0;
Uint32 old_time = SDL_GetTicks(), new_time;
memset(delta_table, 0, sizeof delta_table);

while(running)
{
    // Swap the oldest delta out of the total and the newest one in.
    delta_total -= *delta_pos;
    delta_total += (*delta_pos = (new_time = SDL_GetTicks()) - old_time);
    old_time = new_time;

    // Wrap the buffer pointer around.
    if(++delta_pos == delta_table + sizeof delta_table / sizeof *delta_table)
        delta_pos = delta_table;

    // Average delta divided by TICKS_PER_SECOND (1000.0f for SDL_GetTicks,
    // which returns milliseconds) gives seconds per frame.
    float game_speed = delta_total / (float)(sizeof delta_table / sizeof *delta_table) / TICKS_PER_SECOND;

    // do stuff here
}
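Presumably game_speed then scales any per-frame movement at the "do stuff here" point, something like this (PLAYER_SPEED and player_x are hypothetical, my own names):

// PLAYER_SPEED is in units per second; game_speed is seconds per frame.
player_x += PLAYER_SPEED * game_speed;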


[edited by - smart_idiot on June 1, 2004 5:56:18 PM]
Chess is played by three people. Two people play the game; the third provides moral support for the pawns. The object of the game is to kill your opponent by flinging captured pieces at his head. Since the only piece that can be killed is a pawn, the two armies agree to meet in a pawn-infested area (or even a pawn shop) and kill as many pawns as possible in the crossfire. If the game goes on for an hour, one player may legally attempt to gouge out the other player's eyes with his King.
quote:
I agree, except that you said an advantage of time-based movement is that you guarantee game speed will be the same regardless of the system. I have achieved that with frame-based code, so I don't see how that's an advantage.


There is no way to guarantee a frame rate on any system, so you haven't achieved that.

---------------------------
Hello, and Welcome to some arbitrary temporal location in the space-time continuum.

well i found the cause of the jerkiness in my program: vsync. with it on, the game stayed at about 85 fps as it should, but it seemed to jump into the quad digits at a random interval averaging out to about one second. does using the .QuadPart of a LARGE_INTEGER only use part of it? (meaning it would loop back to a value of 0 every once in a while). with vsync off and the program running at ~2.2k fps, there is no jerkiness.

