blueshogun96

C++ How to fix this timestep once and for all?

Recommended Posts

One of the biggest reasons why I haven't released my game is this annoying timestep issue.  To be frank, this game was poorly planned, poorly coded, and was originally written as a small tech demo and a mini-game.  Now it has evolved into a fully featured game with a very messy code base.  If you thought Lugaru was bad, Looptil is far worse!  The problem is that the delta is not consistent, even at 60fps; sometimes enemies don't spawn fast enough, which is a big reason why the game is broken.

static uint64_t last_time = 0;
uint64_t current_time = time_get_time(); //( get_current_time() * 1000.0f );

int fps_limit = 60;
float frame_time = float( current_time - last_time );

if( last_time != 0 )
	This->m_delta_speed = frame_time / ( 1000.0f / 60.0f );

And this is my timing function:

uint64_t time_get_time()
{
#ifdef _WINRT
	return GetTickCount64();
#endif
	
#if __ANDROID__	/* TODO: Fix std::chrono for Android NDK */
	uint64_t ms = 0;
	timeval tv;

	gettimeofday( &tv, NULL );

	ms = tv.tv_sec * 1000;
	ms += tv.tv_usec / 1000;

	return ms;
#else
	/* steady_clock is monotonic, so it can't jump if the user changes the
	   system time (system_clock can) */
	std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();
	std::chrono::steady_clock::duration tp = now.time_since_epoch();
	std::chrono::milliseconds ms = std::chrono::duration_cast<std::chrono::milliseconds>(tp);
	return (uint64_t) ms.count();
#endif
}

Now I know some of you will cringe when you see GetTickCount64(), but that's the only function that gives me reliable results on Windows 10 (UWP) ports, so that's staying. 

One more thing to note here: my game has a badly written game loop.  It uses a switch statement, followed by draw_game_mode() and update_game_mode(), so I kinda screwed myself there.  I tried changing it, but it broke the game completely, so I left it in its messy state.  Is it possible to simply have a proper delta calculation function?  Right now it adjusts itself based on the current frame time.  This may not be the best of ideas, but it was something I whipped up because I needed the game to run okay when it drops to 30fps without running at half speed.  This works in general, but it's inaccurate and causes problems.
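(For reference, a minimal sketch of such a helper, assuming time_get_time() returns milliseconds — the helper name and the 60fps reference are purely illustrative.  The crucial detail is that the previous timestamp must be stored every frame:)

```cpp
#include <cstdint>

// Hypothetical helper: turn two millisecond timestamps into a delta
// multiplier relative to an ideal 60fps frame (~16.67ms).  1.0 means the
// frame took exactly one 60fps frame; 2.0 means it took twice as long.
float delta_speed_from_ms( uint64_t current_ms, uint64_t last_ms )
{
	float frame_time = float( current_ms - last_ms );
	return frame_time / ( 1000.0f / 60.0f );
}

// Per-frame usage sketch -- note last_time is updated every frame:
//   uint64_t now = time_get_time();
//   if( last_time != 0 )
//       This->m_delta_speed = delta_speed_from_ms( now, last_time );
//   last_time = now;
```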

Any ideas?  Thanks.

Shogun

EDIT: Feel free to ask anything in case I missed a vital detail.  My lunch break is ending and it's time for me to go.  Thanks.


Although I see that link shared a lot, it actually made my timing issues worse for this particular game.  On future projects, I'll be sure to follow that guide from the start to avoid these headaches.

Also, I fixed the problem.  Instead of using frame times, I used my game's actual frame rate divided by 1000.  Now it works perfectly (so far).  L. Spiro is going to kill me if he reads this, but I just want this game to work!

Thanks.

Shogun


Frame rate = Frames Per Second by most usual definitions. Or, Frames Per 1000 Milliseconds, if you will.

If your framerate is N frames per second, then it is also true that your framerate is N/1000 frames per millisecond. Frame time is the inverse: milliseconds per frame, i.e. seconds per frame × 1000.

It sounds like you just have a units/order-of-magnitude mixup in the original code, and your 1000 is adjusting for it.
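To pin the units down with a couple of helpers (names are purely illustrative):

```cpp
// Frame rate <-> frame time conversions, just to make the units explicit.
double frame_time_ms( double fps )      { return 1000.0 / fps; } // ms per frame
double fps_from_frame_time( double ms ) { return 1000.0 / ms; }  // frames per second

// 60fps -> ~16.67ms per frame
// 16ms per frame -> 62.5fps
```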

On 7/30/2017 at 11:43 PM, blueshogun96 said:

Although I see that link shared a lot, it actually made my timing issues worse for this particular game.  On future projects, I'll be sure to follow that guide from the start to avoid these headaches.

Also, I fixed the problem.  Instead of using frame times, I used my game's actual frame rate divided by 1000.  Now it works perfectly (so far).  L. Spiro is going to kill me if he reads this, but I just want this game to work!

Thanks.

Shogun

You.

Dirty.

RAT!!

You didn’t make the game work, you just hid the problem under a rug.  It will work differently on various devices so I am not sure how this helps you release anything.

You don’t show the whole game loop.  What is This->m_delta_speed?
Are you accumulating time from 0 = launch of game?  Why the conversion to float?


L. Spiro

On 30/07/2017 at 6:40 AM, blueshogun96 said:

 


static uint64_t last_time = 0;
uint64_t current_time = time_get_time(); //( get_current_time() * 1000.0f );

int fps_limit = 60;
float frame_time = float( current_time - last_time );

if( last_time != 0 )
	This->m_delta_speed = frame_time / ( 1000.0f / 60.0f );

 

Is that the real algorithm? You never update last_time in that algorithm.

On 30/07/2017 at 6:40 AM, blueshogun96 said:

Now I know some of you will cringe when you see GetTickCount64(), but that's the only function that gives me reliable results on Windows 10 (UWP) ports, so that's staying. 

 

Every Win10/UWP device supports the QueryPerformanceCounter/Frequency API - the normal way to do precision timing.

Milliseconds are shitty for game timing. If your loop frequency is 60Hz, then a millisecond timer is accurate to +/- 6%... What's worse is that GetTickCount64 says that it typically has 10-16ms accuracy (+/-60% to 96% error) :(

On 31/07/2017 at 4:43 PM, blueshogun96 said:

Also, I fixed the problem.  Instead of using frame times, I used my game's actual frame rate divided by 1000.  Now it works perfectly (so far).  L. Spiro is going to kill me if he reads this, but I just want this game to work!

 

Maybe you've got code that just happens to output a small value every frame, which is not actually a measurement of delta time, but happens to simply be some arbitrary number that's small enough to act as a plausible fixed timestep value.

e.g. if you simply hardcode an arbitrary delta, such as "m_delta_speed = 0.06f;" do you get similar results?

On 8/4/2017 at 4:41 AM, L. Spiro said:

You.

Dirty.

RAT!!

You didn’t make the game work, you just hid the problem under a rug.  It will work differently on various devices so I am not sure how this helps you release anything.

You don’t show the whole game loop.  What is This->m_delta_speed?
Are you accumulating time from 0 = launch of game?  Why the conversion to float?


L. Spiro

Yes, now I am finding the flaws as they surface.  Sometimes after coming out of the background or a suspended state, the FPS calculation will spew a really high number and cause the game to move rapidly for one second, then go back to normal.  This often gets the player killed.  So yes, I dun f@#%ed up even more.

The entire game loop is too large and is a complete mess (I'll never code a game this way ever again).  The delta_speed variable is a percentage that is multiplied against each entity's speed value so that it moves at an adjusted speed based on the frame rate.  I am not accumulating time, as I did not plan this thing ahead or even consider the need for time-based movement when I originally wrote it.  Then when primitive counts started reaching the millions, frame rates dropped and I realized "I dun screwed up".

 

On 8/4/2017 at 7:30 AM, Hodgman said:

Is that the real algorithm? You never update last_time in that algorithm.

Every Win10/UWP device supports the QueryPerformanceCounter/Frequency API - the normal way to do precision timing.

Milliseconds are shitty for game timing. If your loop frequency is 60Hz, then a millisecond timer is accurate to +/- 6%... What's worse is that GetTickCount64 says that it typically has 10-16ms accuracy (+/-60% to 96% error) :(

Maybe you've got code that just happens to output a small value every frame, which is not actually a measurement of delta time, but happens to simply be some arbitrary number that's small enough to act as a plausible fixed timestep value.

e.g. if you simply hardcode an arbitrary delta, such as "m_delta_speed = 0.06f;" do you get similar results?

The loop is updated further down.  I forgot to add that.

If millisecond timing is a bad design choice, then I will do away with it pronto.  I wasn't aware of the poor accuracy, and if the margin of error is that great, then I'll most definitely stop using it.  I wrote that half-arsed timing function out of laziness.  Speaking of high resolution timers, I'll need one that's portable to all three major OSes, which I did find here: http://roxlu.com/2014/047/high-resolution-timer-function-in-c-c--

/* ----------------------------------------------------------------------- */
/*
  Easy embeddable cross-platform high resolution timer function. For each 
  platform we select the high resolution timer. You can call the 'ns()' 
  function in your file after embedding this. 
*/
#include <stdint.h>
#if defined(__linux)
#  define HAVE_POSIX_TIMER
#  include <time.h>
#  ifdef CLOCK_MONOTONIC
#     define CLOCKID CLOCK_MONOTONIC
#  else
#     define CLOCKID CLOCK_REALTIME
#  endif
#elif defined(__APPLE__)
#  define HAVE_MACH_TIMER
#  include <mach/mach_time.h>
#elif defined(_WIN32)
#  define WIN32_LEAN_AND_MEAN
#  include <windows.h>
#endif
static uint64_t ns() {
  static uint64_t is_init = 0;
#if defined(__APPLE__)
    static mach_timebase_info_data_t info;
    if (0 == is_init) {
      mach_timebase_info(&info);
      is_init = 1;
    }
    uint64_t now;
    now = mach_absolute_time();
    now *= info.numer;
    now /= info.denom;
    return now;
#elif defined(__linux)
    static struct timespec linux_rate;
    if (0 == is_init) {
      clock_getres(CLOCKID, &linux_rate);
      is_init = 1;
    }
    uint64_t now;
    struct timespec spec;
    clock_gettime(CLOCKID, &spec);
    now = (uint64_t) spec.tv_sec * 1000000000ULL + (uint64_t) spec.tv_nsec; /* integer math avoids double rounding */
    return now;
#elif defined(_WIN32)
    static LARGE_INTEGER win_frequency;
    if (0 == is_init) {
      QueryPerformanceFrequency(&win_frequency);
      is_init = 1;
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return (uint64_t) ((1e9 * now.QuadPart)  / win_frequency.QuadPart);
#endif
}
/* ----------------------------------------------------------------------- */

Since this game is cross-platform, it has to work on everything.  If nanoseconds are the way to go, then I'll use that instead.
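(A sketch of how ns() readings could replace the millisecond math from earlier — the only change to the delta formula is the numerator.  The helper name is illustrative:)

```cpp
#include <cstdint>

// With a nanosecond timer, the ideal 60fps frame is 1e9/60 ns instead of
// 1000/60 ms.  Everything else about the delta calculation stays the same.
double delta_from_ns( uint64_t elapsed_ns )
{
	const double ns_per_60fps_frame = 1000000000.0 / 60.0; // ~16,666,667 ns
	return double( elapsed_ns ) / ns_per_60fps_frame;
}
```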

And yes, using the frame rate isn't really a reliable way to do this (it blew up in my face).  I found that using a fixed value will give me consistent results.  A fixed delta doesn't generate any issues for me. 

Shogun


As a side note, in case it helps you (on some future project, I guess): Bullet (the physics library) ticked at a constant rate, but it still allowed for frame-specific updates.  Of course, it only interpolated those between two known states.  So it has both predictable behavior and hi-frame-rate butter-smooth goodness; apparently nobody noticed it's a tick late.

I tried something similar in a TD game years ago (I don't think you remember it): the implication is that you have to correct for inconsistencies, as an enemy spawned at tick 0.5 still has to be evolved half a tick and cannot be backwards-interpolated to 0.3 ticks.  Since the game wanted to be deterministic in nature, I couldn't let players get different patterns due to hardware power.  Ew!  Hopefully you don't need this detail!
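That fixed-tick-plus-interpolation scheme can be sketched roughly like this (a toy 1D state, not Bullet's actual code — and note the rendered state is indeed one tick behind):

```cpp
// Minimal fixed-timestep-with-interpolation sketch.  The simulation always
// advances in fixed ticks; rendering blends between the previous and current
// states, so what is drawn lags the simulation by up to one tick.
struct SimState { double pos = 0.0; double vel = 60.0; };

SimState integrate( SimState s, double dt )
{
	s.pos += s.vel * dt; // toy physics: constant velocity
	return s;
}

// Consume frame_dt seconds of real time, stepping the simulation at a fixed
// tick_dt.  Returns the interpolation alpha in [0,1):
// render pos = prev.pos * (1 - alpha) + curr.pos * alpha.
double advance( SimState& prev, SimState& curr, double& accumulator,
                double frame_dt, double tick_dt )
{
	accumulator += frame_dt;
	while( accumulator >= tick_dt )
	{
		prev = curr;
		curr = integrate( curr, tick_dt );
		accumulator -= tick_dt;
	}
	return accumulator / tick_dt;
}
```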

On 10/08/2017 at 10:56 AM, blueshogun96 said:

  I found that using a fixed value will give me consistent results.  A fixed delta doesn't generate any issues for me. 

;(

Try it on a PC with a 144Hz monitor now...


'Easy' fix: run your physics with a 720 Hz timestep. Iterate it 5 times per frame for 144Hz monitors/vsync, 12 times for a 60Hz monitor, and 24 times for a 30Hz monitor. No interpolation needed!
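In code that's just an integer division, assuming the physics rate is chosen as a common multiple of the display rates (as 720 is for 144/60/30):

```cpp
// How many fixed physics sub-steps to run per rendered frame.
// Only exact when physics_hz is an integer multiple of display_hz.
int substeps_per_frame( int physics_hz, int display_hz )
{
	return physics_hz / display_hz;
}

// 720/144 = 5, 720/60 = 12, 720/30 = 24
```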


Wow, didn't realize I had more responses to the thread...

Anyway, I've fixed it "forrealzees" this time.  Using the roxlu portable nanosecond timer in place of my millisecond one, then converting the numerator from 1000 milliseconds to the appropriate number (1000*1000*1000), it appears to work fine this time.  Even without vsync, it ran nicely at 120+fps. 

It turns out a combination of the low-resolution timer and my own spawning code was causing some entities to spawn and then rapidly disappear!  Since it happens in the blink of an eye, it was a rather hard bug to catch until today.  So far, no more spawning issues!

Now to try it on my desktop Mac and PC, as well as mobile devices.

On 8/13/2017 at 2:52 PM, Hodgman said:

;(

Try it on a PC with a 144Hz monitor now...

If only I had one.  All of my monitors are 60Hz only :/

Shogun.


Why not use a high-precision timer?

You can look at mine (Timer.h, Timer.cpp) for example :) .

I always used this kind of timer when coding a game and it always gave me great results in the past.

Honestly, what's the point of a fixed timestep (except in some circumstances) and hard-coded frames-per-second limits if you can just use vsync to do the job for you? Just my 2 cents here...

3 hours ago, Vortez said:

Honestly, what's the point of a fixed timestep (except in some circumstances) and hard-coded frames-per-second limits if you can just use vsync to do the job for you?

Vsync doesn't guarantee you a fixed frame duration - it guarantees you one of several frame durations that is a multiple of the hypothetical minimum, and that minimum can vary from one machine to another. That is a problem if you want consistent physics for all players.



