Timers for game loop

[font="arial, verdana, tahoma, sans-serif"]I'm trying to make my simulation framerate constant and independent of the renderer framerate. I need my simulation framerate to be accurate, since it requires networking between a server and multiple clients, so I'm try to use the computer's system time as a base. The problem is I'm not sure how to get the system time for Windows and Linux. An additional problem, for my mac build (my working build, as it happens) I don't seem to be able to use the gettimeofday() function properly. Despite have the correct header files included, my compiler insists that "timeval", the struct the time data is stored in, has not been declared.[font="arial, verdana, tahoma, sans-serif"] [font="arial, verdana, tahoma, sans-serif"] float highrestimeinseconds (void){
timeval t1, t2;

double elapsedTime;



// start timer

gettimeofday(&t1, NULL);

// do something
//...

// stop timer

gettimeofday(&t2, NULL);



// compute the elapsed time in millisec

elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms

elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms

//Convert to seconds...

return elapsedTime;

}

[font="arial, verdana, tahoma, sans-serif"]
[font="arial, verdana, tahoma, sans-serif"]You can ignore that slightly fuzzy bit at the end, it's just the use of timeval I need help with.
Here's my timer - you are free to use it. It contains extra glop for drawing the FPS and related graph onto the screen, so you can just ignore that part. It works with linux/unix (likely also mac, but not tested). If you use SDL, it will work with anything. There is also windows-specific code included, but it is commented out.

Timer.h

#ifndef GAMETIMER_H
#define GAMETIMER_H

#include <stdlib.h>
#include <vector>
#include <deque>

#include "../core/Fonts.h"

#ifdef _LINUX
#include <sys/time.h>
#else
// #include <windows.h>
#include <SDL/SDL.h>
#endif




/**
USAGE:
Creating a timer will automatically begin the timing process.
Generally, you simply will want to call the GetTimeDelta() function
every trip through the game loop. This will update the timer's
internal data in addition to returning the delta. A delta is then
defined as the time between calls to GetTimeDelta(). To visualize,
call DrawGraph() every frame;
*/



namespace LGC {



class Timer {

public:

Timer();

~Timer();

/** Returns the time difference between now and the last time you punch out */
float GetTimeDelta();

/** Enables or disables printing the FPS to stdout each sample period */
void LogFPSToScreen( bool x = true ) { log_to_screen = x; }

/** Enables or disables logging the FPS to a file (not yet implemented) */
void LogFPSToFile( bool x = true ) { log_to_file = x; }

/** Sets how many frames are averaged per FPS sample */
void SetFramerateSamples( int x ) { framerate_samples = x; }

/** Draws the delta history / distribution graph overlay */
void DrawGraph();

/** Draws the current FPS value to the screen */
void DrawFPS();

/** Turns delta smoothing (averaging over recent frames) on or off */
void SetDeltaSmooth( bool x ) { smooth_deltas = x;}

/** Flips the current delta smoothing state */
void ToggleDeltaSmooth() { smooth_deltas = !smooth_deltas; }

/** Returns whether delta smoothing is currently enabled */
bool DeltaSmoothStatus() { return smooth_deltas; }

/** Returns the most recently computed frames-per-second value */
float GetFrameRate() { return FrameRate; }

static void ReserveResources();
static void DumpResources();

private:

void Log();
void Track( float delta );
float SmoothDelta( float x );

bool log_to_file;
bool log_to_screen;

int framerate_samples;
int FrameCount;
float FrameRate;

std::deque<float> most_recent;
float recent_max;
std::vector<long int> distrib;
bool distrib_inited;
long int distrib_max;
float distrib_unit;

float delta_avg;
bool smooth_deltas;

#ifdef _LINUX
struct timeval _tstart, _tend;
struct timezone tz;
#else
// windows uses the SDL_GetTicks() function because
// i don't have a windows machine to test on!
// you can comment out the existing stuff and uncomment
// the experimental timer data if needed.

Uint32 _tstart, _tend;

// this is the more precise but untested windows-specific timer data:
// LARGE_INTEGER _tstart, _tend;
// LARGE_INTEGER freq;
#endif

Font font;
};



} // end namespace LGC



#endif


Timer.cpp

#include "Timer.h"
#include <iostream>
#include <float.h>
#include <GL/gl.h>
#include "../core/Fonts.h"
#include "../core/ResourceMgr.h"
#include <sstream>





namespace LGC {



static const unsigned int GRAPH_SIZE = 500;
static const unsigned int DELTA_SMOOTH_SAMPLE = 5;
static const float MAX_DELTA = 1.0 / 20.0;


Timer::Timer() {
log_to_file = false;
log_to_screen = false;
framerate_samples = 100;
FrameCount = 0;
FrameRate = 0;
delta_avg = 0;
smooth_deltas = false;

// init the record keepers:
most_recent = std::deque<float>();
distrib = std::vector<long int>(GRAPH_SIZE,0);
distrib_inited = false;
distrib_max = 1;
recent_max = 0;

// skip the normal resource acquisition interface
font = RM->GetFont("FPScounter","Timer");


// Get the clock started:

#ifdef _LINUX
gettimeofday(&_tstart, &tz);
#else
_tstart = SDL_GetTicks();
// EXPERIMENTAL WINDOWS-SPECIFIC (untested):
// QueryPerformanceFrequency(&freq);
// QueryPerformanceCounter(&_tstart);
#endif
}


Timer::~Timer() {
RM->DumpFont(font, "Timer");
}

float Timer::GetTimeDelta() {

static double fps_delta = 0;
static double fps_last = 0;

#ifdef _LINUX
gettimeofday(&_tend,&tz);
double t1 = (double)_tstart.tv_sec + (double)_tstart.tv_usec / 1000000;
double t2 = (double)_tend.tv_sec + (double)_tend.tv_usec / 1000000;
#else
_tend = SDL_GetTicks();
double t1 = double(_tstart) / 1000.0;
double t2 = double(_tend) / 1000.0;
// EXPERIMENTAL WINDOWS-SPECIFIC (untested):
// QueryPerformanceCounter(&_tend);
// double t1 = (double)_tstart.QuadPart / (double)freq.QuadPart;
// double t2 = (double)_tend.QuadPart / (double)freq.QuadPart;
#endif

float delta = float(t2-t1);

// calculate FPS
if (++FrameCount >= framerate_samples) {
fps_delta = t2 - fps_last ;
FrameRate = double(framerate_samples) / fps_delta ;
FrameCount = 0;
if (fps_last != 0) { Log(); }
fps_last = t2;
}

_tstart = _tend;

// average the delta out if requested
float new_delta = (smooth_deltas) ? SmoothDelta(delta) : delta;

// hiccup protection:
if (!smooth_deltas) {
if ( new_delta > MAX_DELTA ) {
// std::cout << "Detected time delta hiccup" << std::endl;
new_delta = MAX_DELTA;
}
}

Track( float(delta) );
return new_delta;
}


float Timer::SmoothDelta( float x ) {

float total = 0;
float samples = 0;
if (most_recent.size() <= DELTA_SMOOTH_SAMPLE+3) { return x; }
for ( unsigned int i = most_recent.size()-1; i >= most_recent.size()-1-DELTA_SMOOTH_SAMPLE ; i--) {
total += most_recent[i];
samples += 1.0;
}

delta_avg = total / samples;

return ( delta_avg * (float)DELTA_SMOOTH_SAMPLE + x ) / ((float)DELTA_SMOOTH_SAMPLE + 1);

}


void Timer::Log() {
if (log_to_screen) {
std::cout << "FPS: " << FrameRate << std::endl;
}
if (log_to_file) {
// TODO
}
}



void Timer::Track( float delta ) {

static bool first_trip = true;
static unsigned int hits = 0;

if (first_trip) { first_trip = false; return; }

// log most recent
if ( most_recent.size() == GRAPH_SIZE) { most_recent.pop_front(); }
most_recent.push_back( delta );
if (recent_max == 0) { recent_max = delta; }
else if ( delta > recent_max ) {
if ( delta < recent_max * 4 ) { recent_max = delta; }
}

// set up the distribution graph if needed
if ( !distrib_inited && ++hits >= GRAPH_SIZE ) {

// first find the high and low points + average
float high = 0, low = FLT_MAX;
for ( std::deque<float>::iterator i = most_recent.begin(); i != most_recent.end(); i++ ) {
if ( *i < low ) { low = *i; }
if ( *i > high ) { high = *i; }
}

// setup distrib vector
//distrib_unit = (high - low) / float(GRAPH_SIZE);
distrib_unit = ( (1.0f/10.0f) - (1.0f/250.0f) ) / float(GRAPH_SIZE);
distrib_inited = true;

}

else if (distrib_inited) {
// log the distribution
unsigned long int index = int( delta / distrib_unit );
if (index >= GRAPH_SIZE) { index = GRAPH_SIZE - 1; }
else if (index < 0) { index = 0; }
distrib[index]++;
if ( distrib[index] > distrib_max ) { distrib_max = distrib[index]; }
}
}


void Timer::DrawGraph() {

glBindTexture( GL_TEXTURE_2D, 0 );

// draw base quad:
if (smooth_deltas) {
glColor4f(0.1,0.1,0.3,0.5);
}
else {
glColor4f(0,0,0,0.5);
}
glBegin(GL_QUADS);
glVertex2f( 0, 50 );
glVertex2f( GRAPH_SIZE, 50 );
glVertex2f( GRAPH_SIZE, 0 );
glVertex2f( 0, 0 );
glEnd();

// draw most recent graph
glBegin(GL_LINE_STRIP);
glColor4f(0.8, 0.6, 0.6, 0.5);
glVertex2f( GRAPH_SIZE, 50 ); // top of bar
glVertex2f( 0, 50 ); // top of bar
glColor4f(1.0, 0.8, 0.8, 1.0);
for ( unsigned int i = 0; i < most_recent.size(); i++ ) {
glVertex2f( i, (1.0 - (most_recent[i] / recent_max)) * 50); // top of bar
}
glEnd();

// draw distrib graph
if (distrib_inited) {
glBegin(GL_LINES);
glColor4f(1.0, 1.0, 0.5, 1.0);
for ( unsigned int i = 0; i < distrib.size(); i++ ) {
glVertex2f( i, 100 ); // bottom of bar
glVertex2f( i, (1.0 - ((float)distrib[i] / (float)distrib_max)) * 50 + 50); // top of bar
}
glEnd();
}

}

void Timer::DrawFPS() {
std::stringstream ss;
ss << (int)FrameRate;
glColor4f(1,1,1,1);
font.RenderFontToScreen( ss.str(), 5, 0 );
}


} // end namespace LGC
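A rough sketch of how the timer might drive a fixed-rate simulation with a variable-rate renderer, per the original question. UpdateSimulation(), Render() and running are placeholders for your own game code, not part of the class:

// Fixed simulation step, variable render rate.
LGC::Timer timer;
const float SIM_STEP = 1.0f / 60.0f;       // constant simulation rate
float accumulator = 0.0f;

while ( running ) {
    accumulator += timer.GetTimeDelta();   // real time since last frame
    while ( accumulator >= SIM_STEP ) {
        UpdateSimulation( SIM_STEP );      // fixed-step, network-friendly
        accumulator -= SIM_STEP;
    }
    Render();                              // as often as the machine allows
    timer.DrawFPS();
}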
So it turns out my brain doesn't work properly beyond about 11pm; I came back to it and managed to fix it pretty much immediately. I forgot to mention I'm writing this in C, not C++, but I'll have a look and see if I can get the timers functioning on Windows too. Thanks for the reply.
You can still pick out the windows bits of the code. It's really only like six lines. It uses QueryPerformanceCounter, which generally has a higher resolution than the other system timers.
'leiavoia' said:
It uses QueryPerformanceCounter which generally has a higher resolution than the other system timers.

Be aware that QPC and friends still cause problems on modern CPUs; the only really safe timing API to use on Windows is timeGetTime. It only has 1ms resolution, but that's sufficient for the majority of real world uses. Microsoft themselves even had to patch SQL Server on account of this: http://support.microsoft.com/kb/931279
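A minimal sketch of the sort of thing I mean, assuming you link against winmm.lib; timeBeginPeriod(1) asks the scheduler for 1 ms timer resolution, and the names InitTimer/GetDeltaMs are just illustrative:

#include <windows.h>
#include <mmsystem.h>                 // timeGetTime, timeBeginPeriod
#pragma comment(lib, "winmm.lib")

// Call once at startup (pair with timeEndPeriod(1) at shutdown).
void InitTimer() { timeBeginPeriod(1); }

// Milliseconds since the previous call. timeGetTime() wraps after ~49.7
// days, but the unsigned subtraction still gives the correct delta.
DWORD GetDeltaMs() {
    static DWORD last = timeGetTime();
    DWORD now = timeGetTime();
    DWORD delta = now - last;
    last = now;
    return delta;
}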


My Unix timer is running off microseconds, but converting to milliseconds anyway. Thanks for the help.

'leiavoia' said:
It uses QueryPerformanceCounter which generally has a higher resolution than the other system timers.

Be aware that QPC and friends still cause problems on modern CPUs; the only really safe timing API to use on Windows is timeGetTime. It only has 1ms resolution, but that's sufficient for the majority of real world uses. Microsoft themselves even had to patch SQL Server on account of this: http://support.microsoft.com/kb/931279


QueryPerformanceCounter can be used to handle fast timing in a game; other OSs have quite similar methods, but I can't remember them right now (UNIX uses clock_gettime, I think: http://stackoverflow.com/questions/2738669/getting-the-system-tick-count-with-basic-c).
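For example, a rough clock_gettime sketch for Linux (CLOCK_MONOTONIC so the value isn't affected by wall-clock adjustments; older glibc versions may need linking with -lrt):

#include <time.h>   // clock_gettime, CLOCK_MONOTONIC, struct timespec

// Seconds since an unspecified start point; only differences are meaningful.
double getTimeSecondsUnix() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1000000000.0;
}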

To use cycles per second to handle timing, you could use something like the following:

Get the cycles per second and save it somewhere (a global variable, for example) in your startup code:


LARGE_INTEGER PerfCount;
QueryPerformanceFrequency(&PerfCount);
gSecondsPerCycle = 1.0 / (double)PerfCount.QuadPart;


Then make a function to get the current cycle count; we use it as our high-resolution cycle counter.


inline double getCurrentCycles() {
LARGE_INTEGER PerfCount;
QueryPerformanceCounter(&PerfCount);
return PerfCount.QuadPart;
}



Now we can create a function to get the current time in seconds:


inline double getTimeSeconds() {
LARGE_INTEGER PerfCount;
QueryPerformanceCounter(&PerfCount);
return PerfCount.QuadPart*gSecondsPerCycle+16777216.0f;
}


The time method has 2 drawbacks:
  1. The time origin is arbitrary, so it's only useful for time differences. To get the time since app start, just accumulate the time diff for each frame.
  2. This function CAN produce precision errors. Because of that issue we add 16777216.0f to it, which is the largest value at which a float still has integer precision. There is a nice thread on Stack Overflow about this number: http://stackoverflow...of-float-number.

I hope this helped; now you should be able to use a very fast way of handling time in games.
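As a rough sketch of the per-frame use (getTimeSeconds as above; running, Update and Render are placeholders for your own loop), accumulating the per-frame diffs gives the time since app start, per point 1:

double prev = getTimeSeconds();
double appTime = 0.0;                   // time since the loop started

while ( running ) {
    double now = getTimeSeconds();
    double frameDelta = now - prev;     // seconds spent on the last frame
    prev = now;
    appTime += frameDelta;
    Update( frameDelta );
    Render();
}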
The specific problems with QueryPerformanceCounter come from modern CPUs. First of all, they have power-saving modes available, so the CPU frequency when you call QueryPerformanceFrequency may not be the same as later in the run. Secondly, a thread can move from one core to another, and different cores can run at different speeds. These functions are definitely no longer reliable, and I've personally observed strange timing problems with games that use them, even on a recent Intel i7, meaning that the problems are not resolved, nor are they isolated to machines from a specific manufacturer or time period.

You should not use QueryPerformanceCounter, full stop.


