Confused About Frames Per Second


Currently, I determine frames per second like this:


void GameTimer::tick()
{
	frameCount_++;
	fpsFrameCount_++;

	currentTime_ = clock.now();

	durationDouble time_span = currentTime_ - previousTime_;

	deltaTime_ = time_span.count();

	totalTime_ += deltaTime_;
	fpsTimeCount_ += deltaTime_;

	previousTime_ = currentTime_;

	// Count the number of frames displayed each second.
	if (fpsTimeCount_ > 1.0f)
	{
		fps_ = fpsFrameCount_;
		fpsFrameCount_ = 0;
		fpsTimeCount_ = 0.0f;
	}
}

tick() is called every iteration of the game loop and increments fpsFrameCount_. After one second has elapsed, I store the count in fps_ and reset it.

I am confused, however, because it typically ends up being over 1000 fps. If it drops below 400, I start to notice lag. If it ran at 60 fps it would be cripplingly slow. But 60 fps is what most games run at, isn't it? That is my understanding anyway. Why don't the numbers I'm seeing match up with my understanding? Is this method incorrect?


I honestly don't totally follow the logic of that code. I would just use the deltaTime and a formula to get the FPS rather than actually counting frames. It should be a simple division: get the deltaTime in either seconds (likely 0.xxxx) or milliseconds, then divide 1 second or 1000 milliseconds by the deltaTime. That tells you how many "deltaTime"s fit in a second, which happens to be frames per second. You may want to add some smoothing and/or averaging on top of that, but it gets you started.

Also, I would pay attention, maybe more attention, to the deltaTime itself. Some people find those numbers easier to reason about because they directly say how many milliseconds each frame is taking.
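For example, something like this (a minimal sketch; the function name, parameters, and the 0.05 smoothing factor are all just illustrative, and it assumes the frame time is already measured in seconds):

// Instantaneous FPS from one frame time, blended into an exponential
// moving average so the displayed value doesn't jump around every frame.
// deltaSeconds is the last frame's duration in seconds (e.g. ~0.001 at 1000 fps).
double smoothedFps(double deltaSeconds, double previousFps, double smoothing = 0.05)
{
	if (deltaSeconds <= 0.0)
		return previousFps; // nothing to measure this frame

	double instantFps = 1.0 / deltaSeconds; // how many frames of this length fit in one second

	// Blend a small fraction of the new sample into the running average.
	return previousFps * (1.0 - smoothing) + instantFps * smoothing;
}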



I honestly don't totally follow the logic of that code. I would just use the deltaTime and a formula to get the FPS rather than actually counting frames. It should be a simple division: get the deltaTime in either seconds (likely 0.xxxx) or milliseconds, then divide 1 second or 1000 milliseconds by the deltaTime. That tells you how many "deltaTime"s fit in a second, which happens to be frames per second. You may want to add some smoothing and/or averaging on top of that, but it gets you started.

Also, I would pay attention, maybe more attention, to the deltaTime itself. Some people find those numbers easier to reason about because they directly say how many milliseconds each frame is taking.

Thanks, this is good advice; however, I don't think it really clears up my confusion. Even after making this change I'm still getting the same results. My deltaTimes are typically ~0.001 seconds, so when I divide 1.0 by ~0.001 I still get around 1000 FPS. Why is this so far off from the 60 FPS I hear is the norm, and why does it lag drastically whenever it drops below 400 FPS?

1. I assume your objects are all moving based on the deltaTime? If not, you will need to make that change; otherwise things will move at different speeds on different computers. (A quick sketch follows after point 2.)

2. On the line "durationDouble time_span = currentTime_ - previousTime_;" it seems like time_span would be the correct deltaTime. I don't know which libraries you are using, and I don't know what time_span.count() is actually doing, but I DO know that current - previous is generally how you get the deltaTime.
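To illustrate point 1, here is a minimal sketch of deltaTime-based movement (the struct and the numbers are made up for the example):

// Frame-rate independent movement: scale speed by the frame time, so the
// object covers the same distance per second whether the game runs at
// 60 fps or 1000 fps.
struct Bullet
{
	double x = 0.0;
	double speed = 100.0; // units per second

	void update(double deltaSeconds)
	{
		x += speed * deltaSeconds; // ~1.67 units per frame at 60 fps, ~0.1 at 1000 fps
	}
};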



What are your data types? You could be misinterpreting the effects of floating point precision (or rounding errors) as "lag".


1. I assume your objects are all moving based on the deltaTime? If not, you will need to make that change; otherwise things will move at different speeds on different computers.

2. On the line "durationDouble time_span = currentTime_ - previousTime_;" it seems like time_span would be the correct deltaTime. I don't know which libraries you are using, and I don't know what time_span.count() is actually doing, but I DO know that current - previous is generally how you get the deltaTime.

1. Yes, that assumption is correct.

2. I'm using boost::chrono. Sorry, I should have included the header too. Here is the relevant part of the header file:


#include <boost/chrono.hpp>

typedef boost::chrono::high_resolution_clock hiResClock;
typedef boost::chrono::duration<double> durationDouble;
typedef boost::chrono::time_point<hiResClock, durationDouble> hiResDoubleTimePoint;

class GameTimer
{ 
public: 

        // ... 

private: 
	
	float			fpsTimeCount_;
	float			timeScale_;

	double			deltaTime_;
	double			totalTime_;
	
	int			fps_;
	int			frameCount_;
	int			fpsFrameCount_;

	hiResClock		clock;
	hiResDoubleTimePoint	currentTime_;
	hiResDoubleTimePoint	previousTime_;
};

I am using it in a similar manner as seen here: http://www.boost.org/doc/libs/1_60_0/doc/html/chrono/users_guide.html#chrono.users_guide.tutorial.time_point

OK, I assume that verifies that you do have a high-resolution timer.

Another common problem: do you have any Sleep calls anywhere?


Back when I was using C++ I didn't use boost or chrono at all, so I don't know it. But I still wonder what that call to the count() function is for. deltaTime, as in the time passed between frames, is just the current clock minus the previous clock, so I'm not sure what the call adds. Most of the time I see a count() function it is telling you how many elements are in an array or something similar, but I assume that is not the case here, so what IS it doing?



Back when I was using C++ I didn't use boost or chrono at all, so I don't know it. But I still wonder what that call to the count() function is for. deltaTime, as in the time passed between frames, is just the current clock minus the previous clock, so I'm not sure what the call adds. Most of the time I see a count() function it is telling you how many elements are in an array or something similar, but I assume that is not the case here, so what IS it doing?

I'm not 100% sure why boost does it this way, but that is how it is done in every tutorial I've found. When I display the timer (the sum of the deltaTime values), it counts seconds in real time at the same rate as the stopwatch app on my phone and my PC's system clock, so I'm pretty sure the deltaTime values are correct.
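For reference, count() in boost::chrono (like std::chrono) simply returns the duration's tick count in its underlying representation. Since durationDouble is boost::chrono::duration<double> with the default one-second tick period, count() here is the elapsed time in seconds as a plain double. A small standalone sketch:

#include <boost/chrono.hpp>
#include <iostream>

typedef boost::chrono::high_resolution_clock hiResClock;
typedef boost::chrono::duration<double> durationDouble;

int main()
{
	hiResClock::time_point start = hiResClock::now();
	// ... do some work ...
	durationDouble elapsed = hiResClock::now() - start; // converts the clock's native ticks to seconds

	std::cout << elapsed.count() << " seconds\n"; // count() strips the unit, leaving a double
	return 0;
}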

OK, I assume that verifies that you do have a high-resolution timer.

Another common problem: do you have any Sleep calls anywhere?

No, I just did a find for "sleep" in the entire solution and there were no results.

I honestly don't totally follow the logic of that code. I would just use the deltaTime and a formula to get the FPS rather than actually counting frames. It should be a simple division: get the deltaTime in either seconds (likely 0.xxxx) or milliseconds, then divide 1 second or 1000 milliseconds by the deltaTime. That tells you how many "deltaTime"s fit in a second, which happens to be frames per second. You may want to add some smoothing and/or averaging on top of that, but it gets you started.

Also, I would pay attention, maybe more attention, to the deltaTime itself. Some people find those numbers easier to reason about because they directly say how many milliseconds each frame is taking.

Thanks, this is good advice; however, I don't think it really clears up my confusion. Even after making this change I'm still getting the same results. My deltaTimes are typically ~0.001 seconds, so when I divide 1.0 by ~0.001 I still get around 1000 FPS. Why is this so far off from the 60 FPS I hear is the norm, and why does it lag drastically whenever it drops below 400 FPS?

1,000 fps isn't unusual; don't assume you have done something wrong just because you are getting significantly more than 60 fps. If you had AAA-quality graphics going on and were still getting 1,000 fps, then chances are something would be off, but if your scene is very basic then a high fps is exactly what you should expect.

This bit looks odd:


// Count the number of frames displayed each second.
if (fpsTimeCount_ > 1.0f) 
{
	fps_ = fpsFrameCount_;
	fpsFrameCount_ = 0;
	fpsTimeCount_ = 0.0f;
}

What happens if, for one frame, deltaTime is 5 seconds? That would make fpsTimeCount_ greater than 5, and suddenly your fps shows as five times (or more) what it actually is. Just make sure you account for that extra time:


fps_ = fpsFrameCount_ / fpsTimeCount_;

It's all in the name: frames (fpsFrameCount_) per (/) second (fpsTimeCount_).
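Applied to the tick() function above, the change might look something like this (a sketch using the original member names; whether you reset the accumulator or subtract the elapsed second is a matter of taste):

// Divide by the time that actually elapsed, so one long frame can't
// skew the reported figure.
if (fpsTimeCount_ >= 1.0f)
{
	fps_ = static_cast<int>(fpsFrameCount_ / fpsTimeCount_);
	fpsFrameCount_ = 0;
	fpsTimeCount_ = 0.0f; // or subtract 1.0f to carry the leftover fraction into the next interval
}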


