# Very strange FPS fluctuation


## Recommended Posts

I have a simple non-threaded game in C# and DirectX with a very simple game loop consisting of one update and one render per iteration, plus a frame-rate calculation:

Pseudocode below:


```
long frameCount = 0;
long frameTime = 0;

while (isGameRunning) {
    UpdateGame(deltaMillis); // deltaMillis is the milliseconds since the last update
    RenderAll();

    frameCount++;
    frameTime += deltaMillis;
    if (frameTime > 1000) {
        print "FPS IS " + 1000 * frameCount / frameTime;
        frameCount = 0;
        frameTime -= 1000;
    }
}
```

(I really hope I didn't screw up the FPS calc here; I'm doing this from memory. The point is I'm very confident in the way the FPS is calculated. I'm pretty sure the strangeness I'm seeing is related to the computer, or DirectX, or something not related to code logic...)

The really weird thing is, half the time I launch the game it runs between approximately 90 and 120 frames per second, and the other half of the time it runs between 650 and 750 frames per second. Yup, you read that right. It's like the game randomly runs in either "fast mode" or "slow mode" and I have no control over it. I certainly don't have any code that messes with the frame rate like that. I don't have enough randomization in the game to account for that huge range of frame rates, and I'm not watching movies or calculating pi to the trillionth digit at the same time either. I could literally run the game right now and get 90-120 FPS, then immediately quit, run it again, and get 650-750 FPS. And the really weird part is I NEVER see frame rates between 120 and 650. It's always entirely in the lower range or the upper range, no crossing over.

Has anyone ever experienced anything like this before? I really don't know what's going on.

##### Share on other sites

Dammit, I knew I should have waited until I got home to post the question! I think my pseudocode might have thrown you off track and caused more confusion than it was worth. Here is the real code:

```csharp
Stopwatch sw = new Stopwatch();
sw.Start();
long previousElapsedMilliseconds = sw.ElapsedMilliseconds;

// FPS vars
static int framesPerSecond = 0;
static int frameCount = 0; // how many frames
static long frameMillis; // how many milliseconds spent in frames

while (isGameRunning)
{
    // calc difference in time since last update
    long currentElapsedMilliseconds = sw.ElapsedMilliseconds;
    long deltaMillis = currentElapsedMilliseconds - previousElapsedMilliseconds;
    previousElapsedMilliseconds = currentElapsedMilliseconds;

    // do all updates and draw
    UpdateAndRender(deltaMillis);

    // calc FPS
    frameCount++;
    frameMillis += deltaMillis;
    if (frameMillis >= 1000)
    {
        // 1 second elapsed - time to display FPS, and restart counting
        framesPerSecond = Convert.ToInt32(frameCount * 1000.0 / frameMillis);
        frameMillis -= 1000;
        frameCount = 0;
    }
    displayFPS(framesPerSecond);
}
```


Now for my comments based on the responses so far (thanks by the way!):

> First: I don't know why you see what you're seeing. It likely has to do with using integers and how you determine "milliseconds."

Can you check my updated code for any glaring errors? I've been staring at it forever and I don't see anything that would randomly increase the FPS by a factor of 7ish.

> Second: why do you care? Your updates should take place at fixed intervals (for various reasons, you should fix your timestep) and you should be controlling your framerate. Anything over 60 fps (actual rendering frequency, not loop delta-time) isn't needed visually, likely isn't compatible with some monitors, can be hard on hardware (the GPU and cooling fans, in particular), and is a waste of energy.
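For reference, the fixed-timestep pattern that reply alludes to usually looks something like the sketch below. It is a minimal, self-contained illustration: `isGameRunning`, `UpdateGame` and `RenderAll` are stubs mirroring the names in the pseudocode above, not the game's real code.

```csharp
using System;
using System.Diagnostics;

// Minimal fixed-timestep sketch. The stubs below stand in for the
// game's real state and methods (assumed names, not a real API).
const long StepMillis = 16;   // ~60 updates per second
bool isGameRunning = true;
int updates = 0, renders = 0;

void UpdateGame(long dt) { updates++; }   // stub: advance game state by dt
void RenderAll() { renders++; }           // stub: draw the current state

var sw = Stopwatch.StartNew();
long previous = sw.ElapsedMilliseconds;
long accumulator = 0;

while (isGameRunning)
{
    long current = sw.ElapsedMilliseconds;
    accumulator += current - previous;
    previous = current;

    // Consume real elapsed time in fixed-size update steps; rendering
    // happens once per loop, while updates run at a constant rate.
    while (accumulator >= StepMillis)
    {
        UpdateGame(StepMillis);
        accumulator -= StepMillis;
    }

    RenderAll();
    if (sw.ElapsedMilliseconds > 100) isGameRunning = false; // demo only: stop after ~100 ms
}

Console.WriteLine($"updates={updates}, renders={renders}");
```

The key property is that rendering runs as fast as it likes while game logic advances in constant 16 ms steps, which is what makes the simulation independent of the frame rate.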

There's something extremely weird going on here; any decent programmer would care and want to know what and why. Fixed timesteps are an alternative, sure, but this game isn't using them currently. I also know that my eyes cannot detect 650 FPS, and that my current code might be unnecessarily burdening the video card... however, none of that really has to do with the issue. I still want to solve the issue. I might make some changes to all of the above later, but first I want to know what's happening.

> Third: you should be determining delta-times in microseconds, not milliseconds. If you're really interested in fine differences in time, you shouldn't be using integers to total up floating-point values such as 8.33 (1/120), 1.5 (1/650) and 1.3 (1/750). The integer sum = 8 + 1 + 1 = 10. The floating-point sum = 11.13. That's just over 3 possible time intervals and you already have a 10% error.

I've never heard of people using microseconds instead of milliseconds. I'll look into that, but I'm not really convinced that milliseconds aren't good enough. You're definitely right about not using integers to add up float values. However, now that I've posted my real code you can see that the millisecond time values are indeed longs, not floats, so there's no rounding/truncating errors going on.

Also, about using microseconds: I don't see how lack of accuracy is my current problem. Let's say my current code determines that one frame takes 12 milliseconds. Maybe if I replaced my Stopwatch with a microsecond timer, I would find out that the frame was in fact 12,183 microseconds (12.183 milliseconds). So what? If you follow the logic and math, you can see that microseconds would not make much of a difference - maybe my FPS calculation would be more accurate by 1-5 FPS. It does not explain why it varies between 90 FPS and 650 FPS between executions.

> I think your measurements are off.
>
> ```
> if (frameTime > 1000) {
>     print "FPS IS " + 1000 * frameCount / frameTime;
>     frameCount = 0;
>     frameTime -= 1000;
> }
> ```
>
> I'm not very good at math, but there are several things that seem odd to me. First, what if your frameTime is greater than two seconds, but you only subtract one second's worth? From then on, your frame rate would appear doubled.
>
> It should be `frameTime %= 1000` (i.e. take the remainder of a division by 1000, which is basically the same as saying, "keep subtracting 1000 until it's below 1000").
>
> My guess is, sometimes your program happens to take a few extra seconds to start up, or lags for a second, or is minimized for a second (perhaps behind your IDE after starting up), and gains 5000 or so of frametime on the first frame... but you never get rid of the extra time, so it sticks around and accidentally inflates your measurements.
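That theory, with the suggested `%=` fix applied, can be reproduced in a small self-contained toy: the deltas below are artificial values standing in for real Stopwatch readings, simulating one 5-second stall followed by normal ~10 ms frames.

```csharp
using System;

// Toy reproduction of the accumulator bug described above: one
// 5-second stall, then normal ~10 ms frames (artificial deltas).
long[] deltas = { 5000, 10, 10, 10 };

long frameCount = 0, frameTime = 0;
foreach (long delta in deltas)
{
    frameCount++;
    frameTime += delta;
    if (frameTime > 1000)
    {
        Console.WriteLine($"FPS IS {1000 * frameCount / frameTime}");
        frameCount = 0;
        // Original: frameTime -= 1000;  // leaves 4000 ms of phantom time behind
        frameTime %= 1000;               // fix: discard every whole second at once
    }
}
Console.WriteLine($"leftover frameTime = {frameTime}"); // 30 ms: the phantom time is gone
```

With `-=` instead of `%=`, the 4000 ms left in the accumulator would make the next four "seconds" report early, inflating the measurement exactly as described.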

I see your point, but the frame rate wouldn't be doubled "from then on"; it would only be doubled for one or two iterations through the loop. So only for a few milliseconds, i.e. not even noticeable.

> Also, 'FPS' isn't the best measurement to use, because it doesn't scale linearly. One extra FPS when you're running at 100 FPS is not the same gain as one FPS when you are running at 10 FPS. You ought to measure your average frametime, not just the number of frames per second.

Yeah I know - I'm not going to put too much stock in the FPS of my game, I'm more interested in how it looks and feels. But now that I have this very strange problem happening, I just have to get to the bottom of it. Saying "oh well, I didn't really need FPS anyway" is not really my style :) I gotta figure this out :)

Thanks all!

##### Share on other sites

You have a simple math problem.

What if UpdateGame() & RenderGame() take 7000 ms? At the end of the frame you're subtracting 1000 ms, when in fact you should be subtracting 7000 ms.

* Dewitters game loop
* know your time-step
* Fixed-Time-Step Implementation

In that order.

##### Share on other sites

> I'm not really convinced that milliseconds aren't good enough. You're definitely right about not using integers to add up float values. However, now that I posted my real code you can see that the millisecond time values are indeed longs, not floats, so there's no rounding/truncating errors going on.

That should read "millisecond time values are indeed longs, not floats, so there are rounding/truncation errors."

You're reporting frame rates of 650 to 750. 1/650 s ≈ 1.54 milliseconds. Because you're using integers, that's recorded as either 1 ms or 2 ms; either way, that's a 25%-50% error for one frame time. Using a long just means you can represent a large number of milliseconds. If each of those milliseconds is really 1.5 milliseconds, storing the count as a byte, an int, a long or a long long makes no difference to the accuracy.

If you're coding in .NET, you would, at a minimum, be better off reading ElapsedMilliseconds once for an entire 1000 frames - with perhaps an error of one frame time - rather than timing each frame and accumulating some error every frame. I.e., why not start the timer, count 1000 frames, stop the timer and divide the time by 1000?
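That batching suggestion might be sketched as below. `Thread.Sleep(1)` is only a stand-in for the real `UpdateAndRender` work, and the batch size is reduced here so the sketch runs quickly.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Sketch of the batching idea: time a whole run of frames with one
// Stopwatch read, then divide once, instead of truncating every frame.
const int BatchSize = 100;
var sw = Stopwatch.StartNew();

for (int frame = 0; frame < BatchSize; frame++)
{
    Thread.Sleep(1); // placeholder for UpdateAndRender(...)
}

long totalMillis = sw.ElapsedMilliseconds;

// One truncation (< 1 ms) spread over the whole batch, rather than
// up to ~0.5 ms of rounding error added on every single frame.
double avgFrameMillis = totalMillis / (double)BatchSize;
double fps = BatchSize * 1000.0 / totalMillis;
Console.WriteLine($"avg frame = {avgFrameMillis:F2} ms, fps = {fps:F0}");
```

The single division at the end is why the per-frame quantization error no longer accumulates across the batch.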

Edited by Buckeye

##### Share on other sites
Even with microseconds, your clock may lose a second per 4 hours. I wouldn't be satisfied with a watch that did that :lol:
It might not seem like much, but may be enough to lead to bugs in long play sessions.

IMHO, absolute time values should be either:
* a 64-bit float (i.e. a double) holding seconds, which provides the convenience of making all your blah-per-second math easy, and has the necessary precision to remain accurate even if the user leaves the game running for months; or
* a 64-bit integer in the CPU's native timer frequency (whatever QueryPerformanceCounter/etc. is in), which is likely a fraction of a nanosecond. This is simpler in a lot of ways, but requires dividing by the timer's frequency to convert the arbitrary ticks into time before using them in any calculations.

Delta-time variables can almost always be 32-bit: either the difference of two absolute-time doubles with the result truncated to float, or the difference between two int64s.
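For .NET specifically, the raw 64-bit counter approach could look something like this: `Stopwatch.GetTimestamp()` exposes the high-resolution counter (QueryPerformanceCounter on Windows) and `Stopwatch.Frequency` gives its ticks per second.

```csharp
using System;
using System.Diagnostics;

// The raw 64-bit counter described above, via the Stopwatch statics.
long startTicks = Stopwatch.GetTimestamp();

// ... a frame's worth of work would happen here ...

long deltaTicks = Stopwatch.GetTimestamp() - startTicks; // stays a 64-bit integer

// Convert ticks to time only at the point of use, dividing by the frequency.
double deltaSeconds = deltaTicks / (double)Stopwatch.Frequency;
double deltaMicros = deltaTicks * 1_000_000.0 / Stopwatch.Frequency;
Console.WriteLine($"frequency = {Stopwatch.Frequency} Hz, delta = {deltaMicros:F1} us");
```

Keeping the delta in ticks until the last moment is exactly the "simpler in a lot of ways" option from the list above: no floating-point drift accumulates in the stored values.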

##### Share on other sites

Also, the math won't work as you expected. Some people fixed it in their code samples but nobody explicitly called it out:

```csharp
long frameCount;
long frameTime;
...
1000 * frameCount / frameTime;
```

This will not give the result you seem to expect from your description.

Since both values are of an integral type (int, long, short, byte, char, whatever), the result will also be the same integer type. You won't get anything on the decimal side. As an example, 99/100 does not equal 0.99; it equals 0 because of integer math. 3/2 = 1, 4/5 = 0, 49/5 = 9, and so on.

Since it looks like you are expecting a number like "17.231", you need to have a floating-point value in at least one spot, probably in all the spots:

```csharp
(1000.0f * (float)frameCount) / (float)frameTime;
```
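A small self-contained check of the integer-division behavior described above (the variable values are illustrative):

```csharp
using System;

// Integer division discards the fraction before you ever see it.
long frameCount = 99, frameTime = 100;

long intResult = frameCount / frameTime;              // 99 / 100 = 0
float floatResult = (float)frameCount / frameTime;    // one float operand keeps the fraction

Console.WriteLine($"{3 / 2}, {4 / 5}, {49 / 5}");     // 1, 0, 9
Console.WriteLine($"{intResult} vs {floatResult}");   // 0 vs ~0.99
```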

##### Share on other sites

> Also, the math won't work as you expected. Some people fixed it in their code samples but nobody explicitly called it out:
>
> ```csharp
> long frameCount;
> long frameTime;
> ...
> 1000 * frameCount / frameTime;
> ```
>
> This will not give the result you seem to expect from your description.
>
> Since both values are of an integral type (int, long, short, byte, char, whatever), the result will also be the same integer type. You won't get anything on the decimal side. As an example, 99/100 does not equal 0.99; it equals 0 because of integer math. 3/2 = 1, 4/5 = 0, 49/5 = 9, and so on.
>
> Since it looks like you are expecting a number like "17.231", you need to have a floating-point value in at least one spot, probably in all the spots:
>
> ```csharp
> (1000.0f * (float)frameCount) / (float)frameTime;
> ```

He fixed this in his "real" code: `Convert.ToInt32(frameCount * 1000.0 / frameMillis);`.

The result of (frameCount * 1000.0) is a double, and that causes the division to be a double, so the math will work correctly.
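A quick check of that promotion with illustrative values (97 frames in 1003 ms; the numbers are chosen here for demonstration, not taken from the thread):

```csharp
using System;

// frameCount * 1000.0 is a double, so the whole expression is
// evaluated in double before Convert.ToInt32 rounds it to the
// nearest integer; the all-integer version truncates instead.
int frameCount = 97;
long frameMillis = 1003;

int withDouble = Convert.ToInt32(frameCount * 1000.0 / frameMillis); // 96.71... rounds to 97
long allInteger = frameCount * 1000L / frameMillis;                  // truncates to 96

Console.WriteLine($"{withDouble} vs {allInteger}");
```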

L. Spiro
