Limiting the load my game produces

Started by
10 comments, last by japro 13 years, 1 month ago
I wrote this breakout-esque game for a course project last semester and did all the development on Linux. Since it uses SFML, I went ahead and compiled it for Windows. The game loop does a physics/logic update every 20 ms (50 Hz) and linearly interpolates for the frames in between. On Linux it runs at about 500-550 FPS, and only after several minutes of play does the fan of my laptop slightly increase its speed and stay there. But the Windows version goes up to 1200-1500 FPS and has the fans at full speed in less than a minute. My questions:

1. What could be the reason for the big difference? I use Nvidia drivers (GTX 260M GPU) on both Linux and Windows.

2. (the important question) What is the proper way to keep my program from putting so much load on the system? I clearly don't need 500, let alone 1500, FPS to make it look "smooth". At the moment I put something like Sleep(0.8 * time_to_next_physics_update) into the game loop, but that seems more like a dirty fix than a proper solution to me.
1. Windows drivers are generally much better optimized than their Linux counterparts, on behalf of the 99.5% of the gamer userbase that uses them.

2. Sleep is generally the solution to keep the FPS in check and give the system time to do other things. If you don't tell it you don't need to do anything, how is it supposed to know? There is nothing else to be done, so it uses all available processing power on what needs it.

[quote]
2. Sleep is generally the solution to keep the FPS in-check and give the system time to do other things. If you don't tell it you don't need to do anything, how is it supposed to know? There is nothing else to be done so it uses all available processing power on what needs it.
[/quote]

It's not so much the use of Sleep that I think is a "dirty fix", it's more how I use it :). I mean, this is a pretty standard problem, right? So there must be some sort of "canonical solution" that is well understood, but my searching efforts so far haven't led me anywhere.
Another option would be to enable vsync. This caps the FPS of your program to the refresh rate of the monitor, which for most people is about 60-90 FPS.
You can turn on vsync. This will limit the framerate to the refresh rate of the monitor (almost always 60 Hz); the actual wait happens inside your "SwapBuffers" call or similar. This is generally as fast as it makes sense to update anyway, since any frames drawn between two monitor refreshes will never be displayed on screen. (There is still some point to leaving vsync off: it lets the game have a frame ready immediately when the buffer becomes available, rather than only starting to draw at that point.)
The solution with vsync is quite elegant and should work for you. The only possible problem is that you will be limited to a fixed FPS, which will vary with the monitor settings.

I don't see anything dirty about Sleep(). But don't forget that the time you pass to Sleep is not guaranteed to be honored precisely. Short times especially tend to turn out longer in reality, so you may find, for example, that there is no difference between Sleep(2) and Sleep(20). But that doesn't have to be a problem for you.
You should definitely add in either a sleep (for a bit more flexibility) or vsync. When Starcraft 2 came out, they forgot about that for the menu screens. The graphics cards ran at 100% and caused some people's computers to overheat.
[quote]
I don't see anything dirty on Sleep(). But don't forget that the time you pass to Sleep is not guaranteed to be precisely produced. I think especially short times turn out to be higher in reality, so you can find for example that there is no difference between Sleep(2) and Sleep(20). But that doesn't have to be a problem for you.
[/quote]
To elaborate on what Tom said, I recently ran into this problem with Sleep() not sleeping for the requested time. This caused my engine to update at 30-ish Hz instead of the desired 50 Hz. It wouldn't happen every time, but it happened and was annoying. My solution was to turn on vsync like people have mentioned, and use GetTickCount() to watch the milliseconds. (This is a busy wait for your engine update thread, however.)
/ Visual Studios 2010 / Codeblocks 10.05 / Windows 7 / Ubuntu 10.10 / - I might be wrong

[quote name='Vectorian' timestamp='1298023803' post='4775785']
2. Sleep is generally the solution to keep the FPS in-check and give the system time to do other things. If you don't tell it you don't need to do anything, how is it supposed to know? There is nothing else to be done so it uses all available processing power on what needs it.


It's not so much the use of Sleep that I think is a "dirty fix" it's more how I use it :). I mean, this is a pretty standard problem right? So there must be some sort of "canonical solution" that is well understood. But my searching efforts so far didn't lead me anywhere.
[/quote]

There is, and it's pretty much Sleep(). The usual way to do it looks like this in pseudocode:

    int nextUpdateTime = GetSystemTimeInMilliseconds();

    while (true)
    {
        nextUpdateTime += 1000 / DESIRED_FPS; // 1000 because Sleep uses milliseconds
        update();
        render();
        if (GetSystemTimeInMilliseconds() < nextUpdateTime)
        {
            Sleep(nextUpdateTime - GetSystemTimeInMilliseconds());
        }
    }


This will make sure you never do more than DESIRED_FPS, and your CPU will spend the rest of the time idling or running other programs. This is exactly what goes on under the hood when you enable vsync, by the way: the video driver puts your thread to sleep until it can flip the buffers in sync with the monitor.
Note that Sleep does NOT say "sleep for X milliseconds", it says "sleep for at least X milliseconds". That means you may not wake up for a long time. The default timeslice under Windows is ~15 ms, so if you sleep for anything between 1 and 15 ms, you likely won't wake up for 15 ms or longer. This can lead to some undesirable behavior.

This topic is closed to new replies.
