CPU consumption in games

Started by
22 comments, last by CrazyCdn 16 years, 9 months ago
Here is my method to get 60 fps with quite good CPU consumption.
First, in the main method I create a separate thread in which the main loop runs; the main method just dispatches messages.

#include <windows.h>
#include <cstdio>

DWORD WINAPI run(LPVOID lpParameter)
{
    unsigned long long freq;      // timer frequency (counts per second)
    unsigned long long freq60;    // counts per frame at 60 fps
    unsigned long long count;     // current counter value
    unsigned long long nextCount; // counter value at which a full second has elapsed
    unsigned long long nextFrame; // counter value at which the next frame is due
    unsigned long nbframe = 0;    // frames counted so far this second

    QueryPerformanceFrequency((LARGE_INTEGER *)&freq); // grab the timer frequency
    freq60 = freq / 60;                                // counts per frame
    QueryPerformanceCounter((LARGE_INTEGER *)&count);  // current counter value
    nextCount = count + freq;   // when we reach nextCount, a second has elapsed
    nextFrame = count + freq60; // when we reach nextFrame, a frame has elapsed

    while (System::isRunning()) // while the game is running
    {
        ++nbframe; // count this frame
        // do the job

        QueryPerformanceCounter((LARGE_INTEGER *)&count); // current counter value
        if (nextCount <= count) // a second has elapsed
        {
            printf("%lu fps\n", nbframe); // show the fps
            nbframe = 0;                  // reset the frame counter
            nextCount += freq;            // next second limit
        }

        long long res = (long long)nextFrame - (long long)count; // time left before the next frame is due
        if (res > 0) // if there is some to spare
        {
            res = (res * 1000LL) / (long long)freq; // convert counts to milliseconds
            Sleep((DWORD)res);                      // and sleep it off
        }
        nextFrame += freq60; // next frame limit
    }
    return 0;
}


And I think saving CPU time is useful if you have a multiplayer game. If the firewall doesn't have enough time to do its job, it will result in a new layer of lag.

nico
Quote:Original post by Mike2343
Articles please, I've tried it on about 30 systems now and nothing but more accurate time happens. 95/98/ME defaulted to 1 ms resolution; only 2k/XP changed it. I also do my best to use QPC/QPF unless it's an unpatched AMD dual-core processor. Then I roll back to timeGetTime().

Quote:From Guidelines For Providing Multimedia Timer Support
The obvious drawback of this implementation is that a large clock interrupt period could result in long delays. For an application to be notified with more precision on today's systems, it must request a smaller clock interrupt period. In the previous example, if the multimedia application wanted its code executed precisely on time, it would request a clock interrupt period of 1 millisecond. Then, the system would check every millisecond to see if there was work to do, as is illustrated in Figure 2.

While this allows the multimedia application to execute its code and play a sound on time, it also degrades overall system performance. Microsoft tests have found that, while lowering the timer tick frequency to 2 milliseconds has a negligible effect on system performance, a timer tick frequency of less than 2 milliseconds can significantly degrade overall system performance. On faster systems, the cost of lowering the clock interrupt period below 2 milliseconds may become affordable, but the subtle effect of the increased interrupt frequency on cache consistency and power management may not be desirable.
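
For reference, the "smaller clock interrupt period" request in that quote goes through the multimedia timer API. A minimal sketch, assuming winmm.lib is linked:

#include <windows.h>
#include <mmsystem.h> // timeBeginPeriod / timeEndPeriod; link with winmm.lib

void highResolutionSection()
{
    // Ask for a 1 ms clock interrupt period. As the quote warns, anything
    // below 2 ms can measurably degrade overall system performance.
    if (timeBeginPeriod(1) == TIMERR_NOERROR)
    {
        Sleep(1); // now wakes after roughly 1 ms instead of a full 10-15 ms tick

        timeEndPeriod(1); // every timeBeginPeriod must be paired with timeEndPeriod
    }
}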


Quote:From General-Purpose Timing: The Failure of Periodic Timers
We have calibrated an empty loop (a computation phase) to finish after 1 ms, and ran it a million times on a Pentium-IV 2.8GHz Linux machine with 1000 Hz ticks, saving a cycle-resolution timestamp after each phase. No other user processes were executing. At the end of the benchmark we computed the duration of each phase by subtracting successive measurements.
...
Instrumenting the kernel to log all interrupts revealed that the only activity present in the system while the measurements took place were about a million ticks and 3,000 network interrupts, indicating ticks are probably the main cause of the problem. This was verified by repeating the measurements with kernels compiled with 100 and 10 Hz ticks, which experienced far smaller time variability, respectively. But measuring direct overhead of the tick handler indicated that it only accounts for 0.8% of available cycles (using the data from Fig. 3, indirect overhead is found to be about 14%; significant even for a uniprocessor). We therefore concluded that most of the effect is indirect overhead, due to cache misses. This was verified by repeating the experiment with the cache disabled.

(Note: 1000 Hz is the same as 1 ms resolution.)
This was running Linux, but I doubt matters are much different on Windows. Of course it might not have been your average game code, but any code depending on cache consistency is likely to be slowed down significantly, even if 14% might be a bit higher than what occurs under real circumstances.

Quote:If you're making the next Oblivion, Unreal 2037, Doom 12 or whatever, you'll likely be using 100% of all CPUs to blow people's minds. Take the extra time and be kind to the user. If your game kills their batteries in 15 minutes they won't be playing your game that much.

But you still can't expect that Doom 12 will be able to fully utilize the user's computer, because by the time Doom 14 is out most computers will be able to run Doom 12 smoothly on full settings at 100 FPS while only using 10% CPU. Both GTA2 and Age of Empires could easily use 100% CPU without being wasteful when they were released, but today we have much faster hardware.

I somewhat agree with the desktop assertion that we can claim the whole CPU, but I still prefer to save CPU power if I know the extra processing won't improve the gaming experience. Taking up 800W when we can achieve the same with 400W is stupid.

Also, it depends on the game: if some of your users will run in windowed mode to take advantage of other applications at the same time, then you should consider that. It is somewhat rare that users do other things while playing, but I know at least one person who uses an Excel spreadsheet to compute his chances while playing MMORPGs in windowed mode. Even in fullscreen mode you might need to consider this, because the player might have a dual-screen setup and run other applications on the other screen.
Quote:Original post by Ra
If your game carelessly runs at 100% CPU then I'm probably not going to play it. On single processor PCs this can choke other apps, and unnecessarily causes things to get hot. Even in fullscreen mode I don't appreciate everything running in the background taking a huge performance hit when it's not necessary.

"Oh sure, but Windows will take the timeslice away when other programs need it."

Have any of you actually run a program that uses 100% CPU? On a single-processor PC, Explorer slows to a crawl and you have to wait 10 seconds for Task Manager to come up so you can kill the offending app. Windows dishes out longer timeslices than it's possible to run when you have a certain number of applications running, which means that when one program doesn't give up the time it doesn't need (and instead burns it off in some kind of loop), it ends up hurting the performance of other programs. And it doesn't matter if you're in fullscreen or not; it still happens regardless of whether you can see it. I don't appreciate the programs running in the background taking a major performance hit just because your game thinks it needs all my CPU time.

As a final note, notice how many production titles actually use all of your CPU. I can't name one.

This is kind of an annoying trend I've seen around GDNet. 100% CPU is not okay. Sleep(0) will free up what remains of your timeslice, not take away time that you were actually using for anything. Play nice with other programs.


Actually, I have done this. Counter-Strike: Source takes up 99-100% of my CPU on my Intel E6400 with 4GB of RAM. I can task out no problem and don't experience anything you state above. Nor did I on my single-core AMD system (nor do I still).

Since you say you won't play a game that does this, you don't play 95% of major games then? I started up about 10 different commercial games I own and they all hit 99-100% CPU usage. I can task out of all of them easily too.

But yes, still play nice :)
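
For reference, "playing nice" in a main loop costs a single line. A minimal sketch, where updateAndRender() is a hypothetical stand-in for the game's per-frame work:

#include <windows.h>

bool updateAndRender(); // hypothetical per-frame work, defined elsewhere

void mainLoop()
{
    while (updateAndRender())
    {
        // Sleep(0) hands what remains of this timeslice to any ready thread of
        // equal priority, and returns immediately if nothing else is waiting.
        Sleep(0);

        // Sleep(1) would instead yield at least one timer tick, which actually
        // caps CPU usage but relies on the 1 ms timer period discussed above.
    }
}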

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

Quote:Original post by yopyop
Here is my method to get 60 fps with quite good CPU consumption.
First, in the main method I create a separate thread in which the main loop runs; the main method just dispatches messages.

*** Source Snippet Removed ***

And I think saving CPU time is useful if you have a multiplayer game. If the firewall doesn't have enough time to do its job, it will result in a new layer of lag.

nico


Your code will likely break on AMD dual-core processors, just FYI. See the issues with QueryPerformanceCounter and switching cores: the frequency is different on each core, so you can get massive jumps in the readings. Also, on weaker systems your code will use more CPU; it's just a given.
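
The usual workaround for that core-switching issue is to pin the timing reads to a single core. A minimal sketch (not yopyop's code):

#include <windows.h>

// Read QueryPerformanceCounter with the calling thread pinned to core 0,
// so successive readings always come from the same core's counter.
unsigned long long readCounterPinned()
{
    DWORD_PTR oldMask = SetThreadAffinityMask(GetCurrentThread(), 1); // core 0 only

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    SetThreadAffinityMask(GetCurrentThread(), oldMask); // restore the old affinity
    return (unsigned long long)now.QuadPart;
}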

Also, don't worry about the firewall; it's a non-issue. Sending 60 packets a second is nothing at all. You should also limit the number of packets, like most games do anyway: some set it to 30 Hz, some to 60 Hz. And you're assuming a software firewall; don't program for specifics like that. Everyone I personally know has a hardware one in their router.
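
Capping the send rate is just another timer check. A minimal sketch at 30 Hz, where sendSnapshot() is a hypothetical stand-in for the game's actual network send:

#include <windows.h>
#include <mmsystem.h> // timeGetTime; link with winmm.lib

void sendSnapshot(); // hypothetical: serialize and send one state update

void networkTick()
{
    static DWORD nextSend = 0;
    const DWORD sendInterval = 1000 / 30; // ~33 ms between packets at 30 Hz

    DWORD now = timeGetTime();
    if ((long)(now - nextSend) >= 0) // signed difference survives wraparound
    {
        sendSnapshot();
        nextSend = now + sendInterval;
    }
}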

@CTar, thanks for the links I'll read them tomorrow.

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

This topic is closed to new replies.
