CPU consumption in games


Hi all! I've been testing my game on different machines, and I noticed that CPU consumption varied from machine to machine. Why do some machines get 100% CPU while others have 50% and some others only 12~20%? I experimented with my main loop and tested on every machine. Here is my code:
MSG msg;
while(!done)
{
	ZeroMemory(&msg, sizeof(MSG));

	while(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
	{
		TranslateMessage(&msg);
		DispatchMessage(&msg);

		if (msg.message == WM_QUIT) return 0;
	}

	// ... update and render the frame here ...
}
If there's anything wrong or unsafe about this approach, please tell me. I tried adding a Sleep(1) before the ZeroMemory; it brought consumption down quite a bit, but on one of the computers there was lag whenever the keyboard or mouse was in use, as if handling a window message caused it. Thanks for any light on this.

---
Quote:

Why do some machines get 100% CPU

Because on those systems, there was only one core, and it was utilized all the time.

Quote:

while others have 50%

Because there were two cores, and one of them was utilized all the time.

Quote:

and some others only 12~20%?

That one is trickier. It could be vsync preventing those systems from running at full speed, or a slower GPU holding the CPU back.

---
Quote:
Original post by Spoonbender
Quote:

Why do some machines get 100% CPU

Because on those systems, there was only one core, and it was utilized all the time.


Some of them are dual-core machines, and some of them only reach 100% CPU after a certain amount of play time.

edit: are you saying that for a single-core CPU, it is normal to have 100% CPU usage in a 3D-rendered game?

[Edited by - md_lasalle on June 21, 2007 7:09:18 PM]

---
If you are using Windows Task Manager, the percentage displayed for each process is not exact (it's distributed among processes; I forget the exact details here).

If you want to make Task Manager happy, just add Sleep(1) at the end of your main animation loop.

---
A few things. Drop the ZeroMemory() call; there's no point, it's just wasted cycles (PeekMessage fills in the MSG structure for you).

DON'T call a bare Sleep(1); it can actually sleep for up to 5 ms. If you want to sleep for one (1) ms, first call timeBeginPeriod(1) and restore it afterwards with timeEndPeriod(1). This increases the accuracy of Sleep() and timeGetTime() to 1 ms from what I believe is a default of 10 ms on 2000/XP/Vista. I call timeBeginPeriod() before entering my loop and restore the period after it (when the game is shutting down and timing accuracy no longer matters).

I use Sleep(0), which just gives up the rest of your time slice. It's polite if you have spare cycles.

Oh, and as a side note: if your game is fullscreen, don't worry about how much CPU you use. Heck, I don't care even in windowed mode with ours. They chose to start my application :) Because I'm polite about yielding, it might look like 100% usage, but it really isn't.
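
A minimal sketch of the pattern described above (the loop body is a hypothetical placeholder; winmm.lib must be linked for the timer functions):

#include <windows.h>
#include <mmsystem.h>               // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")

int RunGameLoop(volatile bool &done)
{
	timeBeginPeriod(1);         // raise the scheduler resolution to ~1 ms

	while (!done)
	{
		// ... pump messages, update, render ...
		Sleep(1);           // with the 1 ms period set, this sleeps close to 1 ms
	}

	timeEndPeriod(1);           // always restore the previous resolution
	return 0;
}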

---
Quote:
Original post by md_lasalle
Quote:
Original post by Spoonbender
Quote:

Why do some machines get 100% CPU

Because on those systems, there was only one core, and it was utilized all the time.


Some of them are dual-core machines, and some of them only reach 100% CPU after a certain amount of play time.

edit: are you saying that for a single-core CPU, it is normal to have 100% CPU usage in a 3D-rendered game?


It's perfectly normal, and probably preferred. The operating system dishes out CPU time to each application as needed. (To prove this, run an infinite loop; it won't freeze the rest of the computer, because the OS will still give the other processes the time they need to run.) HOWEVER, this is where priorities come in: if you set your application's priority higher than 'normal', the OS will give it more CPU time, starving the rest of your computer.

So, 100% CPU time is normal for games and other 3D apps; just don't mess with the process priorities.

~zix

EDIT: Also, get Process Explorer. It gives CPU usage to two decimal places, so you can see a little more clearly what's using what.
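
For illustration only, this is the API being warned about (a sketch; raising the class is exactly what the advice above says not to do):

#include <windows.h>

int main()
{
	// Raising the priority class makes the scheduler favor this process.
	// Combined with a busy loop it can starve normal-priority apps, which
	// is why the advice above is to leave games at the normal priority.
	SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
	return 0;
}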

---
Quote:
Original post by Mike2343
A few things. Drop the ZeroMemory() call; there's no point, it's just wasted cycles.

DON'T call a bare Sleep(1); it can actually sleep for up to 5 ms. If you want to sleep for one (1) ms, first call timeBeginPeriod(1) and restore it afterwards with timeEndPeriod(1). This increases the accuracy of Sleep() and timeGetTime() to 1 ms from what I believe is a default of 10 ms on 2000/XP/Vista. I call timeBeginPeriod() before entering my loop and restore the period after it (when the game is shutting down and timing accuracy no longer matters).

I use Sleep(0), which just gives up the rest of your time slice. It's polite if you have spare cycles.

Oh, and as a side note: if your game is fullscreen, don't worry about how much CPU you use. Heck, I don't care even in windowed mode with ours. They chose to start my application :) Because I'm polite about yielding, it might look like 100% usage, but it really isn't.


Thanks for the good suggestions; I'm playing with that right now, and it's working perfectly on my laptop.

I agree that I shouldn't care for a fullscreen game, but with today's laptops people do play on the go, so if I can reduce CPU usage from 100% to 50%, it's better for heat, battery life, etc.

Can you be more precise about where you use Sleep(0) in your code? I tried it and it doesn't seem to give the CPU a break, while Sleep(1) does.

Oh, and by the way, turning vsync on helps a lot with CPU load, but I do all my tests with vsync off to measure the difference.

---
Sleep(0) doesn't give the CPU a break; it just allows other processes to run, if necessary. If you want to rest the CPU, you need a value of 1 or above. Even that isn't guaranteed to rest the CPU if other processes are running and take advantage of that free time, but there's little you can do about that.

---
Quote:
Original post by Kylotan
Sleep(0) doesn't give the CPU a break; it just allows other processes to run, if necessary. If you want to rest the CPU, you need a value of 1 or above. Even that isn't guaranteed to rest the CPU if other processes are running and take advantage of that free time, but there's little you can do about that.


Indeed. Sleep(0) basically hands control back to the OS kernel and gives the kernel's process scheduler a chance to figure out which process is next in line for execution. If your process is the only one in line, it gets the CPU back immediately, so the CPU is not freed up. If another process is waiting for CPU time, that one gets control.

Calling Sleep() with a value of 1 or more puts your process into a suspended state, and it is guaranteed not to receive CPU time until the next time the scheduler checks whether it can schedule a process. Mike2343 says this can be 5 milliseconds, and that could very well be true.

There is one *possible* negative effect of Sleep(0): when a second process is consuming a lot of CPU, your own process might not get any CPU time for a "significant" period. I think that would be at least 25 milliseconds on Windows, the length of a "timeslice". However, Mike2343 is right that it is "polite" to give up CPU control whenever you can; it is then the user's responsibility to close any heavy applications.

Modern desktop OSes use a concept called timeslicing; if you want to learn more about process scheduling, read up on that topic.

---
Indeed, if you use the timeBeginPeriod(1) stuff you get a much more accurate Sleep() function, so Sleep(1) will likely take 1~2 ms. I place Sleep(0) at the bottom of my main loop. I also check whether we have extra cycles (time); if the application doesn't want to do extra AI or similar work, I give the time to the OS with a Sleep(1).

The only time Sleep(1) can take up to 5 ms (sometimes more!) is when you haven't set timeBeginPeriod(), or when the next process is hogging, of course.
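
A rough sketch of that placement (UpdateGame/RenderFrame and the 16 ms budget are hypothetical, and timeBeginPeriod(1) is assumed to be in effect):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

void UpdateGame();   // hypothetical: advance the simulation
void RenderFrame();  // hypothetical: draw the current state

void RunOneFrame()
{
	const DWORD frameBudgetMs = 16;   // hypothetical ~60 Hz budget
	DWORD start = timeGetTime();

	UpdateGame();
	RenderFrame();

	DWORD elapsed = timeGetTime() - start;
	if (elapsed + 2 < frameBudgetMs)
		Sleep(1);                 // genuine spare time: rest the CPU
	else
		Sleep(0);                 // no spare time: just offer up the slice
}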

---
Why is everyone so hyped up over the difference between 1 ms and 5 ms? It's a game; you have your own timer that determines how much time has elapsed, so you can act accordingly. It's unlikely that you'll even be able to tell the difference.

When writing Windows code, I update my graphics in WM_PAINT but don't validate the region (so another WM_PAINT message is sent). My game stays below 10% CPU usage and is very smooth.

Sometimes I'll put a Sleep(20) at the end of the message loop instead. It all depends on your game.
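
A sketch of that WM_PAINT trick (RenderFrame is a hypothetical stand-in for the game's own drawing code):

#include <windows.h>

void RenderFrame();  // hypothetical: the game's drawing routine

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
	switch (msg)
	{
	case WM_PAINT:
		RenderFrame();
		// Deliberately no BeginPaint/EndPaint/ValidateRect: the update
		// region stays invalid, so Windows posts another WM_PAINT and
		// the game keeps repainting.
		return 0;
	case WM_DESTROY:
		PostQuitMessage(0);
		return 0;
	}
	return DefWindowProc(hwnd, msg, wp, lp);
}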

---
Quote:
Original post by Wavarian
Why is everyone so hyped up over the difference between 1 ms and 5 ms? It's a game; you have your own timer that determines how much time has elapsed, so you can act accordingly. It's unlikely that you'll even be able to tell the difference.

When writing Windows code, I update my graphics in WM_PAINT but don't validate the region (so another WM_PAINT message is sent). My game stays below 10% CPU usage and is very smooth.

Sometimes I'll put a Sleep(20) at the end of the message loop instead. It all depends on your game.


If you use your own timer, your process is still spinning to run that timer; if you use Sleep(), control goes back to the OS.
If you are in a single-threaded game running at 30 fps and you put a Sleep(20) in, you are "wasting" ~20 ms of your 33.3 ms frame time, which is fine unless your game needs that time for something.

Degra

---
Quote:
Original post by md_lasalle
Quote:
Original post by Spoonbender
Quote:

Why do some machines get 100% CPU

Because on those systems, there was only one core, and it was utilized all the time.


Some of them are dual-core machines, and some of them only reach 100% CPU after a certain amount of play time.

edit: are you saying that for a single-core CPU, it is normal to have 100% CPU usage in a 3D-rendered game?


From what I understand, some dual-core chips overclock the core in use when the other is idle. At least I've heard of such plans; I'm not sure whether any of it has actually been put to market...

---
Quote:
Original post by md_lasalle
Hi all! I've been testing my game on different machines, and I noticed that CPU consumption varied from machine to machine.

Why do some machines get 100% CPU while others have 50% and some others only 12~20%?

I experimented with my main loop and tested on every machine. Here is my code:

MSG msg;
while(!done)
{
	ZeroMemory(&msg, sizeof(MSG));

	while(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
	{
		TranslateMessage(&msg);
		DispatchMessage(&msg);

		if (msg.message == WM_QUIT) return 0;
	}

	// ... update and render the frame here ...
}

If there's anything wrong or unsafe about this approach, please tell me.

I tried adding a Sleep(1) before the ZeroMemory; it brought consumption down quite a bit, but on one of the computers there was lag whenever the keyboard or mouse was in use, as if handling a window message caused it.

Thanks for any light on this.

If it's a turn-based game, you should probably use GetMessage; it's friendlier to laptops, since GetMessage blocks until a message arrives and the process uses essentially no CPU while idle. You might also consider it for pause loops that aren't doing anything.

MSG msg;
while(GetMessage(&msg, NULL, 0, 0) > 0)  // returns 0 on WM_QUIT, -1 on error
{
	TranslateMessage(&msg);
	DispatchMessage(&msg);
}
return 0;

---
Quote:
Original post by Wavarian
Why is everyone so hyped up over the difference between 1 ms and 5 ms? It's a game; you have your own timer that determines how much time has elapsed, so you can act accordingly. It's unlikely that you'll even be able to tell the difference.


That's a 4 ms difference. If you're making a Pong clone, no biggie, but in any semi-professional or better engine it does matter. I want my logic to run every x ms with as little variation as possible. Something as inaccurate as Sleep() without timeBeginPeriod(), jumping randomly between 1 and 5 ms, is a big deal. Yes, I use a fixed time step, but you can still run into issues.

Quote:

When writing Windows code, I update my graphics in WM_PAINT but don't validate the region (so another WM_PAINT message is sent). My game stays below 10% CPU usage and is very smooth.


Have you measured the FPS? From my understanding, WM_PAINT gives you no more than 30 updates a second (I could be wrong, but I'd still never use it), and 30 fps isn't what I'd call smooth so much as a bare minimum, but YMMV. I have my rendering and logic updates independent, but you can specify a maximum FPS with our engine, or leave it flat-out unlimited, which is the default.

Quote:

Sometimes I'll put a Sleep(20) at the end of the message loop instead. It all depends on your game.


Indeed; with that many spare cycles you're making a very basic game (don't take this the wrong way). There are plenty of games like that, but it's far from suiting most people on these boards and their ambitious projects :)

If the above works for you, great. But never knock accuracy; some of us like it ;-)
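
A minimal sketch of the fixed-time-step idea mentioned above (UpdateLogic/RenderFrame/GameRunning are hypothetical names, and the 10 ms step is an arbitrary choice):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

void UpdateLogic(DWORD stepMs);  // hypothetical: advance game state one step
void RenderFrame();              // hypothetical: draw the current state
bool GameRunning();              // hypothetical: false once the player quits

void MainLoop()
{
	const DWORD stepMs = 10;          // logic ticks every 10 ms
	DWORD accumulator = 0;
	DWORD last = timeGetTime();

	while (GameRunning())
	{
		DWORD now = timeGetTime();
		accumulator += now - last;
		last = now;

		while (accumulator >= stepMs)   // catch up in fixed steps
		{
			UpdateLogic(stepMs);
			accumulator -= stepMs;
		}
		RenderFrame();                  // render as often as time allows
	}
}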

---
Quote:
Original post by Mike2343
Oh, and as a side note: if your game is fullscreen, don't worry about how much CPU you use. Heck, I don't care even in windowed mode with ours. They chose to start my application :) Because I'm polite about yielding, it might look like 100% usage, but it really isn't.


You've got to be kidding me. CPU usage IS important, especially on today's laptops and low-power CPUs! So do free up the cycles your game doesn't need; it's the right thing to do, and it keeps the CPU fans running slow and silent as well.

---
I think games should use all the hardware in the computer as much as possible. I don't mean uselessly wasting CPU cycles, but if the machine can handle it, put more moving boxes on the screen (physics) or recalculate waypoints more often (AI).
Games can use 100% CPU; what else are you doing while you're playing?
And if a game uses only 5% of your mega super 5 GHz quad-core CPU, why do you have it?

---
Quote:
Original post by Gagyi
I think games should use all the hardware in the computer as much as possible. I don't mean uselessly wasting CPU cycles, but if the machine can handle it, put more moving boxes on the screen (physics) or recalculate waypoints more often (AI).
Games can use 100% CPU; what else are you doing while you're playing?
And if a game uses only 5% of your mega super 5 GHz quad-core CPU, why do you have it?


You have it because you might sometimes really need it, but when playing Pong you don't. I know some of the guys at my school sometimes play GTA2, N, or AoE1, and none of those should use 100% CPU, because they can run with much less. I don't want to draw 600 watts from the outlet because some programmer refused to consider power. I don't want my laptop's battery to last 15 minutes because some programmer refused to consider power. Power is becoming increasingly important today, and you can see all the hardware companies trying to reduce power consumption. We, as software developers, can make just as much impact if we try.

Let the user explicitly set quality settings like graphics quality, reflections, etc., and warn them if their system is capable of much more, but don't force them to waste power. If they are fully utilizing their system, there is no need to spend time limiting the framerate. But if the application runs at 1500 FPS, the user loses nothing by having the framerate limited to the refresh rate. Personally I think vsync is the best approach to limiting FPS, but it's also possible with the HPET timer on Vista (through QueryPerformanceCounter) or a somewhat lower-resolution timer on XP.

Quote:
Indeed, if you use the timeBeginPeriod(1) stuff you get a much more accurate Sleep() function.

Of course, calling timeBeginPeriod(1) will significantly degrade system performance, far beyond what is acceptable for many games. I have seen sources claim an average 14% slowdown, which is too much to ignore for most games. According to Microsoft, their tests show that lowering the timer tick to 2 ms has only negligible effects on performance, but lowering it below 2 ms significantly degrades overall system performance.

Clearly the limit shouldn't be unconditional. Some people just put a Sleep(n) at the end of their game loop, but if you are actually using the CPU this is an extremely bad approach. Instead, determine how much you can safely sleep without eating into the next frame's time. For example, to limit the framerate to 60 FPS, we want to spend 16.7 ms per frame. If you have spent 16.7 ms or more in a frame, don't call Sleep; just go on to the next frame (and consider alerting the user if this happens several frames in a row). If you have spent less than 16.7 ms, but the difference is still less than the tick period (the value set by timeBeginPeriod), you shouldn't call Sleep either, since you can't expect it to return before the next frame is due. Only when the spare time exceeds the tick period should you call Sleep, with an appropriate parameter: Sleep(16.7ms - dt - tickrate) is a good approach, where dt is the time used by this frame and tickrate is the value you passed to timeBeginPeriod. You might also want to account for dt being somewhat imprecise because of timer inaccuracy.
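
A sketch of that limiter under the stated assumptions (frameMs, tickMs, and dtMs are hypothetical names; dtMs is the measured cost of the current frame, and timeBeginPeriod(tickMs) is assumed to be in effect):

#include <windows.h>

void LimitFrameRate(DWORD dtMs)
{
	const DWORD frameMs = 17;   // ~16.7 ms budget for 60 FPS
	const DWORD tickMs  = 1;    // the value passed to timeBeginPeriod

	if (dtMs >= frameMs)
		return;                 // over budget: start the next frame immediately
	DWORD spare = frameMs - dtMs;
	if (spare <= tickMs)
		return;                 // too little spare time for Sleep to return on time
	Sleep(spare - tickMs);      // sleep the spare time, minus one tick of slack
}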

Quote:
If it's a turn-based game, you should probably use GetMessage; it's friendlier to laptops. You might also consider it for pause loops that aren't doing anything.

The problem is that animation is still likely to be running while someone is waiting to make a move or while the game is paused, so GetMessage is rarely an adequate solution. Preloading might also be happening while waiting for someone to move.

---
Quote:
Original post by CTar
You have it because you might sometimes really need it, but when playing Pong you don't. I know some of the guys at my school sometimes play GTA2, N, or AoE1, and none of those should use 100% CPU, because they can run with much less. I don't want to draw 600 watts from the outlet because some programmer refused to consider power. I don't want my laptop's battery to last 15 minutes because some programmer refused to consider power. Power is becoming increasingly important today, and you can see all the hardware companies trying to reduce power consumption. We, as software developers, can make just as much impact if we try.

Let the user explicitly set quality settings like graphics quality, reflections, etc., and warn them if their system is capable of much more, but don't force them to waste power. If they are fully utilizing their system, there is no need to spend time limiting the framerate. But if the application runs at 1500 FPS, the user loses nothing by having the framerate limited to the refresh rate. Personally I think vsync is the best approach to limiting FPS, but it's also possible with the HPET timer on Vista (through QueryPerformanceCounter) or a somewhat lower-resolution timer on XP.


I agree; that's why I said our engine has a setting for max FPS and logic updates (though the user doesn't get to set the logic rate, obviously). It also lets you know when you're running fast (just re-rendering the same scene), and you can have a custom function called during this time that does, say, extra AI or physics; the engine doesn't care. It gets x ms based on what's left over and then cuts out, or it sleeps for a bit less than the extra time.

Quote:
Indeed, if you use the timeBeginPeriod(1) stuff you get a much more accurate Sleep() function. Of course, calling timeBeginPeriod(1) will significantly degrade system performance, far beyond what is acceptable for many games. I have seen sources claim an average 14% slowdown, which is too much to ignore for most games. According to Microsoft, their tests show that lowering the timer tick to 2 ms has only negligible effects on performance, but lowering it below 2 ms significantly degrades overall system performance.


Articles, please. I've tried this on about 30 systems now, and nothing but more accurate timing happens. 95/98/ME defaulted to 1 ms resolution; only 2k/XP changed it. I also do my best to use QPC/QPF unless it's an unpatched AMD dual-core processor; then I fall back to timeGetTime().

Quote:

Clearly the limit shouldn't be unconditional. Some people just put a Sleep(n) at the end of their game loop, but if you are actually using the CPU this is an extremely bad approach. Instead, determine how much you can safely sleep without eating into the next frame's time. For example, to limit the framerate to 60 FPS, we want to spend 16.7 ms per frame. If you have spent 16.7 ms or more in a frame, don't call Sleep; just go on to the next frame (and consider alerting the user if this happens several frames in a row). If you have spent less than 16.7 ms, but the difference is still less than the tick period (the value set by timeBeginPeriod), you shouldn't call Sleep either, since you can't expect it to return before the next frame is due. Only when the spare time exceeds the tick period should you call Sleep, with an appropriate parameter: Sleep(16.7ms - dt - tickrate) is a good approach, where dt is the time used by this frame and tickrate is the value you passed to timeBeginPeriod. You might also want to account for dt being somewhat imprecise because of timer inaccuracy.


Covered above, but left in because this is exactly how it should be done.

I don't like the idea of telling the user, though. Not because I want to hide anything, but because I don't want to interrupt the game for them. I would pop up a window the next time they start the game, before they get into it. Gah, intruding on the gaming experience is a tough call for me. I'd leave it for the designer to decide ;-)

Edit:
The honest moral of the story: use as little as you need while still maintaining the best gaming experience possible. If you're making Pong and taking 100% of the CPU, bad, bad programmer! If you're making the next Oblivion, Unreal 2037, Doom 12 or whatever, you'll likely be using 100% of all CPUs to blow people's minds. Take the extra time and be kind to the user: if your game kills their battery in 15 minutes, they won't be playing it much. Desktop? *shrug* Who cares. But we do need to remember those poor laptop guys as laptops become more powerful and more common.

---
If your game carelessly runs at 100% CPU, then I'm probably not going to play it. On single-processor PCs this can choke other apps, and it unnecessarily causes things to get hot. Even in fullscreen mode I don't appreciate everything running in the background taking a huge performance hit when it's not necessary.

"Oh sure, but Windows will take the timeslice away when other programs need it."

Have any of you actually run a program that uses 100% CPU? On a single-processor PC, Explorer slows to a crawl and you wait 10 seconds for Task Manager to come up so you can kill the offending app. Once a certain number of applications are running, Windows hands out more timeslice time than the CPU can actually cover, which means that when one program burns off time it doesn't need (in some kind of busy loop) instead of giving it up, it hurts the performance of the other programs. And it doesn't matter whether you're fullscreen or not; it still happens regardless of whether you can see it. I don't appreciate the programs running in the background taking a major performance hit just because your game thinks it needs all my CPU time.

As a final note, notice how many production titles actually use all of your CPU. I can't name one.

This is an annoying trend I've seen around GDNet: 100% CPU is not okay. Sleep(0) will free up what remains of your timeslice, not take away time that you were actually using for anything. Play nice with other programs.

---
Here is my method for getting 60 fps with quite good CPU consumption.
First, in the main method I create a separate thread in which the main loop runs; the main method just dispatches messages.


DWORD WINAPI run(LPVOID lpParameter)
{
	unsigned long long freq;      // counter frequency (ticks per second)
	unsigned long long freq60;    // ticks per frame at 60 fps
	unsigned long long count;     // current counter value
	unsigned long long nextCount; // counter value when a full second has elapsed
	unsigned long long nextFrame; // counter value when the next frame is due
	unsigned long nbframe = 0;

	// initialisation
	QueryPerformanceFrequency((LARGE_INTEGER *)&freq); // grab the counter frequency
	freq60 = freq / 60;                                // ticks per frame
	QueryPerformanceCounter((LARGE_INTEGER *)&count);  // current counter value
	nextCount = count + freq;   // when we reach nextCount, a second has elapsed
	nextFrame = count + freq60; // when we reach nextFrame, a frame has elapsed

	while(System::isRunning()) // while the game is running
	{
		++nbframe; // count this frame
		// do the job

		QueryPerformanceCounter((LARGE_INTEGER *)&count);

		if(nextCount <= count) // a second has elapsed
		{
			printf("%lu\n", nbframe); // show the fps
			nbframe = 0;              // start back at 0
			nextCount += freq;        // next one-second mark
		}

		long long res = (long long)nextFrame - (long long)count; // time left in this frame
		if(res > 0) // if there's some
		{
			res = (res * 1000LL) / (long long)freq; // convert ticks to milliseconds
			Sleep((DWORD)res);                      // and sleep
		}
		nextFrame += freq60; // next frame mark
	}
	return 0;
}



And I think saving CPU time is useful if you have a multiplayer game: if the firewall doesn't get enough time to do its job, the result is a new layer of lag.

nico

---
Quote:
Original post by Mike2343
Articles, please. I've tried this on about 30 systems now, and nothing but more accurate timing happens. 95/98/ME defaulted to 1 ms resolution; only 2k/XP changed it. I also do my best to use QPC/QPF unless it's an unpatched AMD dual-core processor; then I fall back to timeGetTime().

Quote:
From Guidelines For Providing Multimedia Timer Support
The obvious drawback of this implementation is that a large clock interrupt period could result in long delays. For an application to be notified with more precision on today's systems, it must request a smaller clock interrupt period. In the previous example, if the multimedia application wanted its code executed precisely on time, it would request a clock interrupt period of 1 millisecond. Then, the system would check every millisecond to see if there was work to do, as is illustrated in Figure 2.

While this allows the multimedia application to execute its code and play a sound on time, it also degrades overall system performance. Microsoft tests have found that, while lowering the timer tick frequency to 2 milliseconds has a negligible effect on system performance, a timer tick frequency of less than 2 milliseconds can significantly degrade overall system performance. On faster systems, the cost of lowering the clock interrupt period below 2 milliseconds may become affordable, but the subtle effect of the increased interrupt frequency on cache consistency and power management may not be desirable.


Quote:
From General-Purpose Timing: The Failure of Periodic Timers
We have calibrated an empty loop (a computation phase) to finish after 1 ms, and ran it a million times on a Pentium-IV 2.8GHz Linux machine with 1000 Hz ticks, saving a cycle-resolution timestamp after each phase. No other user processes were executing. At the end of the benchmark we computed the duration of each phase by subtracting successive measurements.
...
Instrumenting the kernel to log all interrupts revealed that the only activity present in the system while the measurements took place were about a million ticks and 3,000 network interrupts, indicating ticks are probably the main cause of the problem. This was verified by repeating the measurements with kernels compiled with 100 and 10 Hz ticks, which experienced far smaller time variability, respectively. But measuring direct overhead of the tick handler indicated that it only accounts for 0.8% of available cycles (using the data from Fig. 3, indirect overhead is found to be about 14%, significant even for a uniprocessor). We therefore concluded that most of the effect is indirect overhead, due to cache misses. This was verified by repeating the experiment with the cache disabled.

(Note: 1000 Hz ticks are the same as 1 ms resolution.)
This was on Linux, but I doubt matters are much different on Windows. Of course, it might not have been your average game code, but any code depending on cache consistency is likely to be slowed down significantly, even if 14% is probably higher than what occurs under real circumstances.

Quote:
If you're making the next Oblivion, Unreal 2037, Doom 12 or whatever, you'll likely be using 100% of all CPUs to blow people's minds. Take the extra time and be kind to the user: if your game kills their battery in 15 minutes, they won't be playing it much.

But you still can't expect Doom 12 to fully utilize the user's computer, because by the time Doom 14 is out, most computers will run Doom 12 smoothly at full settings at 100 FPS while using only 10% CPU. Both GTA2 and Age of Empires could easily use 100% CPU without being wasteful when they were released, but today we have much faster hardware.

I somewhat agree with the desktop assertion that we can claim the whole CPU, but I still prefer to save CPU power when I know the extra processing won't improve the gaming experience. Drawing 800 W when we can achieve the same with 400 W is stupid.

It also depends on the game: if some of your users run in windowed mode to use other applications at the same time, you should consider that. It is somewhat rare for users to do other things while playing, but I know at least one person who uses an Excel spreadsheet to compute his odds while playing MMORPGs in windowed mode. Even in fullscreen you might need to consider this, because the player might have a dual-screen setup and run other applications on the second screen.

---
Quote:
Original post by Ra
If your game carelessly runs at 100% CPU, then I'm probably not going to play it. On single-processor PCs this can choke other apps, and it unnecessarily causes things to get hot. Even in fullscreen mode I don't appreciate everything running in the background taking a huge performance hit when it's not necessary.

"Oh sure, but Windows will take the timeslice away when other programs need it."

Have any of you actually run a program that uses 100% CPU? On a single-processor PC, Explorer slows to a crawl and you wait 10 seconds for Task Manager to come up so you can kill the offending app. Once a certain number of applications are running, Windows hands out more timeslice time than the CPU can actually cover, which means that when one program burns off time it doesn't need (in some kind of busy loop) instead of giving it up, it hurts the performance of the other programs. And it doesn't matter whether you're fullscreen or not; it still happens regardless of whether you can see it. I don't appreciate the programs running in the background taking a major performance hit just because your game thinks it needs all my CPU time.

As a final note, notice how many production titles actually use all of your CPU. I can't name one.

This is an annoying trend I've seen around GDNet: 100% CPU is not okay. Sleep(0) will free up what remains of your timeslice, not take away time that you were actually using for anything. Play nice with other programs.


Actually, I have done this. Counter-Strike: Source takes up 99-100% of my CPU on my Intel E6400 with 4GB of RAM. I can task out no problem and don't experience anything you describe above. Nor did I on my single-core AMD system (nor do I still).

Since you say you won't play a game that does this, do you not play 95% of major games? I started up about 10 different commercial games I own and all hit 99-100% CPU usage. I can task out of all of them easily, too.

But yes, still play nice :)

---
Quote:
Original post by yopyop
Here is my method for getting 60 fps with quite good CPU consumption.
First, in the main method I create a separate thread in which the main loop runs; the main method just dispatches messages.

*** Source Snippet Removed ***

And I think saving CPU time is useful if you have a multiplayer game: if the firewall doesn't get enough time to do its job, the result is a new layer of lag.

nico


Your code will likely break on AMD dual-core processors, just FYI; see the known issues with QueryPerformanceCounter when a thread switches cores. The counters can differ between cores, so you can see massive jumps. Also, on weaker systems your code will use more CPU; that's just a given.

Also, don't worry about the firewall; it's a non-issue. Sending 60 packets a second is nothing at all. You should limit the number of packets like most games do anyway; some cap at 30 Hz, some at 60 Hz. And you're assuming a software firewall; don't program for specifics like that. Everyone I personally know has a hardware firewall in their router.
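
One common mitigation for that QPC core-switching problem is to pin the timing thread to a single core. A sketch only, under the assumption that just the timing reads need protection (ReadTimerOnOneCore is a hypothetical name):

#include <windows.h>

LONGLONG ReadTimerOnOneCore()
{
	HANDLE thread = GetCurrentThread();
	DWORD_PTR old = SetThreadAffinityMask(thread, 1);  // pin to core 0

	LARGE_INTEGER now;
	QueryPerformanceCounter(&now);  // always read the same core's counter

	SetThreadAffinityMask(thread, old);  // restore the previous affinity
	return now.QuadPart;
}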

@CTar, thanks for the links I'll read them tomorrow.

---