My game server is eating my CPU

I use an algorithm similar to EJH's, but my server consumes 45% CPU if I do Sleep(1). I have to bump it up to Sleep(30) or so for CPU usage to drop low enough. How can that be? Just checking the timer shouldn't be much of a CPU hog, even if it's done 1000 times per second.

[quote]this happens at tick 1+offset 2ms, this one in tick 3+offset 4ms.[/quote]

In my opinion, there should be no offsets within the ticks. Everything happens "at" the tick. On the CPU, of course, some things happen before other things, but they are all logically expected to happen during that particular tick.

The only time when sub-ticks matter is when you do presentation things like animations and sound effects -- and those are entirely client-side, derived from the simulation state, and thus do not need any explicit sub-frame simulation synchronization.
enum Bool { True, False, FileNotFound };

hplus, can you give an example of an algorithm similar to EJH's that uses select() instead of Sleep() and the clock? His algorithm is more or less like mine.

The last argument to select() is a structure holding a timeout suggestion. Sleep() will wait for AT LEAST the requested timeout, regardless of whether anything is waiting. select(), on the other hand, will wait for UP TO the requested timeout if nothing is happening on the socket sets being polled; if something does happen on those sockets, select() returns immediately, whether or not the timeout has expired. I believe libevent has a similar mechanism, but I don't recall the details right now.

That being said, if you are already using select() to poll sockets, just put a small timeout in the timeout argument. Note that this argument is a timeval structure, not just a simple number (see the sketch below).

man select
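For what it's worth, here's a minimal sketch of that pattern in C, assuming a single non-blocking UDP socket that's already created and bound; the function name, the sock parameter, and the 50 ms budget are all illustrative:

#include <sys/select.h>

void run_tick_loop(int sock)
{
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);

        /* select() may modify the timeout, so reinitialize it every pass.
         * Wait up to 50 ms, but return early if the socket becomes readable. */
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 50 * 1000;

        int ready = select(sock + 1, &readfds, NULL, NULL, &timeout);
        if (ready > 0 && FD_ISSET(sock, &readfds)) {
            /* drain incoming packets here (recvfrom, etc.) */
        }

        /* check the clock and run however many fixed ticks are now due */
    }
}

Unlike a plain Sleep() loop, this burns essentially no CPU while idle but still wakes up the instant a packet arrives.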
Evillive2

[quote name='ApochPiQ' timestamp='1311204664' post='4838187']
[quote]this happens at tick 1+offset 2ms, this one in tick 3+offset 4ms.[/quote]


In my opinion, there should be no offsets within the ticks. Everything happens "at" the tick. On the CPU, of course, some things happen before other things, but they are all logically expected to happen during that particular tick.

The only time when sub-ticks matter is when you do presentation things like animations and sound effects -- and those are entirely client-side, derived from the simulation state, and thus do not need any explicit sub-frame simulation synchronization.
[/quote]

In the common case I would tend to agree, but there are times (such as when doing expensive physics simulation server-side) when it's nice to be able to offset within a tick and do some tweening from that to get to the client-presented values. It also helps a bit with perceived latency in certain edge cases.

Suppose you need to run complex physics in addition to some higher-level game logic. Your physics threads get snarled on a nasty collision resolution, for instance, and take slightly longer than the tick budget to return. Instead of deferring the results of the collision by an entire tick, or assigning them to the prior tick and possibly getting a premature collision-response animation, you use an offset to hint to the client that it needs to interpolate to make things look continuous (see the sketch below).
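A hypothetical sketch of what such a hint could look like, assuming a fixed tick length known to both sides; the struct, fields, and function names are made up for illustration, not from any particular engine:

/* The server stamps a physics result with the tick it belongs to plus a
 * millisecond offset into that tick; the client converts that to a
 * presentation time and lerps toward it. */
typedef struct {
    unsigned tick;        /* simulation tick the result belongs to */
    unsigned offset_ms;   /* how far into that tick it actually landed */
    float    pos[3];      /* resolved post-collision position */
} PhysicsResult;

/* Client side: turn tick + offset into seconds on the shared timeline. */
float result_time_seconds(const PhysicsResult *r, float tick_seconds)
{
    return (float)r->tick * tick_seconds + (float)r->offset_ms / 1000.0f;
}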

Coupled with roundtrip time estimates, you can use this to help resolve "I shot you first" type situations, although admittedly it requires a degree of care to ensure that the arbitration actually produces results that "feel" correct. Bungie did some interesting stuff with this in Reach, and talked about it at length at GDC 2011.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]


[quote]Suppose you need to run complex physics in addition to some higher-level game logic. Your physics threads get snarled on a nasty collision resolution, for instance, and take slightly longer than the tick budget to return. Instead of deferring the results of the collision by an entire tick, or assigning them to the prior tick and possibly getting a premature collision-response animation, you use an offset to hint to the client that it needs to interpolate to make things look continuous.[/quote]

There is no case in reality where this makes sense. If your physics simulation takes longer to run one tick than the duration of a tick, you're likely heading for the Death Spiral of Death. However, let's assume there's a temporary CPU stall of about one tick's worth of time -- maybe because of virtualization, maybe because of scheduling, maybe because a backup process started -- whatever. Then what? How is this different from a network stall of about one tick's worth of time?
Your system needs to be able to deal with this, typically by adapting the estimated clock offset when there's a snag. In physics simulation time, there is only "the step," and no events happen at a resolution finer than "the step." Separately, your client may lerp the display of various events to times between the actual "step" times, but that's entirely a client-side decision. There is no case in a fixed time-step simulation where it makes sense for the server to try to offset events by less than a step.
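To make "there is only the step" concrete, here's a minimal fixed-timestep sketch; the clock function, tick length, and catch-up cap are assumptions for illustration, not anything prescribed in the thread:

#include <stdint.h>

#define TICK_MS     50   /* fixed step */
#define MAX_CATCHUP  5   /* bail out and re-baseline after a long stall */

extern uint64_t now_ms(void);          /* assumed monotonic clock */
extern void simulate_one_tick(void);   /* all events happen "at" the tick */

void advance_simulation(uint64_t *next_tick_ms)
{
    int steps = 0;
    while (now_ms() >= *next_tick_ms && steps < MAX_CATCHUP) {
        simulate_one_tick();
        *next_tick_ms += TICK_MS;
        steps++;
    }
    if (steps == MAX_CATCHUP) {
        /* Stalled badly: re-baseline the clock offset instead of trying to
         * simulate the whole backlog (the Death Spiral of Death). */
        *next_tick_ms = now_ms() + TICK_MS;
    }
}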
enum Bool { True, False, FileNotFound };
Decoupled physics simulations can run into this. Game logic can be set to, say, 30 Hz, with physics set to run as fast as possible (which may be closer to 10-20 Hz). This can also occur if you run multiple simulation contexts within the same network hosting process.

I'm not going to argue that it's the best thing ever, but it certainly does happen (see the sketch below).
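A rough sketch of the decoupled shape being described, with made-up names: the logic rate is fixed while physics integrates over whatever wall time has elapsed, so its results land at arbitrary offsets inside logic ticks:

#include <stdint.h>

#define LOGIC_TICK_MS 33  /* ~30 Hz game logic */

extern uint64_t now_ms(void);                          /* monotonic clock, assumed */
extern void logic_tick(void);                          /* fixed-rate game logic */
extern void physics_step(uint64_t from, uint64_t to);  /* runs as fast as it can */

void frame(uint64_t *next_logic_ms, uint64_t *last_physics_ms)
{
    /* Game logic advances in fixed steps. */
    while (now_ms() >= *next_logic_ms) {
        logic_tick();
        *next_logic_ms += LOGIC_TICK_MS;
    }

    /* Physics covers the wall time since its last pass, which generally
     * does not line up with a logic tick boundary. */
    uint64_t t = now_ms();
    physics_step(*last_physics_ms, t);
    *last_physics_ms = t;
}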

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]


[quote]Decoupled physics simulations can run into this. Game logic can be set to, say, 30 Hz, with physics set to run as fast as possible (which may be closer to 10-20 Hz). This can also occur if you run multiple simulation contexts within the same network hosting process.[/quote]

I would go so far as to say that variable-frame-rate simulation is the only thing that can cause mid-step events.
I'd go further and say that variable-frame-rate simulation is a terrible thing that you should avoid at all costs. It requires significantly more expensive integrators to run stably, and it will still suffer from tunneling and other inaccuracy problems.
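As a small illustration of the stability point: with semi-implicit Euler and a fixed dt, per-step travel is bounded and behavior is reproducible, while a dt spike breaks both. A sketch, with illustrative names:

typedef struct { float pos, vel; } Body;

/* Semi-implicit (symplectic) Euler: update velocity first, then position. */
void integrate(Body *b, float accel, float dt)
{
    b->vel += accel * dt;
    b->pos += b->vel * dt;  /* with a fixed dt, the maximum distance moved per
                               step is predictable, so collision checks can be
                               tuned to it; a variable dt spike lets fast
                               objects tunnel through thin geometry */
}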
enum Bool { True, False, FileNotFound };
Use Linux, recode your server a couple of times, and forget your problems :)
You don't really need Windows and its IOCP; you just need epoll/select and non-blocking sockets, which are way faster to code.
Sarcasm is a form of art.
That is an unreasonable solution. There is nothing wrong with using IOCP and Windows.

