Locking functions to timers

Started by
13 comments, last by Hnefi 16 years, 9 months ago
If it's only a few thousand multiplications, I very seriously doubt that the game will run below 30-50 FPS anyway, and the technique becomes irrelevant. For games with many thousands of objects that move in several dimensions, you essentially add one floating point operation to every dimension of freedom of every object and every (temporal) action. That can amount to upwards of a million FLOPs - every frame. If it takes even a significant fraction of a second (such as 1/100) to perform those operations on a recommended-specs computer, the result is too much of a performance loss (and it gets progressively worse on cheaper computers).
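For a sense of scale, the per-object cost being argued about here is one multiply-add per degree of freedom per frame. A minimal sketch (the struct and function names are illustrative, not from any post in this thread):

```cpp
#include <vector>

// Hypothetical object with three positional degrees of freedom.
struct Body {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity, in units per second
};

// Variable-step (elapsed-time) update: 3 multiplies and 3 adds
// per body per frame. With 100,000 bodies that is on the order of
// 600,000 FLOPs per frame - the magnitude discussed above.
void integrate(std::vector<Body>& bodies, float dt) {
    for (Body& b : bodies) {
        b.px += b.vx * dt;
        b.py += b.vy * dt;
        b.pz += b.vz * dt;
    }
}
```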

One exception is applications that are completely GPU-bound, of course. But then, why not let the spare CPU time be used for other applications instead of worrying about whether people with under-specced computers can run the app at full speed?

Another exception, one that is legitimate, is synced real-time P2P games that, as part of the game design philosophy, should not be slowed down by one crappy client. But that's a pretty rare exception.
-------------Please rate this post if it was useful.
Quote:
If it's only a few thousand multiplications, I very seriously doubt that the game will run below 30-50 FPS anyway and the technique becomes irrelevant. For games with many thousands of objects that move in several dimensions, you essentially add one floating point operation to every dimension of freedom of every object and every (temporal) action. That can amount to upwards of a million FLOPs - every frame. If it takes even a significant fraction of a second (such as 1/100) to perform those operations, the result is too much of a performance loss.

This is completely silly. Pretty much all nontrivial games out there use some kind of elapsed-time-based mechanism involving the multiplications you're talking about. They run perfectly well.

Even if this becomes an issue (it is highly unlikely, about one step removed from completely impossible, as it is far more likely that any more complex updating mechanism would vastly overshadow the "cost" of the multiplication by the elapsed-time factor), the solution is not to make the multiplication faster but to do less multiplication (that is, update fewer objects by improving your algorithm).

Piddling about in this fashion is premature micro-optimization. The benefits in simulation scalability and stability you gain from elapsed-time-based updates far outweigh the minuscule performance impact they have. The only reasons not to use such mechanisms are if you are lazy or if they are unsuitable (as in physics, where fixed time steps help preserve the stability of the integrator).
Quote:Even if this becomes an issue (it is highly unlikely, about one step removed from completely impossible, as it is far more likely that any more complex updating mechanism would vastly overshadow the "cost" of the multiplication by the elapsed-time factor), the solution is not to make the multiplication faster but to do less multiplication (that is, update fewer objects by improving your algorithm).

Doing fewer multiplications is a separate issue. Obviously that's more important, but it doesn't have anything to do with this particular situation.

Also, the alternative to keeping track of elapsed time is not a "more complex updating mechanism", but rather the opposite. The example I showed above is much simpler than calculating a timeslice and multiplying everything by it. You do, of course, need other methods to deal with interacting objects (collision detection that is independent of the size of a tick), but you need those with your method too.
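The fixed-tick example being referred to isn't quoted in this thread chunk, but the idea can be sketched as follows (names are illustrative assumptions): each logic tick advances objects by a constant per-tick amount, so no elapsed-time multiplication is needed at all.

```cpp
#include <vector>

// Illustrative fixed-tick update: distance per tick is precomputed,
// so each tick is a plain addition with no timeslice multiply.
struct Entity {
    float pos;
    float stepPerTick;  // distance covered in one logic tick
};

void tick(std::vector<Entity>& entities) {
    for (Entity& e : entities)
        e.pos += e.stepPerTick;  // dt is implicit in the tick length
}
```

The trade-off discussed below is that without some control over how often `tick` runs, logic speed becomes tied to how fast the machine can loop.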

As far as I can see, the only advantage to calculating and multiplying timeslices is that you don't experience logic slowdowns on low-end computers. The cost is maintainability, simplicity and (arguably) performance.

Quote:Piddling about in this fashion is premature micro-optimization. The benefits in simulation scalability and stability you gain from elapsed-time-based updates far outweigh the minuscule performance impact they have. The only reasons not to use such mechanisms are if you are lazy or if they are unsuitable (as in physics, where fixed time steps help preserve the stability of the integrator).

Apart from the exception mentioned in my previous post, I don't see how you can claim to improve stability with that method. The point about scalability I will accept, though. I also don't see how my method is "piddling about" - it is the very opposite. Four function calls and you're done.
You've misunderstood the focus of my arguments. Let me try to clarify:

Quote:
Also, the alternative to keeping track of elapsed time is not a "more complex updating mechanism",

Correct, it isn't. The "more complex updating mechanisms" (I actually meant to write "machinery", which might have helped disambiguate) I was referring to were not related to timeslicing, but rather to the other operations that need to be performed on a given object to "update" it: testing and setting state, checking logic against other objects' states, collision testing and response, et cetera. Those things need to be done regardless of whether you use a fixed-step or delta-step method, and their computational complexity is much larger than a couple of trivial multiplications.

Quote:
The cost is maintainability, simplicity

Both of these are arguable as well. Maintainability depends heavily on what the game needs to do and the context it's running in (and on). The amount of code involved in (correct) fixed-step updates and variable-step updates is pretty much the same. For similar reasons, the simplicity argument only holds unarguably if you don't include any timeslice control at all, in which case the updates become entirely CPU-bound (and will run slower on slower machines and faster on faster machines, to the point of being too slow or too fast, which is obviously not desirable).
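The "timeslice control" for fixed-step updates mentioned above is commonly done with an accumulator: real elapsed time is banked each frame and logic ticks are run only when a full tick's worth has accrued. A minimal sketch (the class and member names are illustrative, not from any post here):

```cpp
// Fixed-step updates *with* timeslice control: logic speed is tied
// to real time, not to how fast the machine renders frames.
struct FixedStepper {
    double accumulator = 0.0;
    double tickLength;          // seconds of simulated time per tick

    explicit FixedStepper(double tickSeconds) : tickLength(tickSeconds) {}

    // Bank the real time one frame took; return how many whole
    // logic ticks should run this frame. Leftover time carries over.
    int consume(double frameSeconds) {
        accumulator += frameSeconds;
        int ticks = 0;
        while (accumulator >= tickLength) {
            accumulator -= tickLength;
            ++ticks;
        }
        return ticks;
    }
};
```

A fast machine gets 0 or 1 tick per frame; a slow machine catches up with several ticks per frame, which is roughly why the code volume ends up comparable to a variable-step loop.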

Quote:
I also don't see how my method is "piddling about"

Your method isn't, but the general concept of worrying prematurely about the overhead of an extra floating-point multiply per calculation is. There's nothing wrong with the example you posted (as near as I can tell); what I'm taking issue with is the implication that updating objects based on elapsed time is too slow for a large set of objects. It's done in the vast majority of games, and in every game and software product I've worked on, and the bottleneck always turned out to be elsewhere.

I don't have a problem with the update-step locking method you posted at all (what I would have a problem with is no update-step control at all, as I mentioned briefly above) - just the performance "impact" issue.

Ah, ok. I see we're not really in disagreement at all; we just focus on different things. I admit I overstated the potential performance impact, probably because it was the primary reason I started using fixed timing once upon a time.

