do most games do a lot of dynamic memory allocation?


There is a difference between naively fragmenting OS-requested process memory over time, and being an obsessed, happy programmer using the "allocate anything at any time" facility that C++ came to offer. I as a programmer would never give up the feature of requesting memory at runtime, but as I grew bigger and older, I realized one should be sparing with it.



I guess a slightly more interesting question might be: "is language support for dynamic memory ownership necessary in a language aimed at professional game programmers".

following along the lines of the video that prompted my O.P., that would be the next logical question.

like i said, i was watching more for the sprigs of gamedev insight than for the language he's making. as i recall, i only made it through half of the second demo video and stopped watching - the insights were becoming fewer and it was all about the language and compiler. i might watch the rest at a later date; some of the features seem nice. when calculating frame times for fix-your-timestep and round-robin game loops, i was thinking the compile-time execution feature would be handy to work up a "spreadsheet" of sorts to crunch the numbers for me.
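in plain c++ you can get part of the way there today with constexpr. a minimal sketch of that frame-time "spreadsheet" idea - the target fps and the subsystem percentages below are made-up numbers, just for illustration:

    #include <cstdio>

    // hypothetical per-frame time budget, computed entirely at compile time
    constexpr double kTargetFps = 60.0;
    constexpr double kFrameMs   = 1000.0 / kTargetFps;  // ~16.67 ms per frame

    // made-up subsystem shares of the frame, for illustration only
    constexpr double kRenderMs  = kFrameMs * 0.50;
    constexpr double kAiMs      = kFrameMs * 0.25;
    constexpr double kPhysicsMs = kFrameMs * 0.15;
    constexpr double kSlackMs   = kFrameMs - kRenderMs - kAiMs - kPhysicsMs;

    // the compiler rejects the build if the budgets overrun the frame
    static_assert(kSlackMs >= 0.0, "subsystem budgets exceed the frame");

    int main() {
        std::printf("frame %.2f ms: render %.2f, ai %.2f, physics %.2f, slack %.2f\n",
                    kFrameMs, kRenderMs, kAiMs, kPhysicsMs, kSlackMs);
    }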

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php


There is also a deep distrust of template code, which may or may not be 100% justified, again depending on the situation.

interesting. i recently experienced an unexpected, reproducible crash in caveman, and traced it back to the generic iterator. upon inspection there seemed to be no reason why it should fail, but i didn't explore further; i just replaced it with a good old-fashioned for loop.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Rather than engaging directly in the valuable discussion here, I'm going to share a couple vignettes from 'in the trenches', so to speak.

The last time I worked in AAA, that engine was using a lot of modern C++ features and was, by all accounts, cutting edge code for 2007. Cutting edge enough to break some compilers, in fact. This meant a lot of STL containers, a lot of allocation, entity component systems, all the fun stuff except for exceptions. This was a 360/PS3 title and did run on PC, though that was not the intended target.

In the last couple of months, optimization work began in earnest. Allocation was a significant problem. First up? Tons of traffic in std::vector. A lot of the usual suspects - improper reservation sizes, unnecessary temporaries at function level, etc. Nothing terribly interesting, but a lot to go through in aggregate. Eventually std::vector was dropped in favor of a custom vector with broadly similar behavior but more tightly specified, pooled, and instrumented. After that and a few other pieces of low-hanging fruit, things were much better but not perfect. I think a lot of small allocations were cleaned up to deal with fragmentation issues, and ultimately the memory allocator itself was replaced with a well-known open-source third-party allocator. I don't remember its specific advantages, but it got us to shipping without fragmentation/OOM issues.
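For anyone who hasn't chased this class of problem, the individual fixes were mostly mundane. A minimal sketch of the reservation/temporary pattern - the function and names here are invented for illustration, not code from that engine:

    #include <string>
    #include <vector>

    // Before: grows the vector one push_back at a time (repeated reallocation)
    // and builds a brand new temporary vector on every call.
    std::vector<std::string> namesOfVisible_slow(const std::vector<int>& ids) {
        std::vector<std::string> out;
        for (int id : ids)
            out.push_back("entity_" + std::to_string(id));
        return out;
    }

    // After: reserve once up front, and let the caller reuse one buffer
    // across frames instead of allocating a temporary per call.
    void namesOfVisible_fast(const std::vector<int>& ids,
                             std::vector<std::string>& out) {
        out.clear();             // keeps the capacity from previous frames
        out.reserve(ids.size()); // one correctly sized reservation
        for (int id : ids)
            out.emplace_back("entity_" + std::to_string(id));
    }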

I heard about another game in a relatively similar timeframe that used entirely static allocation of everything. It may have been Halo. The idea is not dissimilar to Norman's code, though obviously much more complex in practice. The usual issues arise - the game misbehaves unceremoniously when designers exceed hard-coded limits, etc. But I have one simple point to make.

Let's assume you have a hard limit of 10 MB of memory for your new game, Aardvark Crossing. This memory has to be shared between Aardvarks and Zebras. Levels 1, 2, and 3 feature 4 MB of Aardvarks and 3 MB of Zebras. You set the pools at 4 and 3, and leave the rest open for later. Now your designer adds level 4 with way more Aardvarks - 8 MB of them. Problem! But the designer won't relent on the Aardvarks, so now you resize the pools, and the Zebras have to be cut down to 2 MB in all the other levels.

Things really become a problem, though, when you hit level 5. See, every fifth level is the Zebra Bonus Level. It's 9 MB of just Zebras! Or it would've been, if your engine could actually be reconfigured that way in the first place. There's no way you can make the Zebras fit, and there's no way to cut back the Aardvarks in a game called Aardvark Crossing. So now you have to choose your pool sizes dynamically when each level loads. One thing after another falls victim to the dynamic allocation virus, and by the end of it every type of object has its own pool allocator and you're juggling two dozen pools.
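To sketch where that leaves you (everything here is hypothetical - the Pool, the budgets, the types), the pool capacities end up being read from per-level data at load time instead of being fixed at compile time:

    #include <cstddef>
    #include <vector>

    // hypothetical fixed-capacity pool: the cap is chosen at level load
    template <typename T>
    class Pool {
    public:
        explicit Pool(std::size_t capacity) : cap_(capacity) { storage_.reserve(capacity); }
        T* create() {
            if (storage_.size() == cap_) return nullptr; // this level's budget exhausted
            storage_.emplace_back();
            return &storage_.back(); // stable: the vector never grows past cap_
        }
    private:
        std::size_t cap_;
        std::vector<T> storage_;
    };

    struct Aardvark { /* ... */ };
    struct Zebra    { /* ... */ };

    struct LevelBudget { std::size_t aardvarkBytes, zebraBytes; }; // read from level data

    void loadLevel(const LevelBudget& budget) {
        Pool<Aardvark> aardvarks(budget.aardvarkBytes / sizeof(Aardvark)); // 0 on level 5
        Pool<Zebra>    zebras(budget.zebraBytes / sizeof(Zebra));          // 9 MB on level 5
        // ... spawn entities out of the pools ...
    }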

The truth is that dynamic allocation was not invented out of laziness, and static allocation is not foolproof. My personal opinion is that it's necessary to have a mix of both, and that it's even more important to be able to track everything.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.


interesting. i recently experienced an unexpected, reproducible crash in caveman, and traced it back to the generic iterator. upon inspection there seemed to be no reason why it should fail, but i didn't explore further; i just replaced it with a good old-fashioned for loop.

There are almost certainly no bugs in the iterator itself, so it's likely it was exposing a bug in your code somewhere. Replacing it with a for loop covered up the bug, but you don't know why. Now you likely still have a bug in your code that is causing some harder-to-detect (i.e. not a crash) problem. If I were you, I would have found the source of the problem instead.

Sounds very similar to the philosophy you espoused in the original post:


if you don't use memory pointers, you can't get memory leaks, or dereference nil pointers. so all those headaches go away.

Those headaches go away, and are replaced with harder-to-debug headaches like updating the wrong or invalid entity. If you have memory leaks or are referencing nil pointers, you have bugs in tracking your game objects. Using static allocations instead doesn't make the bugs go away.
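To make that concrete, here is a contrived sketch of the failure mode (everything in it is invented): with static arrays and slot indices, the dangling pointer doesn't disappear, it becomes a stale index that silently updates the wrong entity:

    #include <cstdio>

    // hypothetical static entity table: no pointers, no new/delete
    struct Entity { bool active; int hitpoints; };
    Entity entities[100];

    int main() {
        entities[7] = { true, 50 };
        int target = 7;              // some system remembers the slot index

        entities[7].active = false;  // the entity dies and its slot is freed...
        entities[7] = { true, 999 }; // ...then the slot is reused for a new entity

        // the stale index still "works": no crash, no nil pointer, just a
        // silent update of the wrong entity - harder to catch than a segfault
        entities[target].hitpoints -= 10;
        std::printf("hp = %d\n", entities[target].hitpoints); // 989, wrong entity
    }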


The last time I worked in AAA, that engine was using a lot of modern C++ features and was, by all accounts, cutting edge code for 2007. Cutting edge enough to break some compilers, in fact. This meant a lot of STL containers, a lot of allocation, entity component systems, all the fun stuff except for exceptions. This was a 360/PS3 title and did run on PC, though that was not the intended target.

In the last couple of months, optimization work began in earnest. Allocation was a significant problem. First up? Tons of traffic in std::vector. A lot of the usual suspects - improper reservation sizes, unnecessary temporaries at function level, etc. Nothing terribly interesting, but a lot to go through in aggregate. Eventually std::vector was dropped in favor of a custom vector with broadly similar behavior but more tightly specified, pooled, and instrumented. After that and a few other pieces of low-hanging fruit, things were much better but not perfect. I think a lot of small allocations were cleaned up to deal with fragmentation issues, and ultimately the memory allocator itself was replaced with a well-known open-source third-party allocator. I don't remember its specific advantages, but it got us to shipping without fragmentation/OOM issues.

So, I think that's not terribly uncommon, and I wonder, Promit, how you would characterize the changeover that occurred during those last months. Was it a relatively smooth transition, or was it hard-won? If it was not unnecessarily difficult, I daresay the scenario you describe was nearly ideal. Unless you know for certain that it won't meet your needs, you *should* reach for std::vector first -- even if it must be replaced with something more specific and optimized at the end to push you over 60 fps, the fact that it was immediately available, familiar, and battle-tested all those months ago has been paying productivity dividends all that time, while the final shape of the game was still evolving.

Had the project started with what they ended up with, it's possible (I daresay likely, depending on the experience of the team) that it would have actually taken *longer* to complete the project, and it would have cost whatever additional flexibility and peace of mind std::vector provided. Early on, we should absolutely prefer iteration time over run time. It doesn't reflect a failure of std::vector that it couldn't carry you over the finish line; it just reflects that there might be different needs and priorities during development than at its end.

It's good to remember that things like std::vector are firstly correct in the general case, and secondly fast in the general case. This is an axiom such a tool needs to follow: it cannot tolerate incorrectness in general that your specific system may be able to deal with, and it cannot know how to exploit your particular requirements for best performance. It's the common denominator, but it's the fastest common denominator around -- it turns out that ends up being good enough a lot of the time. A replacement that's specific to your requirements should indeed be faster, not because std::vector is stupid, but because a specific solution can exploit what it knows of your requirements and tolerances; std::vector isn't "slow", it just does more work behind the scenes so that it's more broadly useful.

And so it goes for allocations or any other generally-useful tool in the chest.
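As one concrete instance of exploiting what you know: if a container provably never holds more than N elements, a fixed-capacity vector can keep its storage inline and never touch the heap at all. A minimal, hypothetical sketch (production code would also want std::launder, move support, and so on):

    #include <cassert>
    #include <cstddef>
    #include <new>

    // minimal fixed-capacity vector: storage lives inline in the object, so
    // push_back never allocates - the "requirement" exploited is the hard cap N
    template <typename T, std::size_t N>
    class FixedVector {
    public:
        void push_back(const T& value) {
            assert(size_ < N && "FixedVector capacity exceeded");
            new (storage_ + size_ * sizeof(T)) T(value); // placement-new into the next slot
            ++size_;
        }
        T& operator[](std::size_t i) { return *reinterpret_cast<T*>(storage_ + i * sizeof(T)); }
        std::size_t size() const { return size_; }
        ~FixedVector() {
            for (std::size_t i = 0; i != size_; ++i) (*this)[i].~T(); // destroy in place
        }
    private:
        alignas(T) unsigned char storage_[N * sizeof(T)];
        std::size_t size_ = 0;
    };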

[Edit] This post is not aimed at Promit, other than that I am genuinely interested in what he observed during the transition. My response in general is meant for everyone to consider if they like. But it's dragging off-topic, so if anyone would like to discuss further, feel free to PM me or start a new thread.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Something I've done in my latest project is to typedef or wrap the STL containers I use, so that if I can later prove to myself that their general nature is causing issues that a more specialised custom container could improve upon, it will be trivial to swap them out.

I find it highly unlikely this will happen, but the points above about the general vs. specific case are well made. I'd rather develop with the tried-and-tested versions and wait until profiling proves I could gain from my own version; a custom std::vector is non-trivial, in my experience.
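The wrapping itself can be as light as an alias template. A minimal sketch, with the eventual replacement left entirely hypothetical:

    #include <vector>

    // game_containers.h (hypothetical): the one place the container choice lives.
    // Ship with the tried-and-tested version...
    template <typename T>
    using GameVector = std::vector<T>;

    // ...and if profiling ever justifies it, change only this alias, e.g.
    //   template <typename T>
    //   using GameVector = MySpecialisedVector<T>;  // hypothetical replacement

    // call sites never name std::vector directly:
    GameVector<int> frameTimes;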


There are almost certainly no bugs in the iterator itself,

it wasn't a lib function, it was a quick-and-dirty find-first / find-next type thing. it may not have been the iterator; it may have been me not checking return codes correctly. either way, i replaced the more complex buggy code with simpler bug-free code.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Simpler code is usually better. However, I personally am loath to attempt to "fix" code without a thorough understanding of the bug I am trying to address. In an unmanaged language, a bug can surface in an area unrelated to its cause, so simply rewriting the "affected" code can change the behaviour without addressing the underlying issue.

To provide another real-world example: I've worked on a variety of large console titles. One engine in particular did use fixed-size containers for pretty much everything. Of course, that leads to some of the problems being discussed, but the solution was to use worst-case blocks in the pools for every possible type. (Among other things, the engine used a pretty deep entity hierarchy.) Thus, there was a container of N blocks of memory, each large enough to hold the largest entity, but with factory support for instantiating any simpler type in that same space.

Wasteful? Yes. But the original intent was to prevent memory fragmentation (at all costs), particularly on limited-memory platforms like the PS2. We're still using code derived from that engine, but we've since moved to more dynamic allocation - the maintenance of magic numbers that thwart designers and artists was just too painful.
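For anyone who hasn't seen the worst-case-block scheme, a hedged sketch of how I understand it (the sizes and types here are invented): every slot is as large as the biggest entity, and a factory placement-news any subtype into a free slot:

    #include <cstddef>
    #include <new>

    struct Entity  { virtual ~Entity() = default; };
    struct Crate   : Entity { /* small */ };
    struct Soldier : Entity { char state[512]; }; // the worst case

    constexpr std::size_t kBlockSize  = sizeof(Soldier); // every slot fits the largest type
    constexpr std::size_t kBlockCount = 256;             // the magic number designers fight

    alignas(std::max_align_t) unsigned char g_pool[kBlockCount][kBlockSize];
    bool g_used[kBlockCount] = {};

    // factory: instantiate any entity subtype into a worst-case-sized slot
    template <typename T>
    T* spawn() {
        static_assert(sizeof(T) <= kBlockSize, "entity larger than a pool block");
        for (std::size_t i = 0; i != kBlockCount; ++i) {
            if (!g_used[i]) {
                g_used[i] = true;
                return new (g_pool[i]) T(); // no heap traffic, no fragmentation
            }
        }
        return nullptr; // pool exhausted: the hard-coded-limit failure case
    }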

