Dynamic memory allocation overhead (C++)

6 comments, last by Evil Steve 16 years, 11 months ago
Hello everyone. I was designing the scene management base for a game engine, and I was relying heavily on dynamic memory allocation, even for the smallest of objects. Recently, though, I started thinking about the performance and memory overhead incurred by each allocation. In particular, I would really appreciate any info (or links to info) on how new and delete (or malloc and free) work - what exactly happens on each allocation, and how the memory manager keeps track of everything. Thanks in advance.
Well, the short answer is that it's expensive. Most games spend a lot of time developing custom memory allocators to avoid this cost. The general algorithm is:

1) allocate a huge block of memory at startup
2) use placement new to construct game objects directly into that memory (you dodge the actual allocation code but still get constructors called and whatnot)
3) free the memory on game shutdown

The reason you do this is to (1) avoid the OS overhead of allocating memory and (2) keep memory unfragmented.
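To make that concrete, here's a minimal sketch of the idea (the Arena and Enemy names are just for illustration, and error handling and alignment are simplified): one big malloc at startup, a bump pointer handing out storage, placement new to run constructors, and a single free at shutdown.

#include <cstddef>
#include <cstdlib>
#include <new>

// Minimal bump-pointer arena: one big allocation up front, placement new for objects.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), used_(0) {}
    ~Arena() { std::free(base_); }                   // step 3: one free at shutdown

    // Hand out raw storage; align must be a power of two (kept crude for the sketch).
    void* allocate(std::size_t bytes, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (used_ + align - 1) & ~(align - 1);
        if (aligned + bytes > size_) return nullptr; // out of arena space
        used_ = aligned + bytes;
        return base_ + aligned;
    }

private:
    char* base_;
    std::size_t size_;
    std::size_t used_;
};

struct Enemy { int hp = 100; };

int main() {
    Arena arena(1 << 20);                            // step 1: grab 1 MB at startup
    void* mem = arena.allocate(sizeof(Enemy), alignof(Enemy));
    Enemy* e = new (mem) Enemy();                    // step 2: placement new, constructor still runs
    e->~Enemy();                                     // destructors are now your responsibility
}                                                    // step 3: Arena destructor frees the whole block

Real engine allocators add out-of-memory handling, thread safety, and ways to reset or sub-divide the arena, but the core trick is the same.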

This Wikipedia article probably has some good nuts & bolts info:
http://en.wikipedia.org/wiki/Dynamic_memory_allocation

-me
Doug Lea's malloc has a rather nice description of its algorithms, and you can poke around the code too if you're feeling adventurous. Small allocations are often handled differently from large allocations in order to reduce the overhead (such as allocating fixed-size blocks, which may leave some space unused but need less bookkeeping information per block).
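As a toy illustration of that small/large split (the sizes, the static buffer, and the missing free path are simplifications of mine, not how dlmalloc actually works): requests up to 64 bytes come out of fixed 64-byte slots with almost no per-block bookkeeping, and anything larger falls through to the general allocator.

#include <cstddef>
#include <cstdlib>

namespace {
    constexpr std::size_t kSlotSize  = 64;            // fixed block size for "small" requests
    constexpr std::size_t kSlotCount = 1024;
    alignas(std::max_align_t) char g_slots[kSlotSize * kSlotCount];
    std::size_t g_next = 0;                           // next unused slot index
}

void* tiny_alloc(std::size_t bytes) {
    if (bytes <= kSlotSize && g_next < kSlotCount)
        return g_slots + kSlotSize * g_next++;        // fixed block: may waste padding, no per-block header
    return std::malloc(bytes);                        // large (or overflow) requests take the general path
}

int main() {
    void* a = tiny_alloc(16);                         // served from a fixed slot
    void* b = tiny_alloc(4096);                       // too big, handed to malloc
    std::free(b);
    (void)a;                                          // slots are never individually freed in this sketch
}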
Read the chapter about run-time environments (esp. storage allocation) in this book:
http://en.wikipedia.org/wiki/Compilers:_Principles%2C_Techniques%2C_and_Tools
If you are not up to the task of writing your own memory manager, a quick solution might be to try Boost pools. From my experience, allocations via a Boost pool may end up as much as 10 times faster than ordinary malloc, but they come with their own set of disadvantages. The pools only allow fixed-size allocations, and they are fastest for the smallest data sizes (PODs). Another great thing about Boost pools is that they can be used as allocators for STL containers.
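For what it's worth, basic usage looks roughly like this (assuming the Boost.Pool headers are available; Particle is just an illustrative type): object_pool for fixed-size allocations of one type, and fast_pool_allocator plugged into a node-based STL container.

#include <boost/pool/object_pool.hpp>
#include <boost/pool/pool_alloc.hpp>
#include <list>

struct Particle { float x = 0, y = 0, vx = 0, vy = 0; };

int main() {
    // Fixed-size pool for one object type: construct/destroy instead of new/delete.
    boost::object_pool<Particle> pool;
    Particle* p = pool.construct();
    pool.destroy(p);                 // optional; the pool releases everything when it is destroyed

    // Pool-backed allocator plugged into an STL container.
    std::list<int, boost::fast_pool_allocator<int> > numbers;
    numbers.push_back(42);
}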
Quote:Original post by OrangyTang
Doug Lea's malloc has a rather nice description of its algorithms, and you can poke around the code too if you're feeling adventurous. Small allocations are often handled differently from large allocations in order to reduce the overhead (such as allocating fixed-size blocks, which may leave some space unused but need less bookkeeping information per block).
We actually use a modified version of this at work, which I've been poking around at...
Thanks for the replies. I think I will go ahead and implement a simple memory manager. Since it will only be used inside the scene manager module, it shouldn't be very hard to implement. Evil Steve, since you didn't complain about it, I'm assuming it's doing its job efficiently then, right?
Quote:Original post by hikikomori-san
Thanks for the replies. I think I will go ahead and implement a simple memory manager. Since it will only be used inside the scene manager module, it shouldn't be very hard to implement. Evil Steve, since you didn't complain about it, I'm assuming it's doing its job efficiently then, right?
Yep, definitely. It is on a Nintendo DS though, which has much tighter memory and speed restrictions than a PC.

For something like a scene manager, I'd recommend allocating all objects of the same type (and therefore size) out of a fixed pool if possible. That way the actual memory allocation is kept to a minimum. It might also be a good idea to look into free lists if you don't know what they are.
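A bare-bones sketch of that fixed pool / free list combination might look like the following (FixedPool and SceneNode are my names, not anything standard): every slot is carved out up front, and freed slots are threaded onto an intrusive free list so create/destroy are O(1) with no heap traffic.

#include <array>
#include <cstddef>
#include <new>
#include <utility>

// Fixed pool for one object type: slots are pre-allocated, and freed slots are
// chained into an intrusive free list. The union-based type punning is a
// simplification for the sketch.
template <typename T, std::size_t Capacity>
class FixedPool {
    union Slot {
        Slot* next;                                    // link while the slot is free
        alignas(T) unsigned char storage[sizeof(T)];   // object storage while in use
    };

public:
    FixedPool() {
        for (std::size_t i = 0; i + 1 < Capacity; ++i)
            slots_[i].next = &slots_[i + 1];
        slots_[Capacity - 1].next = nullptr;
        head_ = &slots_[0];
    }

    template <typename... Args>
    T* create(Args&&... args) {
        if (!head_) return nullptr;                    // pool exhausted
        Slot* slot = head_;
        head_ = head_->next;
        return new (slot->storage) T(std::forward<Args>(args)...);
    }

    void destroy(T* obj) {
        obj->~T();
        Slot* slot = reinterpret_cast<Slot*>(obj);     // storage is at offset 0 of the slot
        slot->next = head_;                            // push the slot back on the free list
        head_ = slot;
    }

private:
    std::array<Slot, Capacity> slots_;
    Slot* head_ = nullptr;
};

struct SceneNode { int id; explicit SceneNode(int i) : id(i) {} };

int main() {
    FixedPool<SceneNode, 256> pool;                    // all slots reserved up front
    SceneNode* n = pool.create(7);
    pool.destroy(n);                                   // slot goes back on the free list
}

In practice you'd want one pool per object type in the scene manager, sized for the worst case you expect.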
