Reasons for wanting custom malloc or allocators?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
10 replies to this topic

#1 Lode   Members   -  Reputation: 982


Posted 25 April 2013 - 04:48 AM

Hello,

 

In C, users of a library are sometimes not happy when malloc, realloc and free are used to allocate the output.

 

In C++, the same applies, I guess (hence custom allocators for the STL).

 

I'm basically wondering why?

 

If it's for mobile development or other small devices: doesn't the toolchain for those devices already provide a proper malloc/realloc/free for that device?

 

If it's for a memory pool: why is it useful to route the allocations of a library you're using through your memory pool as well?

 

Apart from tiny devices and memory pools, are there other reasons I've missed for wanting something other than malloc on a desktop computer? If so, what are they?

Knowing why will make me take these usages into account better when designing an API.

 

Thanks!


Edited by Lode, 25 April 2013 - 04:48 AM.



#2 NightCreature83   Crossbones+   -  Reputation: 2823


Posted 25 April 2013 - 05:35 AM

Maybe this article will answer your questions: http://www.gamedev.net/page/resources/_/technical/general-programming/c-custom-memory-allocation-r3010. But generally you want a custom allocator because then you have control over what it does, and you can have different allocation strategies in the back end, e.g. a heap allocator versus a slot allocator. A slot allocator is far faster than a heap allocator because all the slots have the same size, so there is generally no searching for a free block big enough.
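A minimal sketch of such a slot allocator (illustrative only, not code from the linked article): because every slot is the same size, allocation is just popping a pointer off a free list, and deallocation is pushing it back.

```cpp
#include <cstddef>
#include <vector>

// Fixed-size "slot" (pool) allocator: every slot has the same size, so
// allocation is a free-list pop and deallocation a push -- no searching
// for a large-enough block as in a general-purpose heap.
class SlotAllocator {
public:
    SlotAllocator(std::size_t slotSize, std::size_t slotCount)
        : storage_(slotSize * slotCount), slotSize_(slotSize) {
        // Thread every slot onto the free list up front.
        for (std::size_t i = 0; i < slotCount; ++i)
            freeList_.push_back(storage_.data() + i * slotSize_);
    }
    void* allocate() {
        if (freeList_.empty()) return nullptr;  // pool exhausted
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }
    void deallocate(void* p) { freeList_.push_back(static_cast<char*>(p)); }
private:
    std::vector<char> storage_;
    std::size_t slotSize_;
    std::vector<char*> freeList_;
};
```

A freed slot is immediately reusable, and the whole pool is one contiguous block, which also helps with fragmentation.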


Edited by NightCreature83, 25 April 2013 - 05:36 AM.

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, Mad Max

#3 Hodgman   Moderators   -  Reputation: 30351


Posted 25 April 2013 - 06:51 AM

The most important reason for me is debugging and profiling. If my code (and the libraries that I use) all allocate memory through an interface that I control, then I can generate all sorts of information during development. I can track down any memory leaks, visualize fragmentation, track memory usage per system to find waste, spot bits of code that use a lot of memory allocations so I can go and optimize them to use their own pool, etc, etc...



#4 Lode   Members   -  Reputation: 982


Posted 25 April 2013 - 07:11 AM

Which debugging tools do you use?

For example, Valgrind is able to output all kinds of useful information while you use just malloc. What is the difference?

Thanks :)



#5 Hodgman   Moderators   -  Reputation: 30351


Posted 25 April 2013 - 07:59 AM

Every big commercial game engine will have its own tools for this kind of thing, so when they choose to use a new library, they will want that library to provide a way to 'hook' all its memory allocations.

e.g. if the library uses void* MyMalloc( size_t size ) { return malloc(size); } instead of malloc, then the engine authors can just change that one function (and the corresponding MyFree) to convert the library over to using the engine's memory system.
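That hooking pattern might be sketched like this (all names here, such as lib_set_allocator, are hypothetical, not from any particular library):

```cpp
#include <cstdlib>

// Library side: route every allocation through replaceable function
// pointers instead of calling malloc/free directly.
static void* (*g_alloc)(std::size_t) = std::malloc;
static void  (*g_free)(void*)        = std::free;

void lib_set_allocator(void* (*a)(std::size_t), void (*f)(void*)) {
    g_alloc = a;
    g_free  = f;
}

// All allocations inside the library go through these:
void* lib_malloc(std::size_t n) { return g_alloc(n); }
void  lib_free(void* p)         { g_free(p); }

// Engine side: a trivial counting allocator to demonstrate the hook.
static std::size_t g_allocCount = 0;
void* countingAlloc(std::size_t n) { ++g_allocCount; return std::malloc(n); }
void  countingFree(void* p)        { std::free(p); }
```

The engine calls lib_set_allocator once at startup, and from then on every allocation the library makes is visible to (and controlled by) the engine's memory system.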

It's the same with a few other system features -- you'd also like to be able to hook any debug output (log/print statements) generated by the library, and if it makes use of worker threads, you'd like to have it use your existing worker/job/task system rather than have it spawn its own threads.

 

The features of Valgrind/memcheck are pretty important -- leaks, double deletes, invalid accesses, etc. -- so these features will certainly be present in the engine's memory tools. There'll probably also be a live reporting feature, where a log of memory events can be streamed out of the game over a socket/etc. to a separate analysis program, so you can inspect the game's memory as it's running. This should show all current allocations (and maybe a history, so you can step back in time to view past allocations), broken down by module (which will usually be tagged by programmers -- e.g. them saying that certain allocations go in the "physics" category). E.g. if you get a crash where something has accessed memory that's already been freed, how useful would it be to quickly rewind the memory history and see what objects have been allocated at that address most recently, and who allocated them?

 

When developing for consoles, you might just have, say, just 200MiB of RAM to play with and no virtual memory, so fragmentation becomes a very big concern.

e.g. After going in and out of your game/main menu 100 times, maybe you've got 100MiB of memory free, but it's so damn fragmented that the largest contiguous unallocated block is only 1MiB!

To debug these issues, you want a visual display of the address space so you can see where in memory each module is allocating its memory.

For performance analysis, you'll want per-frame, per-module stats like number of allocations, time in malloc, size distributions, lifespan distributions, etc. For keeping memory usage low, you'll want reports on current, average and min/max memory usage per system. Ideally you'd be able to easily track how these numbers change over the weeks, so you get a heads-up when someone implements a memory-hogging new system.

If possible, it would also be able to traverse an ownership tree between allocations to help spot memory that isn't needed by the game but is still allocated anyway, like assets that are loaded but no longer required at this point in the game.

 

The memory tools in these engines will be quite complex, and most likely only available to and used by big console game developers.

Apart from the ones that come built into the big engines, the only stand-alone tool/middleware combo that I know of in this niche is Elephant/Goldfish.


Edited by Hodgman, 25 April 2013 - 08:04 AM.


#6 BGB   Crossbones+   -  Reputation: 1554


Posted 25 April 2013 - 09:15 AM

FWIW:

it is also possible to gain additional features.

 

for example:

being able to tag the types for memory objects (actually fairly useful);

being able to fetch the base or size of a memory object if given a pointer (can also be useful);

being able to reduce the cost of, say, small or fixed-size allocations (many malloc implementations actually deal pretty poorly with small allocations, and oddly also with many larger allocations as well, ...);

potentially also, features to aid with things like detecting leaks and array overruns;

ability to fine-tune performance;

potentially, having features like garbage collection (this is a bit more controversial, but having a GC doesn't necessarily mean adopting a Java-like memory-use model, and the GC can partly double as a leak detector: "hey, if I am reclaiming these objects, you probably leaked them, here is where they came from!". Granted, a person could use it purely as a leak detector, but the "hard part" of both is basically the same);

...
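A couple of the features above (tagging allocations and fetching the size of a memory object given its pointer) are commonly implemented by prefixing each block with a small header. A minimal sketch, with all names invented for illustration:

```cpp
#include <cstdlib>
#include <map>
#include <string>

// Each allocation is prefixed with a header recording its size and a
// tag, so the allocator can attribute live usage per category and
// answer "how big is the object at this pointer?".
static std::map<std::string, std::size_t> g_usage;

struct Header { std::size_t size; const char* tag; };

void* taggedAlloc(std::size_t n, const char* tag) {
    Header* h = static_cast<Header*>(std::malloc(sizeof(Header) + n));
    if (!h) return nullptr;
    h->size = n;
    h->tag  = tag;
    g_usage[tag] += n;
    return h + 1;  // user memory starts just past the header
}

// Fetch the size of an allocation given only the user pointer.
std::size_t taggedSize(void* p) {
    return (static_cast<Header*>(p) - 1)->size;
}

void taggedFree(void* p) {
    if (!p) return;
    Header* h = static_cast<Header*>(p) - 1;
    g_usage[h->tag] -= h->size;
    std::free(h);
}
```

The same header trick underlies the per-module reporting Hodgman described: the tag is usually a category string or enum supplied at the allocation site.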

 

now vs Valgrind / etc:

Valgrind is Linux-specific IIRC (so it is less useful for Windows developers);

AFAICT, the available leak detectors for Windows tend to cost money;

...



#7 ApochPiQ   Moderators   -  Reputation: 15693


Posted 25 April 2013 - 11:46 AM

Another reason is pure performance.

For example, if you know that you will allocate 1000 objects in a row and then delete them all at once, you can use a one-way allocator which creates space for all 1000 objects and then just constructs/destructs them as needed and blows away the entire allocation in one shot. This can be far faster than doing 1000 general-purpose allocations and helps alleviate things like fragmentation to boot.
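A one-way (linear) allocator along those lines might look like this sketch (illustrative only, with simplified alignment handling):

```cpp
#include <cstddef>
#include <vector>

// One-way (linear/arena) allocator: allocation is a pointer bump, and
// the whole region is released in one shot -- there is no per-object
// free, which is exactly the "blow it all away at once" pattern.
class LinearAllocator {
public:
    explicit LinearAllocator(std::size_t bytes) : buffer_(bytes), offset_(0) {}
    void* allocate(std::size_t n) {
        // Round up to pointer-size alignment (simplified).
        n = (n + sizeof(void*) - 1) & ~(sizeof(void*) - 1);
        if (offset_ + n > buffer_.size()) return nullptr;  // arena full
        void* p = buffer_.data() + offset_;
        offset_ += n;
        return p;
    }
    void reset() { offset_ = 0; }  // release everything in one shot
    std::size_t used() const { return offset_; }
private:
    std::vector<char> buffer_;
    std::size_t offset_;
};
```

Note that reset() does not run destructors; in the 1000-objects scenario above you'd destruct the objects yourself (or only use this for trivially-destructible data) before blowing the arena away.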

Stack allocators can also be useful (where you can only free the most recently allocated object) and so on.

#8 Zipster   Crossbones+   -  Reputation: 652


Posted 25 April 2013 - 11:51 AM

Another reason is pure performance.

For example, if you know that you will allocate 1000 objects in a row and then delete them all at once, you can use a one-way allocator which creates space for all 1000 objects and then just constructs/destructs them as needed and blows away the entire allocation in one shot. This can be far faster than doing 1000 general-purpose allocations and helps alleviate things like fragmentation to boot.

Stack allocators can also be useful (where you can only free the most recently allocated object) and so on.

 

When I worked on console games, we would allocate all the memory upfront from the system and use our own allocator to divvy it out. It was a lot faster than going to the system for every allocation, and allowed us to place stricter restrictions on memory usage, i.e. this level can only use 200MB, and if it goes above that, crash, dump, and trace where all the memory is going.
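That hard-budget idea could be sketched like this (a hypothetical wrapper, not actual engine code; a real version would store the block size in a header rather than require the caller to pass it back):

```cpp
#include <cstddef>
#include <cstdlib>

// All allocations pass through a wrapper that tracks total bytes
// against a fixed cap (e.g. a per-level budget), so an over-budget
// system fails loudly during development instead of silently growing.
class BudgetedHeap {
public:
    explicit BudgetedHeap(std::size_t budget) : budget_(budget), used_(0) {}
    void* allocate(std::size_t n) {
        if (used_ + n > budget_) return nullptr;  // over budget: fail loudly
        void* p = std::malloc(n);
        if (p) used_ += n;
        return p;
    }
    // Simplification: the caller passes the original size back.
    void deallocate(void* p, std::size_t n) {
        std::free(p);
        used_ -= n;
    }
    std::size_t used() const { return used_; }
private:
    std::size_t budget_, used_;
};
```

In a shipping engine the failure path would dump the per-module usage report described earlier instead of just returning null.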



#9 Hodgman   Moderators   -  Reputation: 30351


Posted 25 April 2013 - 08:58 PM

Stack allocators can also be useful (where you can only free the most recently allocated object) and so on.

Yeah, stack allocators are way more useful than they first seem. In the engine I'm working on at the moment, malloc/new are treated like the global variable that they are, which means their use is avoided as much as possible. If a function needs to allocate memory, it will generally have an allocator passed in as an argument. The vast, vast majority of the time, a scope/stack allocator is used instead of a malloc-esque heap allocator.

Ranked by usage in the engine, the most used is probably Scope/Stack, then Pool, then Stack (without the scope layer), then Malloc. I can literally count malloc usage on one hand.

I actually find scope/stack allocation easier to use and maintain than shared_ptr/new allocation. Leaks are impossible because the scope is a parameter to the allocation call (I use a macro eiNew(scope, Type)(parameters)); there's no clean-up code because the scopes use the RAII pattern; and reasoning about the scope works exactly the same way as reasoning about built-in language scopes, like local variables, etc. It just extends this familiar concept to dynamic allocations.
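The eiNew macro and the engine internals here are Hodgman's own; but the scope-over-stack idea can be sketched roughly like this (all names hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Plain stack allocator: bump-allocate, and rewind to a saved mark.
class StackAllocator {
public:
    explicit StackAllocator(std::size_t bytes) : buffer_(bytes), top_(0) {}
    void* allocate(std::size_t n) {
        if (top_ + n > buffer_.size()) return nullptr;
        void* p = buffer_.data() + top_;
        top_ += n;
        return p;
    }
    std::size_t mark() const { return top_; }
    void rewind(std::size_t m) { top_ = m; }
private:
    std::vector<char> buffer_;
    std::size_t top_;
};

// RAII scope layer: remembers the stack's mark on construction and
// rewinds to it on destruction, so everything allocated through the
// scope is released automatically when the scope ends -- just like a
// local variable's lifetime, but for dynamic allocations.
class Scope {
public:
    explicit Scope(StackAllocator& a) : alloc_(a), mark_(a.mark()) {}
    ~Scope() { alloc_.rewind(mark_); }
    void* allocate(std::size_t n) { return alloc_.allocate(n); }
private:
    StackAllocator& alloc_;
    std::size_t mark_;
};
```

A real version would also run destructors for objects created in the scope (e.g. by keeping a list of finalizers); this sketch only handles raw memory.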

 

For me, replacing malloc with a different version (dlmalloc, tcmalloc, etc) is nice to be able to do, but isn't that much of a big deal -- when I see "custom memory management", I don't think of "custom written malloc", I think of completely different paradigms for allocation, like pools and stacks ;)

 

When I worked on console games, we would allocate all the memory upfront from the system and use our own allocator to divvy it out. It was a lot faster than going to the system for every allocation and allowed us to place more strict restrictions on memory usage, i.e. this level can only use 200MB, and if it goes above that then crash dump and trace where all the memory is going.

Yeah, this is very common. When I worked on an adventure/platforming game, from the main allocation we'd allocate three large contiguous chunks for the level to use. Two would be in use at once, and a third would be streaming in the background. Each geographical 'chunk' of the level had to fit within this hard memory limit, but in return, managing the streaming of chunks was dead simple. When a chunk was no longer required, we'd just let it leak (remove all pointers to its member structures), and then start streaming the next chunk over the top of it. There was no real memory allocation going on.

#10 Ed Welch   Members   -  Reputation: 478


Posted 26 April 2013 - 12:57 AM

It depends on the system. For iOS I think the system memory allocation probably does a better job than anything that you can write yourself.



#11 EmeryBerger   Members   -  Reputation: 124


Posted 26 April 2013 - 07:43 AM

I would strongly encourage anyone on this thread to read the paper I wrote on this topic over a decade ago. It just won a Most Influential Paper award but its influence has clearly not spread to this domain...yet.

 

TL;DR -- a good malloc is often as fast as or faster than your custom allocator because it does the same tricks; "region" allocators can be faster, but can leak tons of memory.

 

Title: Reconsidering Custom Memory Allocation (ACM link, direct PDF link, PowerPoint talk slides), OOPSLA 2002. I've attached the slides in PPT and PDF formats; I highly recommend looking at the PPT version, since it has animations that do not translate well to PDF.

 

Abstract:

 

Programmers hoping to achieve performance improvements often use custom memory allocators. This in-depth study examines eight applications that use custom allocators. Surprisingly, for six of these applications, a state-of-the-art general-purpose allocator (the Lea allocator) performs as well as or better than the custom allocators. The two exceptions use regions, which deliver higher performance (improvements of up to 44%). Regions also reduce programmer burden and eliminate a source of memory leaks. However, we show that the inability of programmers to free individual objects within regions can lead to a substantial increase in memory consumption. Worse, this limitation precludes the use of regions for common programming idioms, reducing their usefulness. We present a generalization of general-purpose and region-based allocators that we call reaps. Reaps are a combination of regions and heaps, providing a full range of region semantics with the addition of individual object deletion. We show that our implementation of reaps provides high performance, outperforming other allocators with region-like semantics. We then use a case study to demonstrate the space advantages and software engineering benefits of reaps in practice. Our results indicate that programmers needing fast regions should use reaps, and that most programmers considering custom allocators should instead use the Lea allocator.

 

StackOverflow discussion here.

 

Attached Files





