deffer

Techniques for freeing memory of mem-pools

Recommended Posts

deffer    754
Hi. So of course nearly everybody uses mem-pools nowadays. AFAIK, the most common scenario is:
1. Allocate a block of memory.
2. Divide it into chunks of the needed size.
3. Create a free-list.
This is done every time you run out of memory, so the number of used blocks grows as memory usage grows. The question is: what can be done to release some of the blocks when memory usage drops? Or are there other ways to efficiently implement mem-pools? Cheers. /def
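For reference, here is a minimal sketch of the scheme described above (the class and names are mine, not from any particular library): one block, fixed-size chunks, and an intrusive free-list threaded through the free chunks themselves.

```cpp
#include <cstddef>
#include <vector>

// Minimal fixed-size pool sketch: allocate one block, divide it into
// chunks, thread a free-list through the unused chunks.
class FixedPool {
public:
    FixedPool(std::size_t chunkSize, std::size_t chunkCount)
        : chunkSize_(chunkSize < sizeof(void*) ? sizeof(void*) : chunkSize),
          storage_(chunkSize_ * chunkCount)
    {
        // Build the free-list inside the chunks themselves (no extra memory).
        // In a real implementation chunkSize_ should be a multiple of
        // alignof(void*) and of the stored type's alignment.
        freeHead_ = nullptr;
        for (std::size_t i = chunkCount; i-- > 0; ) {
            void* chunk = storage_.data() + i * chunkSize_;
            *static_cast<void**>(chunk) = freeHead_;
            freeHead_ = chunk;
        }
    }

    void* allocate() {
        if (!freeHead_) return nullptr;          // pool exhausted
        void* chunk = freeHead_;
        freeHead_ = *static_cast<void**>(chunk); // pop from the free-list
        return chunk;
    }

    void release(void* chunk) {                  // push back onto the free-list
        *static_cast<void**>(chunk) = freeHead_;
        freeHead_ = chunk;
    }

private:
    std::size_t chunkSize_;
    std::vector<unsigned char> storage_;
    void* freeHead_;
};
```

Allocation and release are both O(1) pointer swaps, which is why pools are fast; note that nothing here can hand memory back to the OS while any chunk in the block is still live, which is exactly the problem raised above.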

doynax    850
A common way is to group allocations by their use, such that all objects belonging to a level can be released simultaneously when the memory pool itself is destroyed.
The only generic method I can think of is some kind of garbage collection system. But that requires support from the objects themselves, and sudden performance drops might not be acceptable. And a system that allows objects to be relocated freely might benefit more from a simple vector.

For some applications it might even be preferable to statically allocate fixed size pools and handle allocation failures gracefully. But that approach can generally be avoided on architectures with virtual memory.

deffer    754
Quote:
Original post by doynax
A common way is to group allocations by their use. Such that all objects belonging to a level can be released simultaneously when the memory pool itself is destroyed.


An object (usually a container with some "nodes") using a mem-pool often uses only a part of it. So giving it a private block would be like giving it a private cache (a private pool).

To the garbage collecting:

What I came up with is that we can "renew" the objects. That is, once per period of time (or triggered by some state), we run through all the "nodes" in the object and perform:
- get a new node from the most-used block.
- simpleCopy(new node, old node).
- release the old node.
- attach the new node in place of the old one.
This would effectively concentrate allocations in the most-used blocks, while the others become somewhat freed (and eventually releasable).
Of course, all external (from the view of the object) references to "nodes" would become invalid, e.g. iterators.

And the operation should run through, if possible, all the objects using the mem-pool.

It's costly, I know. Especially for "real-time" applications, like games.
But what about splitting it into parts? Or performing it only when the system is not busy, e.g. when the program enters a menu, or pauses?
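The renew pass described above could look roughly like this. This is only a sketch under my own assumptions: a singly linked list whose nodes would come from the pool (plain new/delete stands in for "get node from the most-used block" / "release old node" here).

```cpp
#include <cstddef>

// Hypothetical node type; a real one would carry the container's payload.
struct Node {
    int value;
    Node* next;
};

// "Renew" a list: for each node, get a fresh node, copy the payload,
// release the old node, and attach the fresh node in its place.
// Any external pointer or iterator into the old nodes becomes invalid.
Node* renew(Node* head) {
    Node* newHead = nullptr;
    Node** tail = &newHead;
    for (Node* n = head; n != nullptr; ) {
        Node* fresh = new Node{n->value, nullptr}; // get new node + simpleCopy
        *tail = fresh;                             // attach in place of the old
        tail = &fresh->next;
        Node* old = n;
        n = n->next;
        delete old;                                // release the old node
    }
    return newHead;
}
```

Splitting this into incremental parts is possible (renew k nodes per frame), but then old and new nodes coexist, so the traversal must remember where it stopped.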

/def

doynax    850
Quote:
Original post by deffer
An object (usually a container with some "nodes") using a mem-pool, often uses only a part of it. So bringing a private block for it would be like having a private cache (private pool).
That's what I meant: create a separate pool for each level. Although I could've been clearer...
Quote:
Original post by deffer
It's costly, I know. Especially for "real-time" applications, like games.
But what about splitting it into parts? Or performing it only when the system is not busy enough, e.g. when the program enters a menu, or pauses?
If you can afford to relocate objects (through a system that can fix up objects and external references), I think a vector (or a pool of contiguous objects, to avoid the reallocation overhead) would be preferable. While a garbage collection pass might be faster, having to deal with a periodic stall is probably worse.

Trap    684
You usually don't want to free parts of your mem-pools. Mem-pools are used because they are fast; if you add a complicated scheme for freeing memory, they will be slower.

In most applications there is an upper bound (not a theoretical but a practical bound) on the number of objects of a certain kind that are active at the same time. If the upper bound isn't much bigger than the average number of objects, you can usually afford the space wasted by mem-pool overhead.

deffer    754
Quote:
Original post by doynax
Quote:
Original post by deffer
An object (usually a container with some "nodes") using a mem-pool, often uses only a part of it. So bringing a private block for it would be like having a private cache (private pool).
That's what I meant, create a separate pool for each level. Although I could've been clearer..


But I wrote it as a negative example. The blocks allocated would then have to be quite small (proportional to the memory used by the "level"), so the allocation could be quite inefficient. E.g. VirtualAlloc() gives you a block 64k-aligned (in user space), so allocating a 4k block wastes 93.75% of the space, doesn't it? (Should I care?)

Quote:
Original post by doynax
If you can afford to relocate objects (through a system that can fix-up objects and external references) I think a vector (or a pool of continuous objects to avoid the reallocation overhead) would be preferable. While a garbage collection pass might be faster, having to deal with a periodic stall is probably worse.


What do you mean by "a pool of contiguous objects"?


-----------------------------------------

Quote:
Original post by Trap
You usually don't want to free parts of your mem-pools. Mem-pools are used because they are fast; if you add a complicated scheme for freeing memory, they will be slower.

In most applications there is an upper bound (not a theoretical but a practical bound) on the number of objects of a certain kind that are active at the same time. If the upper bound isn't much bigger than the average number of objects, you can usually afford the space wasted by mem-pool overhead.


You are right, mostly. But that's not the point.

The upper bound for all the objects together (at a specific moment) is much smaller than the sum of the bounds for each object type. If I have plenty of particles at one moment, plenty of creatures at another, and plenty of, let's say, arrows fired at yet another, I would normally end up with the sum of the memory usage for all the object types. Which I don't want, obviously.

It could be handled by having similar allocation sizes for the objects, but that's, again, not the point here.

/def

doynax    850
Quote:
Original post by deffer
But I wrote it as a negative example. The blocks allocated would then have to be quite small (proportional to the memory used by the "level"), so the allocation could be quite inefficient. E.g. VirtualAlloc() gives you a block 64k-aligned (in user space), so allocating a 4k block wastes 93.75% of the space, doesn't it? (Should I care?)
I simply meant that if you have a memory pool for actors, you can usually release it when exiting a level, i.e. tie the allocators to the lifetime of the objects they're storing.

Your specific example might not be a problem (depending on the virtual memory implementation): since the 4k block occupies a whole page, the rest of the 64k region can remain merely reserved rather than committed. However, I wouldn't count on it.
I guess it would be possible to use a best-fit allocation scheme and manually release parts of a buffer, but that still creates nasty worst-case situations.
Quote:
Original post by deffer
What do you mean by "a pool of continuous objects" ?
I meant that whenever you need to release an object in the middle of the pool, you relocate the last one to fill its place. This could of course be done with a normal vector, but if you don't care about random access, it would be more efficient to use the allocation scheme of a memory pool, instead of a single contiguous block of memory, to avoid the reallocation overhead.
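The swap-with-last idea can be sketched in a few lines (shown here over a plain vector for simplicity; a pooled variant would do the same per block). Note that it moves the last object, so any outside pointer to it is invalidated, which is why some fix-up mechanism is assumed.

```cpp
#include <cstddef>
#include <vector>

// Release the object at `index` by relocating the last object into its
// place and shrinking the container. O(1), keeps storage densely packed,
// but invalidates references to the moved (formerly last) element.
template <typename T>
void swapRemove(std::vector<T>& objects, std::size_t index) {
    objects[index] = objects.back(); // fill the hole with the last object
    objects.pop_back();              // shrink; no other element moves
}
```

This keeps all live objects contiguous, so iteration stays cache-friendly and unused capacity sits in one releasable tail rather than scattered holes.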

Trap    684
Quote:
Original post by deffer
The upper bound for all the objects together (at a specific moment) is much smaller than the sum of the bounds for each object type. If I have plenty of particles at one moment, plenty of creatures at another, and plenty of, let's say, arrows fired at yet another, I would normally end up with the sum of the memory usage for all the object types. Which I don't want, obviously.

Correct. If you want faster object creation, you have to pay a price.

Mem-pooling is an optimization technique that trades space and generality for speed.
A possible way to reduce the space cost in this special case: allocate all these objects from one pool (always hand out a memory block the size of the biggest object).
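In code, that shared-pool idea amounts to sizing the chunk for the worst case at compile time. The object types below are placeholders of my own, not from the thread:

```cpp
#include <algorithm>
#include <cstddef>

// Placeholder object types that would all share one pool.
struct Particle { float pos[3], vel[3]; };
struct Creature { float pos[3]; int hp, state; };
struct Arrow    { float pos[3], dir[3]; int damage; };

// Every chunk is big enough for the largest type, so freed chunks can be
// reused for any of them; a real pool would also take the max alignment.
constexpr std::size_t kChunkSize =
    std::max({sizeof(Particle), sizeof(Creature), sizeof(Arrow)});
```

The cost is internal fragmentation (a Creature wastes kChunkSize - sizeof(Creature) bytes), which is tolerable when the types are of similar size, exactly the caveat deffer raises above.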

deffer    754
Quote:
Original post by doynax
I meant that whenever you need to release an object in the middle of the pool, you relocate the last one to fill its place. This could of course be done with a normal vector, but if you don't care about random access, it would be more efficient to use the allocation scheme of a memory pool, instead of a single contiguous block of memory, to avoid the reallocation overhead.


Heh, in fact I DO care about random access. In most cases pool objects will be referenced by pointers, so I cannot relocate them from the "lower level".

Also (if I understand you correctly), I do not divide my world into static "levels", so at any time, in any place, new objects may appear dynamically, and massive deallocation is practically impossible.

/def

doynax    850
Quote:
Original post by deffer
Heh, in fact I DO care about random access. In most cases pool objects will be referenced by pointers, so I cannot relocate them from the "lower level".
I meant that if you have the infrastructure required to perform garbage collection, then relocating a single object shouldn't be a problem. And thus I see little use for a garbage collection scheme in a realtime game.

