I am implementing a set of containers for use with a static memory allocator.
The allocator does not support deletion of individual elements/arrays.
Dynamic Array
Many people like dynamic arrays (std::vector<>), but these containers
are extremely memory inefficient when used with a static memory allocator as
all of the previously used memory cannot be freed until the level ends. I've
alleviated some of the cost by making the main array a list of references and
individually allocating each object with the allocator, though I'm still not a fan of
throwing away memory even if it is on a much smaller scale.
Linked List
A linked list is very memory efficient with a static memory allocator, but the loss
of constant access times is unacceptable for potentially large lists.
Linked Dynamic Array
Right now I'm considering a hybridization of the two: a linked dynamic array
which keeps its previous memchunk but can still grow by creating a new
node. This should maintain fast access times without too much memory overhead.
However, before I create the LDA I want to make sure there is not another
container type which is more appropriate for this scenario. Any ideas?
Container Types with Static Memory Constraints
Linked Dynamic Array
Right now I'm considering a hybridization of the two: a linked dynamic array
which keeps its previous memchunk but can still grow by creating a new
node. This should maintain fast access times without too much memory overhead.
So in effect, the typical implementation of std::deque?
I don't think the standard mandates a particular implementation, but the typical approach seems to be to allocate fixed size chunks of items, and have an indirection table to find the chunk containing the given index. A fringe benefit is that it allows you to cheaply push/pop elements from both ends of the deque.
Dynamic Array
Many people like dynamic arrays (std::vector<>), but these containers
are extremely memory inefficient when used with a static memory allocator as
all of the previously used memory cannot be freed until the level ends. I've
alleviated some of the cost by making the main array a list of references and
individually allocating each object with the allocator, though I'm still not a fan of
throwing away memory even if it is on a much smaller scale.
vector can release claimed memory.
The overhead is between 1/3 and 1/2 of the total allocation. If using some sort of paging (multiple vectors), then this overhead shrinks to n/2 of a single page. Even on embedded devices this isn't an issue. IIRC, such allocators have been the default on iPhones since the start.
A linked list is very memory efficient with a static memory allocator, but the loss
of constant access times is unacceptable for potentially large lists.
A linked list is terribly memory-inefficient. For small objects it can have up to 800% overhead, and that overhead is constant per element. In addition, each node carries additional hidden overhead inside malloc/new.
For allocators, linked lists are used, but they are backed by paged array structure as mentioned above.
However, before I create the LDA I want to make sure there is not another
container type which is more appropriate for this scenario. Any ideas?
struct Allocator {
    Allocator(size_t max_size) : data(max_size), offset(0) {}
    void * alloc(size_t count) {
        if (offset + count > data.size()) return NULL; // out of memory
        void * ptr = (void*) &data[offset];
        offset += count;
        return ptr;
    }
private:
    vector<char> data;
    size_t offset;
};
And you're done. Really, don't overthink it.
So in effect, the typical implementation of std::deque?
I don't think the standard mandates a particular implementation, but the typical approach seems to be to allocate fixed size chunks of items, and have an indirection table to find the chunk containing the given index. A fringe benefit is that it allows you to cheaply push/pop elements from both ends of the deque.
Exactly what I was talking about; I wasn't aware of std::deque.
This should make things a lot easier. Thanks for the reference!
struct Allocator {
    Allocator(size_t max_size) : data(max_size), offset(0) {}
    void * alloc(size_t count) {
        if (offset + count > data.size()) return NULL; // out of memory
        void * ptr = (void*) &data[offset];
        offset += count;
        return ptr;
    }
private:
    vector<char> data;
    size_t offset;
};
And you're done. Really, don't overthink it.
I think that you're misunderstanding the goal.
This is using a vector to manage data inside of an allocator.
What I'm talking about are containers that use data distributed by an allocator.
The allocator itself is finished, right now I'm just making several container implementations
and designing them to work efficiently within the restrictions that a static memory allocator imposes.
The static memory allocator can't release memory.
Therefore, the vector cannot release memory and becomes increasingly inefficient as it grows.
A deque (what I was referring to as an LDA) is able to grow without bleeding memory.
Does this make more sense? Sorry for not explaining properly.
Perhaps you might want to clarify just what you mean by a "static allocator". Do you mean an allocator object that exists in static storage? Or an allocator that returns memory from a pool of static storage? Or a linear allocator? Or something else?
Incidentally, I'd say that there's little or no problem in not freeing memory until the level ends. Typically you're going to know in advance how much memory a level and all its entities require before the level starts. So allocate that much up front and you're done.
Perhaps you might want to clarify just what you mean by a "static allocator". Do you mean an allocator object that exists in static storage? Or an allocator that returns memory from a pool of static storage? Or a linear allocator? Or something else?
Incidentally, I'd say that there's little or no problem in not freeing memory until the level ends. Typically you're going to know in advance how much memory a level and all its entities require before the level starts. So allocate that much up front and you're done.
Sure, what I mean by a static allocator is an object that allocates a chunk of memory and distributes that memory upon request.
I can see how "static" could be ambiguous in this context; here I mean that the memory is allocated once at the beginning
of the game and wiped at the end of each level.
I can see how "static" could be ambiguous in this context; here I mean that the memory is allocated once at the beginning
of the game and wiped at the end of each level.
Which is exactly what my allocator does. It's std::vector, but it never reallocates the memory.
There is no need to release 'data' between levels. Why? You'll just allocate it again.
But if you insist.
Pseudo-code:
class DynamicArray
{
private:
MemHeap& m_Heap; // Memory allocator
public:
...
void reserve( u32 _capacity )
{
...
// Allocate a new memchunk
m_Data = m_Heap.AllocArray<T*>( m_Capacity );
// Allocate referenced objects
for( u32 i = 0; i < m_Capacity; ++i )
m_Data[i] = m_Heap.Alloc<T>();
...
}
void push_back( const T& _object )
{
if( m_Size == m_Capacity )
reserve( 2 * m_Capacity + 1 );
*m_Data[ m_Size++ ] = _object;
}
};
void MemHeap::ClearMem()
{
if( m_StaticHeap )
{
CLEARMEM( m_StaticHeap, m_StaticHeapSize );
m_AllocHead = m_StaticHeap;
m_AllocTail = m_StaticHeap + m_StaticHeapSize;
}
}