

Container Types with Static Memory Constraints


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

#1 SeiryuEnder   Members   -  Reputation: 199


Posted 01 February 2012 - 10:26 AM

I am implementing a set of containers for use with a static memory allocator.
The allocator does not support deletion of individual elements/arrays.

Dynamic Array
Many people like dynamic arrays (std::vector<>), but these containers
are extremely memory inefficient when used with a static memory allocator as
all of the previously used memory cannot be freed until the level ends. I've
alleviated some of the cost by making the main array a list of references and
individually allocating each object with the allocator, though I'm still not a fan of
throwing away memory even if it is on a much smaller scale.

Linked List
A linked list is very memory efficient with a static memory allocator, but the loss
of constant access times is unacceptable for potentially large lists.

Linked Dynamic Array
Right now I'm considering a hybridization of the two: a linked dynamic array
which keeps its previous memchunk but can still grow by allocating a new
node, which should maintain fast access times without wasting much memory.


However, before I create the LDA I want to make sure there is not another
container type which is more appropriate for this scenario. Any ideas?


#2 swiftcoder   Senior Moderators   -  Reputation: 10242


Posted 01 February 2012 - 11:02 AM

Linked Dynamic Array
Right now I'm considering a hybridization of the two: a linked dynamic array
which keeps its previous memchunk but can still grow by allocating a new
node, which should maintain fast access times without wasting much memory.

So in effect, the typical implementation of std::deque?

I don't think the standard mandates a particular implementation, but the typical approach seems to be to allocate fixed size chunks of items, and have an indirection table to find the chunk containing the given index. A fringe benefit is that it allows you to cheaply push/pop elements from both ends of the deque.
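To illustrate the approach described here, a minimal sketch of a chunked array with an indirection table (names and the chunk size are illustrative; real std::deque implementations differ in detail):

```cpp
#include <cstddef>
#include <vector>

// Illustrative chunked array: fixed-size chunks plus an indirection
// table mapping an index to the chunk that holds it. Old chunks never
// move when the container grows, so no memory is abandoned.
template <typename T, std::size_t ChunkSize = 16>
class ChunkedArray {
public:
    ChunkedArray() = default;
    ChunkedArray(const ChunkedArray&) = delete;
    ChunkedArray& operator=(const ChunkedArray&) = delete;
    ~ChunkedArray() { for (T* c : chunks_) delete[] c; }

    void push_back(const T& value) {
        if (size_ % ChunkSize == 0)              // current chunk full (or none yet)
            chunks_.push_back(new T[ChunkSize]); // grow by one chunk only
        chunks_[size_ / ChunkSize][size_ % ChunkSize] = value;
        ++size_;
    }
    T& operator[](std::size_t i) {               // O(1): table lookup + offset
        return chunks_[i / ChunkSize][i % ChunkSize];
    }
    std::size_t size() const { return size_; }

private:
    std::vector<T*> chunks_;  // the indirection table
    std::size_t size_ = 0;
};
```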

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#3 Antheus   Members   -  Reputation: 2397


Posted 01 February 2012 - 11:08 AM

Dynamic Array
Many people like dynamic arrays (std::vector<>), but these containers
are extremely memory inefficient when used with a static memory allocator as
all of the previously used memory cannot be freed until the level ends. I've
alleviated some of the cost by making the main array a list of references and
individually allocating each object with the allocator, though I'm still not a fan of
throwing away memory even if it is on a much smaller scale.


vector can release claimed memory.
The overhead is between 1/3 and 1/2 of the total allocation. If you use some sort of paging (multiple vectors), the overhead shrinks to a fraction (roughly half) of a single page. Even on embedded devices this isn't an issue; IIRC, such allocators have been the default on iPhones from the start.

A linked list is very memory efficient with a static memory allocator, but the loss
of constant access times is unacceptable for potentially large lists.

A linked list is terribly space-inefficient. For small objects the pointer overhead can reach 800%, and it is paid on every single node. In addition, each node carries additional hidden bookkeeping overhead inside malloc/new.

For allocators, linked lists are used, but they are backed by paged array structure as mentioned above.
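As a back-of-envelope check on that overhead figure: consider a doubly linked node holding a 4-byte payload on a 64-bit system. The 16-byte heap-header size is an assumption (typical for glibc malloc), not a guarantee:

```cpp
#include <cstddef>

// A doubly linked node around a 4-byte payload. On a 64-bit system the
// two pointers plus struct padding make the node ~24 bytes before the
// heap allocator's own per-block header is counted.
struct Node {
    int payload;   // 4 bytes of actual data
    Node* prev;    // 8 bytes
    Node* next;    // 8 bytes
};

// Percentage of non-payload bytes relative to payload bytes.
// header_bytes is an assumed per-allocation malloc header size.
std::size_t overhead_percent(std::size_t node_bytes,
                             std::size_t header_bytes,
                             std::size_t payload_bytes) {
    return (node_bytes + header_bytes - payload_bytes) * 100 / payload_bytes;
}
```

With a 16-byte header assumption this lands in the 800-900% range, consistent with the claim above.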

However, before I create the LDA I want to make sure there is not another
container type which is more appropriate for this scenario. Any ideas?

struct Allocator {
  Allocator(size_t max_size) : data(max_size), offset(0) {}

  void * alloc(size_t count) {
    if (offset + count > data.size()) return NULL; // not enough room left
    void * ptr = (void*) &data[offset];
    offset += count; // bump past the block just handed out
    return ptr;
  }
private:
  std::vector<char> data;
  size_t offset;
};
And you're done. Really, don't overthink it.

#4 SeiryuEnder   Members   -  Reputation: 199


Posted 01 February 2012 - 11:16 AM

So in effect, the typical implementation of std::deque?

I don't think the standard mandates a particular implementation, but the typical approach seems to be to allocate fixed size chunks of items, and have an indirection table to find the chunk containing the given index. A fringe benefit is that it allows you to cheaply push/pop elements from both ends of the deque.


That's exactly what I was talking about; I wasn't aware of std::deque.
This should make things a lot easier. Thanks for the reference!

struct Allocator {
  Allocator(size_t max_size) : data(max_size), offset(0) {}

  void * alloc(size_t count) {
    if (offset + count > data.size()) return NULL; // not enough room left
    void * ptr = (void*) &data[offset];
    offset += count; // bump past the block just handed out
    return ptr;
  }
private:
  std::vector<char> data;
  size_t offset;
};
And you're done. Really, don't overthink it.


I think that you're misunderstanding the goal.
This is using a vector to manage data inside of an allocator.
What I'm talking about are containers that use data distributed by an allocator.

The allocator itself is finished; right now I'm just writing several container implementations
and designing them to work efficiently within the restrictions that a static memory allocator imposes.

The static memory allocator can't release memory.
Therefore, the vector cannot release memory and becomes increasingly inefficient as it grows.
A deque (what I was referring to as an LDA) is able to grow without bleeding memory.
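To make the waste concrete, a rough model: if a vector doubles its capacity inside an allocator that never frees, every superseded block stays dead, so the dead memory approaches the live capacity itself (this sketch assumes power-of-two doubling from capacity 1):

```cpp
#include <cstddef>

// Model of a doubling vector backed by a no-free arena: each old block
// (1, 2, 4, ... bytes) is abandoned when the vector reallocates, so the
// total dead memory is roughly final_capacity - 1 bytes.
std::size_t dead_bytes_after_growth(std::size_t final_capacity) {
    std::size_t dead = 0;
    for (std::size_t cap = 1; cap < final_capacity; cap *= 2)
        dead += cap;  // the superseded block stays allocated forever
    return dead;
}
```

A chunked container avoids this entirely, since existing chunks are never reallocated.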

Does this make more sense? Sorry for not explaining properly.

#5 edd   Members   -  Reputation: 2105


Posted 01 February 2012 - 11:25 AM

Perhaps you might want to clarify just what you mean by a "static allocator". Do you mean an allocator object that exists in static storage? Or an allocator that returns memory from a pool of static storage? Or a linear allocator? Or something else?

Incidentally, I'd say that there's little or no problem in not freeing memory until the level ends. Typically you're going to know in advance how much memory a level and all its entities require before the level starts. So allocate that much up front and you're done.

#6 SeiryuEnder   Members   -  Reputation: 199


Posted 01 February 2012 - 11:54 AM

Perhaps you might want to clarify just what you mean by a "static allocator". Do you mean an allocator object that exists in static storage? Or an allocator that returns memory from a pool of static storage? Or a linear allocator? Or something else?

Incidentally, I'd say that there's little or no problem in not freeing memory until the level ends. Typically you're going to know in advance how much memory a level and all its entities require before the level starts. So allocate that much up front and you're done.


Sure, what I mean by a static allocator is an object that allocates a chunk of memory and distributes that memory upon request.

I can see how "static" could be ambiguous here. In this context I mean that the memory is allocated once at the beginning
of the game and wiped at the end of each level.
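A minimal sketch of an allocator in that sense, i.e. one block acquired up front, handed out by bumping an offset, and reclaimed only wholesale between levels (alignment is ignored for brevity; the names are illustrative):

```cpp
#include <cstddef>
#include <cstdlib>

// Illustrative linear "static" allocator: one block acquired at startup,
// individual allocations bump an offset, and the only reclamation is
// wiping the whole arena at the end of a level.
class LevelArena {
public:
    explicit LevelArena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes) {}
    ~LevelArena() { std::free(base_); }
    LevelArena(const LevelArena&) = delete;
    LevelArena& operator=(const LevelArena&) = delete;

    void* alloc(std::size_t count) {
        if (offset_ + count > size_) return nullptr;  // arena exhausted
        void* p = base_ + offset_;
        offset_ += count;
        return p;
    }
    void wipe() { offset_ = 0; }  // end of level: everything reclaimed at once
    std::size_t used() const { return offset_; }

private:
    char* base_;
    std::size_t size_;
    std::size_t offset_ = 0;
};
```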

#7 Antheus   Members   -  Reputation: 2397


Posted 01 February 2012 - 11:59 AM

I can see how "static" could be ambiguous here. In this context I mean that the memory is allocated once at the beginning
of the game and wiped at the end of each level.


Which is exactly what my allocator does. It's std::vector, but it never reallocates the memory.

There is no need to release 'data' between levels. Why? You'll just allocate it again.

But if you insist.

#8 SeiryuEnder   Members   -  Reputation: 199


Posted 01 February 2012 - 12:32 PM

Which is exactly what my allocator does. It's std::vector, but it never reallocates the memory.

There is no need to release 'data' between levels. Why? You'll just allocate it again.

But if you insist.


Pseudo-code:
class DynamicArray
{
private:
	MemHeap& m_Heap; // Memory allocator

public:
...

	void reserve( u32 _capacity )
	{
		...
		// Allocate a new memchunk
		m_Data = m_Heap.AllocArray<T*>( m_Capacity );

		// Allocate referenced objects
		for( u32 i = 0; i < m_Capacity; ++i )
			m_Data[i] = m_Heap.Alloc<T>();
		...
	}

	void push_back( const T& _object )
	{
		if( m_Size == m_Capacity )
			reserve( 2 * m_Capacity + 1 );

		*m_Data[ m_Size++ ] = _object;
	}
};


void MemHeap::ClearMem()
{
    if( m_StaticHeap ) 
    {
        CLEARMEM( m_StaticHeap, m_StaticHeapSize );

        m_AllocHead = m_StaticHeap;
        m_AllocTail = m_StaticHeap + m_StaticHeapSize;
    }
}





