sakky

memory management: array w/linked list

Recommended Posts

I'm working on lots of things at the same time right now. This little feature I'm currently working on will be used for my particle system, possibly more.

Okay: one array, dynamically allocated at runtime, that has a finite set of elements. However, the particle emitter also has a set limit of particles, but may not use that full limit. The most simple solution I came up with:

struct particle
{
    bool inUse;
    /* various attributes */
};

If inUse is true, then the particle is active and that means it's being processed. If not, then it's not. So how do I activate or deactivate particles this way? Simple solution:

for ( int i = 0; i < size_of_array; i++ )
{
    if ( array[ i ].inUse == false )
    {
        /* reactivate particle */
    }
}

Simple, right? Well, I'm walking through a loop every time I want to add/remove or activate/deactivate a particle. This could be a problem, so I came up with a better solution involving a linked list. Now I know what you are saying: a linked list has a lot of overhead because of the frequent allocation and freeing of memory. Well, what if it didn't? Simple solution:

struct particle
{
    particle *prev, *next;
    /* various attributes */
};

struct emitter
{
    particle* heap;
    particle* dead;
    particle* active;
    /* various attributes */
};

Now I just have to initialize all the particles in heap with their next and prev pointers. Then I assign dead to point to the first element in heap, and likewise point that first element back at dead. When I need a new particle I pop one off dead and push it onto active, and vice versa. Even if a particle gets killed in the middle of the heap, it simply becomes the new head of dead and active is relinked around it. Using the prev and next pointers this is a snap.

So now I have a finite set of particles that I can activate/deactivate, and I don't have to iterate a for loop every time; it's just pop and push, which is a lot faster! What do ya think? I think I should make a template class for this that uses a std::iterator. That way I can use the same algorithm on many different data structures.
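To make that concrete, here's a minimal sketch of how the pop/push scheme could look. The particle/emitter layout follows the post; the init/spawn/kill names and the count member are mine, added purely for illustration:

// Particles live in one pre-allocated array; "dead" and "active" are
// intrusive doubly linked lists threaded through that same array, so
// spawning or killing a particle is just unlink + relink, no allocation.

#include <cstddef>

struct particle
{
    particle* prev;
    particle* next;
    /* various attributes */
};

struct emitter
{
    particle*   heap;    // the whole array
    std::size_t count;
    particle*   dead;    // head of the free list
    particle*   active;  // head of the live list

    void init(particle* storage, std::size_t n)
    {
        heap   = storage;
        count  = n;
        active = 0;
        dead   = (n > 0) ? &heap[0] : 0;
        for (std::size_t i = 0; i < n; ++i)
        {
            heap[i].prev = (i > 0)     ? &heap[i - 1] : 0;
            heap[i].next = (i + 1 < n) ? &heap[i + 1] : 0;
        }
    }

    particle* spawn()              // pop from dead, push onto active
    {
        if (!dead) return 0;       // emitter is at its particle limit
        particle* p = dead;
        dead = p->next;
        if (dead) dead->prev = 0;

        p->prev = 0;
        p->next = active;
        if (active) active->prev = p;
        active = p;
        return p;
    }

    void kill(particle* p)         // unlink from active, push onto dead
    {
        if (p->prev) p->prev->next = p->next;
        else         active = p->next;
        if (p->next) p->next->prev = p->prev;

        p->prev = 0;
        p->next = dead;
        if (dead) dead->prev = p;
        dead = p;
    }
};

Killing a particle in the middle of the active list is the same O(1) unlink, exactly as described above.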

This is exactly how my particle system(s) have worked for the past 3 or 4 versions :P I found it to be the quickest method, and you can add dummy nodes at the beginning and end of the Active Particles list so that when you're initializing a new particle (if it's the first one to become active) you don't need special cases for when your Active Particles list pointer is NULL, etc.

Before that, I used the same array-element method, and when I switched I noticed a *huge* step up in speed.

The only thing you'd have to do at first initialization of the system is go through the entire list and initialize *only* the 'next' pointers for each dead particle, NULLing the last one. Don't bother with the prev pointers, as they're not needed in this phase. But by the looks of it you have it all worked out, and I'm just flappin my "lips" :)
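In case it helps, a rough sketch of the dummy-node trick; the activeHead/activeTail names and helper functions are just mine for illustration. With sentinels bracketing the active list, insert and remove never have to test whether the head pointer is NULL:

// Sketch: sentinel (dummy) nodes for the active list. They never hold
// real particle data; they only exist so every real node always has a
// valid prev and next.

struct particle { particle* prev; particle* next; /* various attributes */ };

struct emitter
{
    particle activeHead;   // dummy node
    particle activeTail;   // dummy node

    void initActiveList()
    {
        activeHead.prev = 0;
        activeHead.next = &activeTail;
        activeTail.prev = &activeHead;
        activeTail.next = 0;
    }

    void pushActive(particle* p)     // insert right after the head sentinel
    {
        p->prev = &activeHead;
        p->next = activeHead.next;
        activeHead.next->prev = p;
        activeHead.next = p;
    }

    void removeActive(particle* p)   // no special case for first/last node
    {
        p->prev->next = p->next;
        p->next->prev = p->prev;
    }
};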

Well thank you, I've been trying to figure this algorithm out ever since I saw a short example of it in an article.

A few thoughts. I want to employ this algorithm more broadly, such as in a heap manager for many different objects: I allocate a huge chunk of memory and use only that memory for all my program's tasks. This scheme may work up to a point, but I'm afraid it wouldn't suffice.

For example, I can't use it for each individual byte, nor would I want to. I could use a resource type to derive from, but that would require all resources to be the same size even though they can have different implementations, which is pointless.

I'm just trying to figure out how I could use one large chunk of memory. The problem exists only when previously allocated (referenced, in this case) chunks are freed. The scheme I'm using for the particle system won't work, because if I request memory that is larger than the free space: error! Unless I fragment the heap. But what about the objects in use? They don't know that they are fragmented and will operate as if they were not. But if I adjusted the referenced data so that free space always sat in front of used space, then I would have solved that problem. BUT, this means I'll have to copy all the used space backwards.

I'm going to research the Linux kernel to find out how it solves this issue, because this issue is very similar to the virtual memory a program uses anyway. If I could just use that, then I could very well have a faster application with very few memory operations other than those that allocate/free from my heap, which is perfectly fine because that memory has already been allocated. Then again, I may not want to copy Linux's way of handling memory usage, because new/delete are expensive. But perhaps I can alter the algorithm to better suit my needs.

You could write a memory manager in C++ and define block sizes of x bytes per object:

4, 8, 16, 32, ... 1024, ...

and hand out linked lists of arrays with these block sizes.

Then overload the new operator and snap your requested memory up to the next biggest block.

That way you don't fragment your memory at all, because you only free it at the end when your program exits.

The drawback is that memory-intensive applications might do some extra page swapping here and there, because your memory lies all over the place in your address space.

I suggest pre-requesting some amount of memory at startup for each block size, so your first x MBs of each block size lie close to one another.
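A rough sketch of what such a size-class manager could look like; this is my own illustration of the idea (the BlockManager name, the 64 KB page size, and the allocate/deallocate interface are assumptions, not anything from this thread):

// Requests are snapped up to the next block size, each size class keeps a
// free list of blocks carved out of pre-reserved pages, and nothing is
// returned to the OS until the program exits.

#include <cstddef>
#include <cstdlib>

class BlockManager
{
    struct FreeNode { FreeNode* next; };                     // link stored inside free blocks

    static const std::size_t kMinBlock = sizeof(FreeNode);   // block must at least fit the link
    static const std::size_t kClasses  = 9;                  // kMinBlock, 2*kMinBlock, ...

    FreeNode* freeList[kClasses];

    static std::size_t classIndex(std::size_t bytes)
    {
        std::size_t size = kMinBlock, index = 0;
        while (size < bytes) { size <<= 1; ++index; }
        return index;   // assumes bytes fits the largest class; a real manager would fall back to malloc
    }

    void refill(std::size_t index)                            // carve a fresh page into blocks
    {
        const std::size_t blockSize = kMinBlock << index;
        const std::size_t pageSize  = 64 * 1024;
        char* page = static_cast<char*>(std::malloc(pageSize)); // kept until program exit
        for (std::size_t off = 0; off + blockSize <= pageSize; off += blockSize)
        {
            FreeNode* node = reinterpret_cast<FreeNode*>(page + off);
            node->next = freeList[index];
            freeList[index] = node;
        }
    }

public:
    BlockManager() { for (std::size_t i = 0; i < kClasses; ++i) freeList[i] = 0; }

    void* allocate(std::size_t bytes)
    {
        std::size_t index = classIndex(bytes);
        if (!freeList[index]) refill(index);
        FreeNode* node = freeList[index];
        freeList[index] = node->next;
        return node;
    }

    void deallocate(void* p, std::size_t bytes)               // O(1): push back onto the class's free list
    {
        std::size_t index = classIndex(bytes);
        FreeNode* node = static_cast<FreeNode*>(p);
        node->next = freeList[index];
        freeList[index] = node;
    }
};

An overloaded operator new for a particular class could then just forward its size argument to a global BlockManager's allocate(), with the matching operator delete forwarding to deallocate().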

[cool]That's what my particle system did too[cool].

It's a very simple concept that I myself have used many, many, times.

[sarcasm]
Congratulations on being the 2,429,376th person to independently come up with the idea...
[/sarcasm]

Quote:
Original post by Basiror
You could write a memory manager in C++ and define block sizes of x bytes per object:

4, 8, 16, 32, ... 1024, ...

and hand out linked lists of arrays with these block sizes.

Then overload the new operator and snap your requested memory up to the next biggest block.

That way you don't fragment your memory at all, because you only free it at the end when your program exits.

The drawback is that memory-intensive applications might do some extra page swapping here and there, because your memory lies all over the place in your address space.

I suggest pre-requesting some amount of memory at startup for each block size, so your first x MBs of each block size lie close to one another.


I think you may have misunderstood me. My original plan was to allocate a large heap of memory (say 30 megs) and use that as the application's primary source. My design (which really isn't much more than thoughts right now) dictates that each resource the application uses (in terms of memory) be allocated from this heap. Overloaded new and delete operators are not needed, but they would be useful. However, my design makes use of global functions that handle the allocating and freeing of memory from this heap.

This is all fine, but the problem I was expressing was the fact that when smaller memory amounts get freed they leave holes that larger allocations can't use, unless they are fragmented in some way. The scheme is obviously error prone because I can't fragment objects that have no knowledge of it.

A very shrewd example would be allocating a long, then an object that is, say, 32 bytes. After freeing the long's resources I have a hole of 4 bytes. Next, if I wanted to allocate another long I could just reuse the space somehow, but if I wanted to allocate another 32-byte object I would have to fragment it.

I would have to fragment it because if I didn't, I would have to keep allocating past the last object, and soon I would run out of memory. A simple way to solve this problem would be to keep track of the free memory at the back: if it's large enough I could reuse the space, but if not I would either have to fragment the data or use part of the memory in front.


Quote:
Original post by iMalc
[cool]That's what my particle system did too[cool].

It's a very simple concept that I myself have used many, many, times.

[sarcasm]
Congratulations on being the 2,429,376th person to independently come up with the idea...
[/sarcasm]


I actually feel very proud because of that comment! I'm glad I'm walking the same road as others. But then again, I sort of wish I had developed something new and unique.

Well, thanks for the replies guys!

Guest Anonymous Poster
hi --

the idea of embedding the list pointer in the data itself is something we use in console games a lot -- precisely for the reasons you're interested in: no memory allocations to do list management.

also, the linux kernel uses this kind of list structure -- the first element of your class is a "list node" which the list routines can use, and from which you can find your data.
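a tiny C++ sketch of that idiom, for illustration (the kernel's real version is C, using struct list_head and the container_of macro; the names below are mine):

// the list node is embedded as the FIRST member of the object, so list
// manipulation needs no allocation and the object can be recovered from
// the node with a cast.

struct list_node
{
    list_node* prev;
    list_node* next;
};

struct Effect
{
    list_node node;   // must stay the first member for the cast below
    float     lifetime;
    /* other data */
};

inline void list_insert_after(list_node* where, list_node* n)
{
    n->prev = where;
    n->next = where->next;
    if (where->next) where->next->prev = n;
    where->next = n;
}

inline Effect* effect_from_node(list_node* n)
{
    return reinterpret_cast<Effect*>(n);   // valid because node is the first member
}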

anyways, about the memory management stuff:

on console games, we do what you are talking about. we write memory managers. from my quick read of your post, your use of "fragment" is slightly incorrect: we would say that "allocating 4 bytes, allocating 32 bytes, freeing the 4 bytes" results in fragmented memory.

here's a few techniques to avoid this. one or all might be useful to you.

1) generic freelists. the memory manager can have a few freelists for elements of a certain size. for example, 32 bytes. you can set up a chunk of memory reserved for 32 byte (or less) allocations. then you know that a small allocation won't fragment, since you've got this chunk reserved for all small allocations.

2) specific freelists. allocate a lot of one kind of class ahead of time, and get them from the list instead of from the heap. now -- here's what you're wondering: if my different effects are different sizes, i can't keep them in a free list like this, right?

well, actually, you can. just figure out the size of the largest effect, and make a memory allocation with (N * sizeof_largest_effect) bytes. then to allocate effect i, you get the pointer at i * sizeof_largest_effect. then -- call placement new

like this:
typedef unsigned char byte;   // (the placement new below also needs #include <new>)

byte* pMemoryPool = (byte*) malloc(N * sizeof_largest_effect);
void* pMemory = (void*) (pMemoryPool + (i * sizeof_largest_effect));

ParticleEffect* pEffect = new (pMemory) ParticleEffect;

...

and when you are done, call the destructor manually:

pEffect->~ParticleEffect();
and mark that element of your big bad memory buffer as usable again.

3) "scoped allocations"

you can provide hints in your memory functions that say "this is a temporary allocation" -- so you know that you are going to free it soon. then you could allocate it from the "top" of your heap -- the other end from the normal allocations. then when you free it, you don't end up with a hole.
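a rough sketch of that two-ended idea (my own illustration, names made up; permanent allocations grow up from the bottom of one buffer, temporaries grow down from the top, and "freeing" temporaries is just rolling a mark back):

#include <cstddef>

// alignment is ignored here for brevity; a real version would round sizes up.
class TwoEndedHeap
{
    char*       base;
    std::size_t bottom;   // next free byte for permanent allocations
    std::size_t top;      // one past the last free byte for temporaries

public:
    TwoEndedHeap(char* buffer, std::size_t bytes)
        : base(buffer), bottom(0), top(bytes) {}

    void* allocPermanent(std::size_t bytes)
    {
        if (top - bottom < bytes) return 0;    // out of space
        void* p = base + bottom;
        bottom += bytes;
        return p;
    }

    void* allocTemporary(std::size_t bytes)
    {
        if (top - bottom < bytes) return 0;
        top -= bytes;
        return base + top;
    }

    std::size_t temporaryMark() const          { return top; }
    void        freeTemporaries(std::size_t m) { top = m; }   // roll back to a saved mark
};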

4) usage patterns.
don't write usage patterns that you know will generate fragmentation. this sounds naive, but you can do it to yourself, even within a function. also get familiar with your memory usage at a higher level, so you know what goes where and why.

5) consider multiple heaps or heap segments.

if you know you need temp memory, segment that separately from memory used for permanent things.

just some memory management ideas.

Guest Anonymous Poster
You are smart!

In my particle system I call new/delete for each particle!


... just joking [lol]

I haven't read the entire thread, but I have a suggestion: instead of wasting time writing another mediocre linked list in C++, use the standard library list with a custom allocator type. You could write a custom allocator type yourself, or you could use a pre-made one; the C++ Boost library has two allocator types you can use with the standard library containers, both of which are pooled allocators:


#include <boost/pool/pool_alloc.hpp>
#include <list>

typedef boost::fast_pool_allocator<particle> particle_alloc;
typedef std::list<particle, particle_alloc> particle_list;

Quote:
Original post by snk_kid
I haven't read the entire thread, but I have a suggestion: instead of wasting time writing another mediocre linked list in C++, use the standard library list with a custom allocator type. You could write a custom allocator type yourself, or you could use a pre-made one; the C++ Boost library has two allocator types you can use with the standard library containers, both of which are pooled allocators:


#include <boost/pool/pool_alloc.hpp>
#include <list>

typedef boost::fast_pool_allocator<particle> particle_alloc;
typedef std::list<particle, particle_alloc> particle_list;


What this smart fellow said.

std::list will have the overhead of both previous and next pointers, being a doubly linked list, of course.

Also, it's quite possible to write allocators that work with specific chunks of memory, e.g.:

pool< particle > particle_pool;
pool_allocator< particle > particle_allocator( particle_pool );
std::list< particle , pool_allocator< particle > > particles( particle_allocator );

(note that since allocators seem to get passed around by value a ton, the allocator needs to reference external data (such as my particle_pool) rather than holding the memory itself)

Of course, your allocator will have to handle being rebound to something slightly more complex than your particle class (i.e. this:)
  /// @if maint An actual node in the %list.  @endif
  template<typename _Tp>
    struct _List_node : public _List_node_base
    {
      _Tp _M_data;   ///< User's data.
    };

Which inherits from this:

  /// @if maint Common part of a node in the %list.  @endif
  struct _List_node_base
  {
    _List_node_base* _M_next;   ///< Self-explanatory
    _List_node_base* _M_prev;   ///< Self-explanatory
  };
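For what it's worth, here's a minimal sketch of such a rebinding allocator. The Pool type and every name in it are hypothetical stand-ins (not Boost, not any real library), and this Pool just forwards to the global heap so the sketch stays self-contained:

#include <cstddef>
#include <new>

struct Pool
{
    // stand-in for whatever owns the real chunk of memory
    void* allocate(std::size_t bytes)      { return ::operator new(bytes); }
    void  deallocate(void* p, std::size_t) { ::operator delete(p); }
};

template <typename T>
class pool_allocator
{
public:
    typedef T              value_type;
    typedef T*             pointer;
    typedef const T*       const_pointer;
    typedef T&             reference;
    typedef const T&       const_reference;
    typedef std::size_t    size_type;
    typedef std::ptrdiff_t difference_type;

    // the part that matters here: std::list rebinds pool_allocator<particle>
    // to pool_allocator<_List_node<particle> > before allocating any nodes
    template <typename U>
    struct rebind { typedef pool_allocator<U> other; };

    explicit pool_allocator(Pool& p) : pool(&p) {}

    template <typename U>
    pool_allocator(const pool_allocator<U>& other) : pool(other.pool) {}

    pointer       address(reference x) const       { return &x; }
    const_pointer address(const_reference x) const { return &x; }

    pointer allocate(size_type n, const void* /*hint*/ = 0)
    { return static_cast<pointer>(pool->allocate(n * sizeof(T))); }

    void deallocate(pointer p, size_type n)
    { pool->deallocate(p, n * sizeof(T)); }

    void construct(pointer p, const T& value) { new (static_cast<void*>(p)) T(value); }
    void destroy(pointer p)                   { p->~T(); }

    size_type max_size() const { return size_type(-1) / sizeof(T); }

    Pool* pool;   // shared by value copies and rebound copies, so they all draw from the same memory
};

template <typename A, typename B>
bool operator==(const pool_allocator<A>& x, const pool_allocator<B>& y) { return x.pool == y.pool; }

template <typename A, typename B>
bool operator!=(const pool_allocator<A>& x, const pool_allocator<B>& y) { return !(x == y); }

Usage would then look much like the snippet above:

Pool particle_pool;
pool_allocator< particle > alloc( particle_pool );
std::list< particle, pool_allocator< particle > > particles( alloc );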
