Allocating "some" memory
Is there a function (for any OS/platform) that allows one to request memory that is at least X bytes but may be more (up to some limit)? I imagine this could be useful to help the allocator combat memory fragmentation by making use of as much contiguous memory as possible. A memory pool implementation, for instance, could use variable-sized chunks like this.
Any other tips to make efficient use of memory are appreciated too.
Usually new/malloc in the CRT has some kind of support for this. For win32 you might want to examine _set_sbh_threshold: http://msdn.microsoft.com/library/en-us/vclib/html/_crt__set_sbh_threshold.asp
An old, but useful, article: http://msdn.microsoft.com/library/en-us/dngenlib/html/heap3.asp
I'm pretty certain the standard memory allocation routines already do something like that. In my most recent tests, g++'s ::operator new allocates memory in 8-byte chunks, so it'll most often allocate some extra memory compared to your request.
How does the program know how much extra is allocated (e.g. by operator new)? Essentially what I want to do is tell the allocator "give me some memory between X and Y bytes and I'll do something useful with most/all of it".
After browsing through the AP's links I found _expand which seems to be the answer on Windows. Does anyone know of anything similar for other platforms?
Nope, you have to estimate your memory needs in advance. If you can't, then you have to come up with something. For instance, using zlib and not knowing what the decompressed size of a file would be, I had to write a decompressor that worked in chunks and then reconstructed the finished file when it was done. Basically, the responsibility is yours to figure out what to do with the memory you have, not the OS's. It's just there to give it to you and keep you within your bounds... maybe.
Just allocate the max memory you think you'll need.
The OS won't commit physical pages for the unused (but allocated) memory anyway, so you don't really have to worry about it.
And there is a C function, realloc(), you can also use to request more memory.
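As a sketch of the realloc() approach: a buffer that doubles its capacity whenever an append would overflow it, so the number of reallocations (and copies) stays logarithmic in the final size. The buffer type and buffer_append() are illustrative names here, not a standard API:

```c
/* Sketch: growing a buffer with realloc() when the final size isn't
 * known up front. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *data;
    size_t size;   /* bytes in use */
    size_t cap;    /* bytes allocated */
} buffer;

/* Append `len` bytes to a growable buffer; returns 0 on failure. */
int buffer_append(buffer *b, const char *src, size_t len)
{
    if (b->size + len > b->cap) {
        size_t new_cap = b->cap ? b->cap * 2 : 64;   /* double, min 64 */
        while (new_cap < b->size + len)
            new_cap *= 2;
        char *p = realloc(b->data, new_cap);  /* note: may move the block */
        if (!p)
            return 0;
        b->data = p;
        b->cap  = new_cap;
    }
    memcpy(b->data + b->size, src, len);
    b->size += len;
    return 1;
}
```

This is the usual pattern for the zlib-style "decompress in chunks" situation mentioned above: feed each decompressed chunk to the append function and let the buffer grow as needed.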
Quote:Original post by Inmate2993
For instance, using zlib and not knowing what the decompressed size of a file is, I had to write a decompressor that worked in chunks and then reconstructed the finished file when it was done.
How did you end up with the data? It might help to simply save the original object size as sort of a header in your file. zlib supports saving compressed and uncompressed data into the same file.
Sorry, way off topic. I was just curious.
It seems I've been unable to express my question clearly. Who knows, it might just be too dumb a question. =/
What I had in mind was a memory pool system that keeps a list of variable-sized memory chunks. What I'd like to do is get a piece of memory that the allocator finds convenient to give away and make use of all of it. Exactly how large it is wouldn't matter. I want to avoid taking very large chunks and leaving the fragmented smaller ones unused.
The _expand() I mentioned is different from realloc() because it only takes whatever extra space is available without relocating the block. On Windows I could write a pool that allocates some minimum useful chunk size and, when it fills up, tries to _expand() it before creating another chunk.
Anyway the replies so far have given some useful tips. Thanks.
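The "try to expand in place, otherwise add a chunk" strategy could be sketched roughly like this. try_grow_in_place() is a hypothetical wrapper: only the MSVC CRT actually provides _expand(), so the portable branch simply reports failure, which forces a new chunk:

```c
/* Sketch of a pool that grows its current chunk in place when the CRT
 * allows it, and falls back to allocating a fresh chunk otherwise. */
#include <stdlib.h>
#ifdef _MSC_VER
#include <malloc.h>   /* _expand() lives here on the MSVC CRT */
#endif

typedef struct chunk {
    struct chunk *next;
    size_t        cap;    /* usable bytes in this chunk */
    size_t        used;
    /* payload follows the header in memory */
} chunk;

/* Hypothetical wrapper: returns nonzero if the block grew in place. */
static int try_grow_in_place(void *block, size_t new_size)
{
#ifdef _MSC_VER
    return _expand(block, new_size) != NULL;  /* MSVC CRT only */
#else
    (void)block; (void)new_size;
    return 0;   /* no portable equivalent; always add a new chunk */
#endif
}

static chunk *add_chunk(chunk *head, size_t cap)
{
    chunk *c = malloc(sizeof(chunk) + cap);
    if (!c)
        return NULL;
    c->next = head;
    c->cap  = cap;
    c->used = 0;
    return c;
}

/* When the current chunk fills up, first try to expand it without
 * relocating; only if that fails allocate a fresh chunk. */
static chunk *grow_pool(chunk *head, size_t min_chunk)
{
    if (head && try_grow_in_place(head, sizeof(chunk) + head->cap * 2)) {
        head->cap *= 2;
        return head;
    }
    return add_chunk(head, min_chunk);
}
```

The in-place path is important because pool users hold raw pointers into the chunk; _expand() never relocates, so those pointers stay valid, which realloc() can't promise.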
Quote:Original post by 255
What I had in mind was a memory pool system that keeps a list of variable-sized memory chunks. What I'd like to do is get a piece of memory that the allocator finds convenient to give away and make use of all of it. Exactly how large it is wouldn't matter. I want to avoid taking very large chunks and leaving the fragmented smaller ones unused.
On modern systems, you don't need to worry about memory fragmentation. The pointer that's given to you by calloc() or whatever is not the physical address of the memory.
Instead, it's a virtual address, where each page (usually 4 KB) can be at a different physical location; the OS just maps them into a seemingly linear space.
Quote:Original post by 255
It seems I've been unable to express my question clearly. Who knows, it might just be too dumb a question. =/
What I had in mind was a memory pool system that keeps a list of variable-sized memory chunks. What I'd like to do is get a piece of memory that the allocator finds convenient to give away and make use of all of it. Exactly how large it is wouldn't matter. I want to avoid taking very large chunks and leaving the fragmented smaller ones unused.
The _expand() I mentioned is different from realloc() because it only takes whatever extra space is available without relocating the block. On Windows I could write a pool that allocates some minimum useful chunk size and, when it fills up, tries to _expand() it before creating another chunk.
Anyway the replies so far have given some useful tips. Thanks.
The standard library will deal with fragmentation for general-purpose allocations, most likely better than you can, since many professionals have worked on it. That said, for lots of small or same-sized allocations, different strategies can prevail - but again, professionals have beaten you to the punch - see the Boost Pool Library.
It'd be smart to use an advanced profiling tool to see if you're actually suffering performance problems during (de)allocation and/or experiencing a lot of (or an ever-increasing number of) cache misses and/or swap page thrashing. If you're not, you probably don't have a situation worth worrying about (that said, this is all extremely situation-specific stuff).
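For reference, the fixed-size pool strategy mentioned above boils down to a free list threaded through preallocated slots. This is a minimal sketch of the idea (the slot size and names are made up for illustration), not Boost.Pool's actual implementation:

```c
/* Sketch: a fixed-size pool. Free slots double as free-list links, so
 * alloc and free are O(1) and the pool never fragments internally. */
#include <stddef.h>
#include <stdlib.h>

/* One slot: either user data or, while free, a link to the next slot. */
typedef union slot {
    union slot *next;
    char        data[32];   /* fixed object size served by this pool */
} slot;

typedef struct {
    slot *storage;
    slot *free_list;
} pool;

static int pool_init(pool *p, size_t count)
{
    p->storage = malloc(count * sizeof(slot));
    if (!p->storage)
        return 0;
    p->free_list = NULL;
    for (size_t i = 0; i < count; i++) {   /* thread slots into a free list */
        p->storage[i].next = p->free_list;
        p->free_list = &p->storage[i];
    }
    return 1;
}

static void *pool_alloc(pool *p)           /* pop the free list, O(1) */
{
    slot *s = p->free_list;
    if (s)
        p->free_list = s->next;
    return s;
}

static void pool_free(pool *p, void *ptr)  /* push back onto the list, O(1) */
{
    slot *s = ptr;
    s->next = p->free_list;
    p->free_list = s;
}
```

Because every slot is the same size, a freed slot can always satisfy the next allocation, which is exactly the fragmentation-free property the thread is after.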