Steno

Memory Allocation Limits (C++)


Hi All,

I recently started maintaining some very poorly written C++ code that was causing crashes on Win XP (Vista/Win7 seem OK, for now) under certain circumstances. I found that the app was crashing on a memory allocation. The allocation is done with the new operator, which has been redefined to use malloc.

The application contains many lists of pointers to the objects that have been allocated. These objects range in size from 8 to maybe 100 bytes. The crash appears to happen around the 1 millionth allocation (I didn't have time to profile how many deallocations took place, but it's not enough to be significant ... maybe in the 1,000 - 10,000 range). Memory usage at this point is about 50MB, which is consistent with my allocation counts.

So, the question here is not how to fix this problem. The code needs to be rewritten rather than refactored. I'm wondering: is there a limit to the number of allocations an OS can support for a single process? My assumption is that the allocation table is just overwhelmed. I've done some googling, but everything I found concerns allocation size rather than allocation count. I haven't been able to reproduce the issue on my dev machine, but that's Win7, so it's inconclusive.

Thanks for any thoughts.

I think malloc, or any other decent memory management implementation, will not use the OS memory manager directly.
Instead it will request a large chunk of memory from the OS and then do the sub-allocation itself.

So most likely your problem has no connection to an OS limitation.

Why not just run a simple test on that? Allocate small pieces of memory many times.

[quote name='Steno' timestamp='1307583653' post='4821163']This allocation is done with the new operator that has been redefined to use malloc.[/quote]How is this implemented?

Note that regular C++ [font="Courier New"]new[/font] will never return [font="Courier New"]NULL[/font] (it throws [font="Courier New"]std::bad_alloc[/font] on failure), whereas [font="Courier New"]malloc[/font] will -- is this taken into account?

Also, which compiler are you using?

The first major question is what compiler and standard library implementation are you using? Debug or release? 32 or 64 bit?

These things heavily influence the way memory allocation works. However, at a fundamental level, there is no "cap" on the number of allocations you can make in a Windows program. The list of allocated memory itself grows as you allocate more stuff, typically using a linked list structure to track the allocations. The OS kernel takes care of mapping this to physical RAM and then mapping that to swap space if physical RAM is exhausted. On XP32, you're limited to 2GB of space (in practice 2 billion or so 1 byte allocations is your max) or possibly 3GB if you have the appropriate kernel options set; on XP64 you can have 64 bits worth of individual allocations without theoretical problems (less a factor of 10 or so because of bookkeeping overhead), and that's far more allocations than your program can physically make in a human lifetime, so no worries there.

If you are truly "out" of memory it is possible that you're experiencing severe heap fragmentation (although at only 50MB of commit that seems very unlikely).

My guess is that something is corrupting the malloc book-keeping data structure, probably by writing prior to or past the end of allocated memory. This is the most common cause of allocation failures in most programs. Truly running out of memory is hard to do on a modern machine, and virtually impossible on a 64-bit platform.

[quote name='wqking' timestamp='1307592079' post='4821193']
I think malloc, or any other decent memory management code, will not use OS MM directly.
Instead it will request a large chunk of memory from OS and then do the allocation by itself.

So most likely your problem has no connection with OS limitation.

And why not just do a simple test on that? Allocate small piece of memory for many times.
[/quote]

Thanks for the interest. A couple of things:

A) Will new or malloc request a large chunk of heap space up front? We've talked about doing this ourselves by implementing local heap space and allocation tables within the process, but what would be the drawbacks?

B) I have tested this at home on a Win7 machine by adding random objects of various sizes (between 7 and 33 bytes) to lists and deleting them at random. I didn't get a crash, but I got bored after 3M+ allocations. The original crash was on XP, and I only have Win7 to test with.

[quote name='ApochPiQ' timestamp='1307592776' post='4821199']
The first major question is what compiler and standard library implementation are you using? Debug or release? 32 or 64 bit?

These things heavily influence the way memory allocation works. However, at a fundamental level, there is no "cap" on the number of allocations you can make in a Windows program. The list of allocated memory itself grows as you allocate more stuff, typically using a linked list structure to track the allocations. The OS kernel takes care of mapping this to physical RAM and then mapping that to swap space if physical RAM is exhausted. On XP32, you're limited to 2GB of space (in practice 2 billion or so 1 byte allocations is your max) or possibly 3GB if you have the appropriate kernel options set; on XP64 you can have 64 bits worth of individual allocations without theoretical problems (less a factor of 10 or so because of bookkeeping overhead), and that's far more allocations than your program can physically make in a human lifetime, so no worries there.

If you are truly "out" of memory it is possible that you're experiencing severe heap fragmentation (although at only 50MB of commit that seems very unlikely).

My guess is that something is corrupting the malloc book-keeping data structure, probably by writing prior to or past the end of allocated memory. This is the most common cause of allocation failures in most programs. Truly running out of memory is hard to do on a modern machine, and virtually impossible on a 64-bit platform.
[/quote]

Compiler: VC8
STL: No, custom implementation
Debug/Release: I forgot to include this, but it only happens in release. We have debug profilers that check for uninitialized heap memory, bounds violations, etc., and they aren't throwing any warnings at me.
32/64: The crash is XP only, but I can't confirm whether it was 64-bit XP. Neither 32- nor 64-bit Vista/7 reproduces it.

[quote name='ApochPiQ' timestamp='1307592776' post='4821199']

If you are truly "out" of memory it is possible that you're experiencing severe heap fragmentation (although at only 50MB of commit that seems very unlikely).
[/quote]

For those who are interested, I've tried replicating severe heap fragmentation by allocating and deleting structs of odd sizes (133, 54, 16, 9, and 7 bytes) at random. I haven't seen any change in performance or any crashes. Again, this is just on Win7, not XP.

[quote name='wqking' timestamp='1307592079' post='4821193']
I think malloc, or any other decent memory management code, will not use OS MM directly.
[/quote]
Actually some implementations do. For example, MSVC 2010's malloc() implementation more or less just calls HeapAlloc().

"I found that the app was crashing on a memory allocation."

This is almost always caused by something else scribbling into a previously deallocated block and corrupting the linked list that the memory manager puts into the space.
