Question about free store memory location and recommendation to user when new fails.

16 comments, last by grill8 17 years, 1 month ago
Quote:Original post by Promit
Quote:Original post by Deyja
The chances of actually getting a new failure are slim in the first place.
It depends. Severe heap fragmentation plus a large allocation can cause this to happen; it's frequently a problem for server type applications that run a long time. You can get a similar effect by simply trying to allocate a huge block of memory, like 2 GB or something. It will fail, even if your system has enough physical memory to handle it.

That's a hardcoded limit in Windows: a process can't allocate more than 30 bytes less than 2GB unless it creates a new heap (with HeapCreate).

Presumably, this is to stop you shooting yourself in the foot by accidentally allocating all available virtual memory (although 30 bytes less than 2GB is still most of it on 32-bit systems).

Combined with heap fragmentation, this can cause allocations to fail even if you have enough RAM and swap to cover your needs.
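For reference, a rough sketch of what creating and allocating from a private heap (the HeapCreate route mentioned above) could look like. The sizes and the zero-fill flag are arbitrary choices for illustration, and error handling is cut down to early returns:

#include <windows.h>
#include <cstdio>

int main()
{
    // Growable private heap: 1 MB initial size, no maximum (dwMaximumSize = 0).
    HANDLE heap = HeapCreate(0, 1024 * 1024, 0);
    if (!heap)
        return 1;

    // Allocate 16 MB from the private heap instead of the process default heap.
    void* block = HeapAlloc(heap, HEAP_ZERO_MEMORY, 16 * 1024 * 1024);
    if (block)
    {
        std::printf("Got 16 MB from the private heap\n");
        HeapFree(heap, 0, block);
    }

    HeapDestroy(heap);
    return 0;
}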
Quote:Original post by Nathan Baum
That's a hardcoded limit in Windows: a process can't allocate more than 30 bytes less than 2GB unless it creates a new heap (with HeapCreate).
That sounds odd to me. Your complete process virtual memory only spans the lower 2 GB of the address space. Out of that you have multiple pieces sliced out, like the code and data segments which have been placed in the middle, and the stack which is towards the top. Allocating 2 GB or 2 GB - 30 or anything like it should be doomed to fail for that reason.

(Obviously not relevant to 64 bit systems.)
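To see the effect described above, a quick probe like the one below (purely illustrative) can be run in a 32-bit process: it starts near 2 GB and halves the request until operator new succeeds. The result typically settles well below 2 GB, because code, DLLs, stacks and earlier allocations fragment the address space; on a 64-bit process the first request may simply succeed.

#include <cstdio>
#include <new>

int main()
{
    // Start near 2 GB and halve until a single contiguous allocation succeeds.
    size_t size = 2048u * 1024 * 1024;
    char* block = 0;

    while (size > 0 && !(block = new(std::nothrow) char[size]))
        size /= 2;

    std::printf("Largest single block obtained: %lu MB\n",
                (unsigned long)(size / (1024 * 1024)));
    delete[] block;
    return 0;
}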
Quote:Original post by Promit
Quote:Original post by Dave
Your application should determine the capabilities of the computer before it gets into runtime, ie during application load time.
Won't help. Out of memory errors deal purely with available virtual memory, which is a fixed quantity and has nothing to do with the computer.

That's not true.

Firstly, as noted above, on Windows there's a limit of just under 2GB which you have to disable to allocate more.

Secondly, you don't have to consume all available virtual memory to run out of memory. If there's an upper limit on swap usage, then the maximum possible amount of memory a process could use is the size of RAM plus the size of swap. The OS would know this, and could refuse to allocate more memory than could possibly ever be available.

Thirdly, even if swap can grow to fill the entire disk, that doesn't mean a program can't "determine the capabilities of the computer" and decide that since you have, say, 512MB of RAM, it'll only use 384MB of memory.
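As a rough sketch of that last approach (Win32 only; the three-quarters budget is an arbitrary choice for illustration), GlobalMemoryStatusEx reports physical memory and the remaining virtual address space at startup:

#include <windows.h>
#include <cstdio>

int main()
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);

    if (GlobalMemoryStatusEx(&status))
    {
        // Derive a working-set budget from physical RAM, e.g. three quarters of it.
        unsigned long totalMB  = (unsigned long)(status.ullTotalPhys / (1024 * 1024));
        unsigned long budgetMB = totalMB * 3 / 4;

        std::printf("Physical RAM: %lu MB, chosen budget: %lu MB\n", totalMB, budgetMB);
        std::printf("Available virtual address space: %lu MB\n",
                    (unsigned long)(status.ullAvailVirtual / (1024 * 1024)));
    }
    return 0;
}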
Quote:Original post by Nathan Baum
Secondly, you don't have to consume all available virtual memory to run out of memory. If there's an upper limit on swap usage, then the maximum possible amount of memory a process could use is the size of RAM plus the size of swap. The OS would know this, and could refuse to allocate more memory than could possibly ever be available.
Which is why I was careful to use the word "computer" rather than "system". It's true that you can run out of actual physical memory, but the somewhat complex memory behavior in Windows (or any modern desktop/server OS) makes it infeasible to try to alter your application's behavior based on that, since the entire set of processes on the system, as well as the kernel itself, affect those limitations.
Quote:Original post by Promit
Quote:Original post by Nathan Baum
That's a hardcoded limit in Windows: a process can't allocate more than 30 bytes less than 2GB unless it creates a new heap (with HeapCreate).
That sounds odd to me. Your complete process virtual memory only spans the lower 2 GB of the address space.

It depends. Usually only 2GB is available to the process, but 32-bit Windows can be configured to make 3GB available (the /3GB boot option, for executables marked as large-address-aware).
Hello all,

Thank you for the information (I really should know more about hardware and the underlying systems to better myself as a programmer).

Can someone please verify the accuracy of the first paragraph of my second post, though?

Thank you,
Jeremy
It depends on the application. For something like a game, I wouldn't worry about it. If you're writing a server back-end or something where uptime is an important factor, it's worth at least trying to free as much memory as possible when you get a bad_alloc exception, and then recording some kind of error that notes that a memory allocation failed. Knowing that you failed an allocation somewhere is much better than just seeing your code die and having no clue why.

Obviously, if you want to be conscientious, you could do that in a game as well. It isn't hard if you use RAII well, and in the unlikely case your game ever barfs in front of a player, they'll appreciate at least knowing something about what happened rather than just having the game disappear silently. It's up to you, though, really. I will say that outside of highly uptime-critical embedded applications, I've never been particularly bothered about handling allocation failures.
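For what that might look like in practice, here is a minimal sketch of the reserve-and-log idea. The names g_reserve and onOutOfMemory, the 64 KB reserve size, and the deliberately oversized request are all just illustrative: install a new-handler that releases a small reserve so the logging path still has memory to work with, and catch std::bad_alloc at a level where you can report the failure and shut down cleanly.

#include <cstdio>
#include <new>
#include <vector>

// Hypothetical "rainy day" reserve, released on allocation failure so the
// logging/cleanup path has some memory to work with.
static char* g_reserve = new char[64 * 1024];

void onOutOfMemory()
{
    if (g_reserve)
    {
        std::fprintf(stderr, "operator new failed; releasing reserve and retrying\n");
        delete[] g_reserve;   // give the memory back and let operator new retry
        g_reserve = 0;
    }
    else
    {
        // Nothing left to give back; let the caller unwind via bad_alloc.
        throw std::bad_alloc();
    }
}

int main()
{
    std::set_new_handler(onOutOfMemory);

    try
    {
        // Deliberately oversized request; on a 32-bit process this should fail.
        std::vector<char> huge(2000u * 1024 * 1024);
    }
    catch (const std::bad_alloc&)
    {
        std::fprintf(stderr, "Caught std::bad_alloc; logging and shutting down cleanly\n");
    }
    return 0;
}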


Thank you.

Jeremy

