Question about free store memory location and recommendation to user when new fails.

grill8    148
Hello all,

I have a question about free store memory location, and about what recommendation to present to the user when operator new fails. I know how to use operator new, overload it, customize it, etc., but I do not know much about hardware or where free store memory actually resides. As such, I do not know the proper recommendation to present to users when operator new fails and there are no more fallback mechanisms. I understand the basics of memory structure (system memory, virtual memory, etc.) but, to be honest, I do not know the underlying location of the free store.

I also need a short (one sentence or so) message to present to end-users of my software if/when new or new[] fails. Some short recommendation such as the following:

"ERROR: This application requires more RAM than is available."
"TO FIX: Try installing more RAM memory on your computer."

Thank you for your help,
Your friend,
Jeremy (grill8)

dave    2187
I wouldn't worry about giving the user any more than an "Out of memory!" message. From that message it is pretty obvious what is wrong. Chances are their system doesn't meet your minimum specs.

Dave

OrangyTang    1298
IMHO you can't (usefully) handle the case where 'new' fails. Catching it and presenting an error dialog isn't as easy as it sounds, since you're not really allowed to allocate any more memory (and even a standard Windows error dialog will probably allocate somewhere along the line).

Instead what's more likely is that you'll allocate more than is physically available, and your performance will suffer as memory gets swapped in and out. In this case you should probably provide some options for the user to use less memory (which will probably be specific to your app, such as loading in lower quality textures or models).

MaulingMonkey    1730
Quote:
Original post by OrangyTang
IMHO you can't (usefully) handle the case where 'new' fails. Catching it and presenting an error dialog isn't as easy as it sounds, since you're not really allowed to allocate any more memory (and even a standard Windows error dialog will probably allocate somewhere along the line).


Surely you can catch bad_alloc after all the main resources of the program have been cleaned up and released?
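A minimal sketch of that idea (the `runGame` callable is a hypothetical stand-in, not anything from this thread): wrap the program's main work in a try block, so that by the time bad_alloc is caught, stack unwinding has already destroyed the scoped resources inside the try and the handler needs almost no memory of its own.

```cpp
#include <cstdio>
#include <functional>
#include <new>

// Wrap the program's main work in a try/catch; when bad_alloc
// propagates out, everything inside the try has been unwound.
bool run_guarded(const std::function<void()>& runGame) {
    try {
        runGame();
        return true;                       // clean exit
    } catch (const std::bad_alloc&) {
        // Scoped resources are already destroyed at this point;
        // fputs itself needs no heap allocation.
        std::fputs("ERROR: out of memory\n", stderr);
        return false;
    }
}
```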

Deyja    920
The chances of actually getting a new failure are slim in the first place. The computer will run out of physical memory long before the operating system reports an out of memory condition. At this point, the memory swapping will have probably brought the machine to a halt. Chances are, your application will run unusably slow long before you ever have a new failure.

Antheus    2409
Unless this has to deal with an embedded system, I don't think you need to worry about "new" failing.

Resource allocations, graphics, sound, network and similar, yes. But new shouldn't be an issue.

If you do have low memory issues, then you're probably better off handling memory manually, and attempting to allocate *all* memory at application startup. Then your application will use only these pre-allocated blocks, not worrying about low memory in the first place.

But again, unless you have really large (gigabyte-range) memory allocations, or are using some old OS, new itself shouldn't be much of an issue.
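One way that up-front approach might look (class name, sizes, and interface are all made up for illustration): grab a single block at startup and carve pieces off it with a bump allocator, so the running game never asks the system heap for memory mid-session.

```cpp
#include <cstddef>
#include <vector>

// Bump allocator over one block acquired at startup. Allocation
// after that point only ever touches this pre-reserved storage.
class Arena {
public:
    explicit Arena(std::size_t bytes) : storage_(bytes), used_(0) {}

    // Returns nullptr instead of throwing when the arena is exhausted,
    // so the caller can degrade gracefully.
    void* allocate(std::size_t bytes) {
        if (used_ + bytes > storage_.size()) return nullptr;
        void* p = storage_.data() + used_;
        used_ += bytes;
        return p;
    }

    std::size_t remaining() const { return storage_.size() - used_; }

private:
    std::vector<unsigned char> storage_;  // the one big up-front block
    std::size_t used_;
};
```

(A real game allocator would also need alignment handling and some way to reset or free; this only shows the pre-allocation idea.)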

Promit    13246
Quote:
Original post by Deyja
The chances of actually getting a new failure are slim in the first place.
It depends. Severe heap fragmentation plus a large allocation can cause this to happen; it's frequently a problem for server type applications that run a long time. You can get a similar effect by simply trying to allocate a huge block of memory, like 2 GB or something. It will fail, even if your system has enough physical memory to handle it.
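That failure mode is easy to probe with a small helper (hypothetical, not from the thread), using nothrow new so a failed request returns nullptr instead of throwing: a small block succeeds, while an absurdly large contiguous request fails even on a machine with plenty of free physical memory.

```cpp
#include <cstddef>
#include <new>

// Probe whether one contiguous block of `bytes` can be allocated.
// new (std::nothrow) yields nullptr on failure instead of throwing.
bool can_allocate(std::size_t bytes) {
    char* p = new (std::nothrow) char[bytes];
    if (p == nullptr) return false;
    delete[] p;
    return true;
}
```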

grill8    148
Hello all,

Thank you.

So it sounds like, for a medium-sized, non-server-style program (a 3D FPS game) that does not allocate more memory than is mission critical, you don't actually need to worry about handling new failures / bad_alloc / new_handler / set_new_handler and such?

Perhaps, as mentioned, provide some sort of mechanism that downgrades texture sizes, resources, poly count, etc. if the frame rate stays low for a relatively long duration (10 seconds or so)?

Thank you for your help.

Jeremy (grill8)

dave    2187
Your application should determine the capabilities of the computer before it gets into runtime, i.e. during application load time. It would be extremely slow to reload all of your textures at runtime to accommodate only just realising you don't have enough memory.

Dave

Promit    13246
Quote:
Original post by Dave
Your application should determine the capabilities of the computer before it gets into runtime, ie during application load time.
Won't help. Out of memory errors deal purely with available virtual memory, which is a fixed quantity and has nothing to do with the computer.

Nathan Baum    1027
Quote:
Original post by Promit
Quote:
Original post by Deyja
The chances of actually getting a new failure are slim in the first place.
It depends. Severe heap fragmentation plus a large allocation can cause this to happen; it's frequently a problem for server type applications that run a long time. You can get a similar effect by simply trying to allocate a huge block of memory, like 2 GB or something. It will fail, even if your system has enough physical memory to handle it.

That's a hardcoded limit in Windows: a process can't allocate more than 30 bytes less than 2GB unless it creates a new heap (with HeapCreate).

Presumably, this is to stop you shooting yourself in the foot by accidentally allocating all available virtual memory (although 30 bytes less than 2GB is still most of it on 32-bit systems).

Combined with heap fragmentation, this can cause allocations to fail even if you have enough RAM and swap to cover your needs.

Promit    13246
Quote:
Original post by Nathan Baum
That's a hardcoded limit in Windows: a process can't allocate more than 30 bytes less than 2GB unless it creates a new heap (with HeapCreate).
That sounds odd to me. Your complete process virtual memory only spans the lower 2 GB of the address space. Out of that you have multiple pieces sliced out, like the code and data segments which have been placed in the middle, and the stack which is towards the top. Allocating 2 GB or 2 GB - 30 or anything like it should be doomed to fail for that reason.

(Obviously not relevant to 64 bit systems.)

Nathan Baum    1027
Quote:
Original post by Promit
Quote:
Original post by Dave
Your application should determine the capabilities of the computer before it gets into runtime, ie during application load time.
Won't help. Out of memory errors deal purely with available virtual memory, which is a fixed quantity and has nothing to do with the computer.

That's not true.

Firstly, as noted above, on Windows there's a limit of just under 2GB which you have to disable to allocate more.

Secondly, you don't have to consume all available virtual memory to run out of memory. If there's an upper limit on swap usage, then the maximum possible amount of memory a process could use is the size of RAM plus the size of swap. The OS would know this, and could refuse to allocate more memory than could possibly ever be available.

Thirdly, even if swap can grow to fill the entire disk, that doesn't mean a program can't "determine the capabilities of the computer" and decide that since you have, say, 512MB of RAM, it'll only use 384MB of storage.

Promit    13246
Quote:
Original post by Nathan Baum
Secondly, you don't have to consume all available virtual memory to run out of memory. If there's an upper limit on swap usage, then the maximum possible amount of memory a process could use is the size of RAM plus the size of swap. The OS would know this, and could refuse to allocate more memory than could possibly ever be available.
Which is why I was careful to use the word "computer" rather than "system". It's true that you can run out of actual physical memory, but the somewhat complex memory behavior in Windows (or any modern desktop/server OS) makes it infeasible to try to alter your application's behavior based on that, since the entire set of processes on the system, as well as the kernel itself, affect those limitations.

Nathan Baum    1027
Quote:
Original post by Promit
Quote:
Original post by Nathan Baum
That's a hardcoded limit in Windows: a process can't allocate more than 30 bytes less than 2GB unless it creates a new heap (with HeapCreate).
That sounds odd to me. Your complete process virtual memory only spans the lower 2 GB of the address space.

It depends. Usually only 2GB is available to the process, but Windows can be configured to make 3GB available.

grill8    148
Hello all,

Thank you for the information (I really should know more about hardware and the underlying systems to better myself as a programmer).

Can someone please verify though the accuracy of the first paragraph of my second post?

Thank you,
Jeremy

ApochPiQ    23064
It depends on the application. For something like a game, I wouldn't worry about it. If you're writing a server back-end or something where uptime is an important factor, it's worth at least trying to free as much memory as possible when you get a bad_alloc exception, and then recording some kind of error that notes that a memory allocation failed. Knowing that you failed an allocation somewhere is much better than just seeing your code die and having no clue why.

Obviously, if you want to be conscientious, you could do that in a game as well. It isn't hard if you use RAII well, and in the unlikely case your game ever barfs in front of a player, they'll appreciate at least knowing something about what happened rather than just having the game disappear silently. It's up to you, though, really. I will say that outside of highly uptime-critical embedded applications, I've never been particularly bothered about handling allocation failures.
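For the conscientious route, one common pattern (sketched here with made-up names and an arbitrary reserve size, not anything from this thread) is to park an emergency reserve at startup and install a new-handler that releases it, so the logging path has memory to work with when an allocation fails.

```cpp
#include <cstdio>
#include <new>

static char* g_reserve = nullptr;   // emergency cushion, freed on OOM

void out_of_memory_handler() {
    if (g_reserve != nullptr) {
        delete[] g_reserve;         // release the cushion so logging
        g_reserve = nullptr;        // and the retried allocation can work
        std::fputs("ERROR: memory allocation failed\n", stderr);
        return;                     // new will retry the failed allocation
    }
    std::set_new_handler(nullptr);  // cushion already spent: let new throw
}

void install_oom_reserve() {
    g_reserve = new char[64 * 1024];          // 64 KiB, arbitrary size
    std::set_new_handler(out_of_memory_handler);
}
```

After installation, the first failed new frees the cushion, logs the failure, and retries once; if that retry also fails, the handler uninstalls itself and new throws std::bad_alloc as usual.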

