Can I get Windows 7 to overcommit memory?



#1 valderman   Members   -  Reputation: 512

Posted 26 March 2011 - 04:32 AM

So I have this beefy machine with four gigs of RAM running 64-bit Windows 7. Unfortunately, I don't have the disk space to match, so I figure that with that much RAM I should be able to run without a page file, to conserve a bit. Also, having a program die because of a failed allocation tends to be preferable to having the system go into 15 minutes of frantic paging activity, which usually doesn't end until the memory hog is killed anyway.

This has worked out beautifully on Linux, which I used to use as my primary OS until my other disk died and took my Linux partition with it. Unfortunately, no such luck in Windows 7. I get constant nag screens about the system being low on memory, and occasionally programs commit suicide because of failed allocations. Turning on the page file solves the problem, but the file is barely used at all. This is of course due to the difference in memory handling between W7 and Linux; W7 requires that every allocated page of memory actually exists. Linux, on the other hand, allows programs to allocate quite a bit more than what's actually available (and then goes on a killing spree if programs actually touch more memory than what's available), the rationale being that programs quite frequently allocate huge chunks of memory without using anywhere near all of it. Java is a great example of this, and pretty much every program that uses a custom memory allocator, I would presume.
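
To make the difference concrete, here's a minimal sketch (hedged: the Linux outcome depends on the overcommit policy in effect, and the exact sizes are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t total = (size_t)8 << 30;      /* ask for 8 GB of heap */
        char *p = malloc(total);

        /* With Linux-style overcommit this can succeed even without 8 GB of
           RAM + swap; on Windows the full 8 GB is charged against RAM + page
           file up front, so on a 4 GB machine with no page file it fails here. */
        if (!p) {
            puts("allocation refused up front");
            return 1;
        }

        memset(p, 1, (size_t)64 << 20);      /* actually touch only 64 MB of it */
        puts("got 8 GB of address space, touched 64 MB");
        free(p);
        return 0;
    }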

Now, I think it's pretty silly to have to give up six gigs of HD space just because some programs like to lie about how much memory they're going to use, so I was wondering if there's any way to get Windows 7 to overcommit memory the way Linux does?


#2 Nanoha   Members   -  Reputation: 300

Posted 26 March 2011 - 04:47 AM

I read something a while back about this and the general opinion was that it's best left on. That said, if you do need more space, why not just turn it down? You don't have to use 6 GB. Tell it to use less; I checked mine and it's only 4 GB (it does say recommended 6 GB, though).

#3 Endar   Members   -  Reputation: 668

Posted 26 March 2011 - 05:05 AM

I guess this isn't the answer that you're looking for, but why not buy a bigger hard drive? Even if you're a bit hard up for money, a 1 TB hard drive isn't that expensive, maybe $100 AUD or less.

#4 Antheus   Members   -  Reputation: 2401

Posted 26 March 2011 - 07:47 AM

Unfortunately, no such luck in Windows 7. I get constant nag screens about the system being low on memory, and occasionally programs commit suicide because of failed allocations.

I've been running without a page file on many systems since XP times.

I've encountered an out-of-memory problem precisely once, when a version of Flash had a nasty memory leak that gobbled up gigabytes.

Unless there is something fishy going on, the explanation is quite simply that you don't have enough RAM.

some programs like to lie

They don't lie, they don't know.

If you're a heavy user, you'll simply need to buy more RAM. And is this a 64-bit version of Windows?

I read something a while back about this and the general opinion was that it's best left on. That said, if you do need more space, why not just turn it down? You don't have to use 6 GB. Tell it to use less; I checked mine and it's only 4 GB (it does say recommended 6 GB, though).

It's a myth. There is nothing that requires the system to scribble to disk.

Having a page file can help in some corner cases, but when you run out of memory, you run out. There is a point of exhaustion for everyone; it just turns out that having 4+6 GB is usually enough.

The benefit of a big page file is also often overstated. 6 GB takes around 4 minutes to write fully; RAM needs about a second to do the same. So when an application hits the page file seriously, things slow down until they're utterly unusable.

The reason one would want to turn it off is, perhaps, to reduce disk activity (on a laptop it lets the system spin the disk down), or to prevent wear on an SSD. Unless you're doing some really heavy image manipulation or some strange computations, 8 GB is enough these days; 4 GB will typically work, but may be a bit tight.

#5 Sneftel   Senior Moderators   -  Reputation: 1781

Posted 26 March 2011 - 10:18 AM

W7 requires that every allocated page of memory actually exists.

That's not true. VirtualAlloc can reserve a virtual address range without actually allocating physical RAM/pagefile space for it. The problem is simply that programmers WANT a huge pile of just-in-case RAM already there for them; it makes it faster to use when they actually decide to use it.
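
For reference, a minimal sketch of that reserve/commit split (Win32; error handling omitted, sizes arbitrary):

    #include <windows.h>

    int main(void)
    {
        /* Reserve 1 GB of address space: no RAM or page-file charge is taken yet. */
        char *base = (char *)VirtualAlloc(NULL, (SIZE_T)1 << 30,
                                          MEM_RESERVE, PAGE_NOACCESS);

        /* Commit only the first 64 KB once it's actually needed; this is the
           point where the commit charge against RAM + page file is taken. */
        VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE);

        base[0] = 42;   /* the physical page itself is faulted in on first touch */

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }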

#6 Sirisian   Crossbones+   -  Reputation: 1915

Posted 26 March 2011 - 03:51 PM

You should probably wait until you have 8 GB or more to turn off the page file. I have it off on my laptop, since I have 16 GB of memory, which seems to be an amount where a page file just seems silly. I'm still not convinced by the arguments to keep it on. I keep thinking those discussions are aimed at people wanting to turn it off with only 2 GB of memory.

#7 JDX_John   Members   -  Reputation: 292

Posted 27 March 2011 - 01:51 AM

4 GB isn't a beefy machine for W7-64. If you don't want to get a bigger HD (or consider a second HD), get more RAM.



#8 valderman   Members   -  Reputation: 512

Posted 27 March 2011 - 03:02 AM

I guess this isn't the answer that you're looking for, but why not buy a bigger hard drive? Even if you're a bit hard up for money, a 1 TB hard drive isn't that expensive, maybe $100 AUD or less.

It's not that much of a practical problem really; it's more of an annoyance.


Unfortunately, no such luck in Windows 7. I get constant nag screens about the system being low on memory, and occasionally programs commit suicide because of failed allocations.

I've been running without a page file on many systems since XP times.

I've encountered an out-of-memory problem precisely once, when a version of Flash had a nasty memory leak that gobbled up gigabytes.

Unless there is something fishy going on, the explanation is quite simply that you don't have enough RAM.

I used to do that under XP too, and it worked just fine. However, everything I've read indicates that Windows requires all allocated memory pages to be backed by physical storage, so I sort of wonder how it did work. Perhaps programs were less prone to pre-allocate huge (relative to the memory sizes of the time) chunks of memory back then.

some programs like to lie

They don't lie, they don't know.

They aren't telling the truth; isn't that the definition of a lie, regardless of the reason?

If you're a heavy user, you'll simply need to buy more RAM. And is this a 64-bit version of Windows?

A freshly started copy of Eclipse with one three-class project loaded plus Firefox is enough to cause the problem. I don't think they and W7 between them actually touch enough pages to cause memory to run out. Of course buying more RAM is a solution, but huge chunks of it are going to remain unused "just in case," which seems like a waste of perfectly good memory to me.



W7 requires that every allocated page of memory actually exists.

That's not true. VirtualAlloc can reserve a virtual address range without actually allocating physical RAM/pagefile space for it. The problem is simply that programmers WANT a huge pile of just-in-case RAM already there for them; it makes it faster to use when they actually decide to use it.

And physical storage gets allocated on first access to a page? That's the behavior I want to enforce for all allocations, but I guess Windows wouldn't let me? I'm a bit sceptical of the actual performance hit incurred by overcommitting, as you'll only get a page fault the first time you hit a previously untouched page of memory; it seems more like a case of "my application is more important than the rest of the system" than actual performance concerns to me.

#9 Antheus   Members   -  Reputation: 2401

Posted 27 March 2011 - 06:59 AM

They aren't telling the truth; isn't that the definition of a lie, regardless of the reason?


How much memory will Word need? ~200 MB. Unless the user happens to load a 10,000x10,000 bitmap and insert it.

A freshly started copy of Eclipse with one three-class project loaded plus Firefox is enough to cause the problem. I don't think they and W7 between them actually touch enough pages to cause memory to run out.


Then there's something wrong here.

A newly started Eclipse is around 64 MB; it will grow arbitrarily from there. A new Firefox is ~90 MB. There is no way to exhaust memory using these.

Open Task Manager, or better yet Process Explorer, and see what's consuming all the memory.

I've written in another thread about juggling Java applications exceeding 4 GB of memory with no problem. There must be something else running: services, perhaps some memory-leaking application, antivirus.

Of course buying more RAM is a solution, but huge chunks of it are going to remain unused "just in case," which seems like a waste of perfectly good memory to me

Then turn on the page file. It's precisely what it was made for - the "just in case".

And physical storage gets allocated on first access to a page? That's the behavior I want to enforce for all allocations, but I guess Windows wouldn't let me? I'm a bit sceptical of the actual performance hit incurred by overcommitting, as you'll only get a page fault the first time you hit a previously untouched page of memory; it seems more like a case of "my application is more important than the rest of the system" than actual performance concerns to me.

Except that an application doesn't allocate all the memory it will ever need at start.

FF runs for 1 hour, using 150 MB and not a byte more. Then the user goes to YouTube. Suddenly, FF needs to load a plugin, the plugin needs to allocate driver resources for hardware acceleration, Windows needs to allocate whatever desktop handles, then images need to be loaded, which causes several calls to VirtualAlloc - but some images are of unknown size, so FF uses a heuristic and allocates one 100 kB block for the file and one 4 MB block for the unpacked image. Suddenly, FF uses 210 MB for the process, 20 MB for the plugin sandbox and 60 MB in kernel and driver resources.

Applications simply aren't written in a way that lets them know how much they will need. Except for the Java VM, which can be limited to min/max values.
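
(For reference, the min/max limits in question are the standard JVM heap flags, e.g. java -Xms64m -Xmx256m SomeApp; the sizes here are only illustrative.)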

But each time VirtualAlloc commits memory, the available commit charge is checked.
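
A small sketch of how that check surfaces to a program, assuming plain Win32: GlobalMemoryStatusEx reports the commit limit (RAM + page file) and how much of it is still available, and a MEM_COMMIT that would push the charge past that limit is exactly the allocation that fails on a box with no page file.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms = { sizeof(ms) };
        GlobalMemoryStatusEx(&ms);

        /* ullTotalPageFile / ullAvailPageFile are the system commit limit and the
           commit still available (RAM + page file together), not the page file alone. */
        printf("commit limit: %llu MB, commit available: %llu MB\n",
               (unsigned long long)(ms.ullTotalPageFile >> 20),
               (unsigned long long)(ms.ullAvailPageFile >> 20));
        return 0;
    }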

#10 JDX_John   Members   -  Reputation: 292

Posted 27 March 2011 - 09:17 AM

Considering the cost per GB of a hard disk, all this messing about seems a little pointless.



#11 zedz   Members   -  Reputation: 291

Posted 27 March 2011 - 11:22 AM

To those advocating buying a larger HD: yes, that would be great. I recently wanted one, e.g. I thought I'd go and grab a 1 TB one, but had to settle for 200 GB since that was the only internal one they had in stock in my city (and I was lucky to find that). There were ~30 different external HDs up to 1.5 TB, but just the single 200 GB internal HD. Absolutely nuts.

http://search.dse.co.nz/search?p=KK&srid=S2-2&lbc=dse&ts=dse&pw=internal%20harddisk&uid=208446881&isort=score&w=Hard%20Disk%20Drive&rk=4&sessionid=4d8f70e705b2bdfc273fc0a87f3b0758

#12 valderman   Members   -  Reputation: 512

Posted 29 March 2011 - 12:33 PM


They aren't telling the truth; isn't that the definition of a lie, regardless of the reason?


How much memory will Word need? ~200 MB. Unless the user happens to load a 10,000x10,000 bitmap and insert it.

I'm quite aware that the programs don't know how much memory they're going to use. Regardless, allocating a huge chunk of memory just in case, rather than allocating it as needed or in smaller chunks, is rather impolite when the OS doesn't overcommit memory.

A freshly started copy of Eclipse with one three-class project loaded plus Firefox is enough to cause the problem. I don't think they and W7 between them actually touch enough pages to cause memory to run out.


Then there's something wrong here.

A newly started Eclipse is around 64 MB; it will grow arbitrarily from there. A new Firefox is ~90 MB. There is no way to exhaust memory using these.

Yes, that's quite my point. Java is particularly nasty about allocating huge chunks of memory on startup regardless of how much will actually be needed.

Open Task Manager, or better yet Process Explorer, and see what's consuming all the memory.

W7 claims to use something like a gigabyte or so for cache. That's great, but you'd think the first thing to do when memory is running low would be to drop a few hundred megabytes there. Apparently not.

I've written in another thread about juggling Java applications exceeding 4 GB of memory with no problem. There must be something else running: services, perhaps some memory-leaking application, antivirus.

Nothing that shows in Task Manager, at least; I've tallied up everything manually, several times, and it always comes to about half of what Task Manager reports as used. Most likely the per-process statistics are based on touched memory pages, while the figure for total memory used is based on committed (but possibly untouched) pages.

Of course buying more RAM is a solution, but huge chunks of it are going to remain unused "just in case," which seems like a waste of perfectly good memory to me

Then turn on the page file. It's precisely what it was made for - the "just in case".

Not "just in case I need this much memory," but "just in case something with a custom memory allocator actually uses this much memory, which I know it will not."

And physical storage gets allocated on first access to a page? That's the behavior I want to enforce for all allocations, but I guess Windows wouldn't let me? I'm a bit sceptical of the actual performance hit incurred by overcommitting, as you'll only get a page fault the first time you hit a previously untouched page of memory; it seems more like a case of "my application is more important than the rest of the system" than actual performance concerns to me.

Except that an application doesn't allocate all the memory it will ever need at start.

FF runs for 1 hour, using 150 MB and not a byte more. Then the user goes to YouTube. Suddenly, FF needs to load a plugin, the plugin needs to allocate driver resources for hardware acceleration, Windows needs to allocate whatever desktop handles, then images need to be loaded, which causes several calls to VirtualAlloc - but some images are of unknown size, so FF uses a heuristic and allocates one 100 kB block for the file and one 4 MB block for the unpacked image. Suddenly, FF uses 210 MB for the process, 20 MB for the plugin sandbox and 60 MB in kernel and driver resources.

Applications simply aren't written in a way that lets them know how much they will need. Except for the Java VM, which can be limited to min/max values.

But each time VirtualAlloc commits memory, the available commit charge is checked.

What does this have to do with the performance of deferred allocations? The only performance hit you get is a single page fault per memory page, incurred on first write. If VirtualAlloc requires you to call it time and again, then it doesn't do the same thing.
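
For what it's worth, there seems to be no system-wide switch for this, but a single application can approximate Linux-style lazy commit by reserving address space and committing pages from an access-violation handler. A hedged sketch (Win32; minimal error handling, and the 1 GB size and page-at-a-time commit are arbitrary choices):

    #include <windows.h>
    #include <stdio.h>

    static char  *g_base;
    static SIZE_T g_size = (SIZE_T)1 << 30;   /* 1 GB of reserved address space */

    /* On the first touch of an uncommitted page inside our region,
       commit just that page and retry the faulting instruction. */
    static LONG CALLBACK CommitOnFault(EXCEPTION_POINTERS *info)
    {
        if (info->ExceptionRecord->ExceptionCode != EXCEPTION_ACCESS_VIOLATION)
            return EXCEPTION_CONTINUE_SEARCH;

        char *addr = (char *)info->ExceptionRecord->ExceptionInformation[1];
        if (addr < g_base || addr >= g_base + g_size)
            return EXCEPTION_CONTINUE_SEARCH;

        if (!VirtualAlloc(addr, 1, MEM_COMMIT, PAGE_READWRITE))
            return EXCEPTION_CONTINUE_SEARCH;  /* genuinely out of commit */

        return EXCEPTION_CONTINUE_EXECUTION;
    }

    int main(void)
    {
        g_base = (char *)VirtualAlloc(NULL, g_size, MEM_RESERVE, PAGE_NOACCESS);
        AddVectoredExceptionHandler(1, CommitOnFault);

        /* Touch two widely separated spots: each costs one fault plus one page of commit. */
        g_base[0] = 1;
        g_base[(SIZE_T)100 * 4096] = 2;

        printf("committed on demand\n");
        return 0;
    }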



