Can I get Windows 7 to overcommit memory?

To those advocating buying a larger HD: yes, that would be great. I recently wanted one; I figured I'd go and grab a 1 TB drive, but I had to settle for 200 GB, since that was the only internal one they had in stock in my city (and I was lucky to find that). There were ~30 different external HDs of up to 1.5 TB on offer, but just the single 200 GB internal drive. Absolutely nuts.

http://search.dse.co.nz/search?p=KK&srid=S2-2&lbc=dse&ts=dse&pw=internal%20harddisk&uid=208446881&isort=score&w=Hard%20Disk%20Drive&rk=4&sessionid=4d8f70e705b2bdfc273fc0a87f3b0758

[quote]
[quote name='valderman' timestamp='1301216527' post='4790915']They aren't telling the truth; isn't that the definition of a lie, regardless of the reason?[/quote]
How much memory will Word need? ~200MB. Unless the user happens to load a 10,000x10,000 bitmap and insert it.
[/quote]
I'm quite aware that the programs don't know how much memory they're going to use. Regardless, allocating a huge chunk of memory just in case, rather than allocating it as needed or in smaller chunks, is rather impolite when the OS doesn't overcommit memory.
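For what it's worth, Win32 does let a program be polite about this: VirtualAlloc distinguishes between reserving address space (which costs nothing physical) and committing backing store. A minimal sketch of the pattern, with arbitrary sizes and error handling omitted:

[code]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Reserve 256 MB of address space. This consumes no physical
       memory and no page file, only virtual address range. */
    SIZE_T reserved = 256 * 1024 * 1024;
    char *base = VirtualAlloc(NULL, reserved, MEM_RESERVE, PAGE_NOACCESS);

    /* Commit only the first 1 MB. Only now does the commit charge grow;
       more can be committed later, as actually needed. */
    SIZE_T committed = 1024 * 1024;
    VirtualAlloc(base, committed, MEM_COMMIT, PAGE_READWRITE);

    base[0] = 42; /* fine: this page is committed */
    printf("reserved %Iu bytes, committed %Iu bytes\n", reserved, committed);

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
[/code]

An allocator built on this grows its committed region as the application actually uses memory, which is exactly the "as needed" behaviour, without relying on overcommit.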

[quote]
[quote]A freshly started copy of Eclipse with one three-class project loaded, plus Firefox, is enough to cause the problem. I don't think they and W7 between them actually touch enough pages to cause memory to run out.[/quote]
Then there's something wrong here.

A new Eclipse is around 64MB, and it will grow arbitrarily. A new Firefox is ~90MB. There is no way to exhaust memory using these.
[/quote]
Yes, that's quite my point. Java is particularly nasty about allocating huge chunks of memory on startup, regardless of how much will actually be needed.
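At least the JVM's startup appetite can be tamed: -Xms sets the initial heap and -Xmx the maximum. For Eclipse specifically, these can go in eclipse.ini after the -vmargs line (the values here are just an example):

[code]
-vmargs
-Xms64m
-Xmx256m
[/code]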

[quote]Open Task Manager, or better yet, Process Explorer, and see what's consuming all the memory.[/quote]
W7 claims to use something like a gigabyte or so for cache. That's great, but you'd think the first thing to do when memory is running low would be to drop a few hundred megabytes of that cache. Apparently not.

[quote]I've written in another thread about juggling Java applications exceeding 4GB of memory with no problem. There must be something else running: services, perhaps some memory-leaking application, or antivirus.[/quote]
Nothing that shows up in Task Manager, at least; I've tallied everything up manually, several times, and it always comes to about half of what Task Manager reports as used. Most likely the per-process statistics are based on touched memory pages, while the figure for total available memory is based on non-allocated pages.
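That kind of discrepancy is usually the gap between per-process working sets (what Task Manager's default columns show) and the system-wide commit charge (what counts against RAM plus page file). A quick Win32 sketch that prints both views, to make the two numbers explicit (link with psapi.lib; error handling omitted):

[code]
#include <windows.h>
#include <psapi.h>   /* GetProcessMemoryInfo; link with psapi.lib */
#include <stdio.h>

int main(void)
{
    /* System-wide view: physical memory and remaining commit. */
    MEMORYSTATUSEX ms = { sizeof(ms) };
    GlobalMemoryStatusEx(&ms);

    /* Per-process view: working set vs. committed (pagefile-backed) bytes. */
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));

    printf("system: %llu MB physical free, %llu MB commit still available\n",
           ms.ullAvailPhys >> 20, ms.ullAvailPageFile >> 20);
    printf("process: %Iu MB working set, %Iu MB committed\n",
           pmc.WorkingSetSize >> 20, pmc.PagefileUsage >> 20);
    return 0;
}
[/code]

Summing the working-set column across processes will undershoot the real commit charge whenever programs have committed far more than they have touched, which matches what you're describing.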

[quote]
[quote]Of course buying more RAM is a solution, but huge chunks of it are going to remain unused "just in case," which seems like a waste of perfectly good memory to me.[/quote]
Then turn on the page file. It's precisely what it was made for - the "just in case".
[/quote]
Not "just in case I need this much memory," but "just in case something with a custom memory allocator actually uses this much memory, which I know it will not."

[quote]
[quote]And physical storage gets allocated on first access to a page? That's the behavior I want to enforce for all allocations, but I guess Windows wouldn't let me? I'm a bit sceptical of the actual performance hit incurred by overcommitting, as you'll only get a page fault the first time you hit a previously untouched page of memory; it seems more like a case of "my application is more important than the rest of the system" than an actual performance concern to me.[/quote]
Except that an application doesn't allocate all the memory it will ever need at start.

FF runs for an hour, using 150MB and not a byte more. Then the user goes to YouTube. Suddenly, FF needs to load a plugin, the plugin needs to allocate driver resources for hardware acceleration, Windows needs to allocate whatever desktop handles, and then images need to be loaded, which causes several calls to VirtualAlloc - but some images are of unknown size, so FF uses a heuristic and allocates one 100KB block for the file and one 4MB block for the unpacked image. Suddenly, FF uses 210MB per process, 20MB for the plugin sandbox and 60MB in kernel and driver resources.

Applications simply aren't written in a way that lets them know in advance how much they will need. Except for the Java VM, which can be limited to min/max values.

But each time VirtualAlloc is called, memory is checked.
[/quote]
What does this have to do with the performance of deferred allocations? The only performance hit you get is a single page fault per memory page, incurred on first write (see the timing sketch below). If VirtualAlloc requires you to call it time and again, then it doesn't do the same thing.
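As for the cost of those first-touch faults, it's easy to put a rough number on them: commit a block, then time a first pass (one demand-zero fault per page) against a second pass over the now-resident pages. A sketch with an arbitrary 256MB block:

[code]
#include <windows.h>
#include <stdio.h>

/* Write one byte per 4 KB page; volatile keeps the writes from
   being optimised away. Returns elapsed seconds. */
static double touch(volatile char *p, SIZE_T bytes)
{
    LARGE_INTEGER f, t0, t1;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&t0);
    for (SIZE_T i = 0; i < bytes; i += 4096)
        p[i] = 1;
    QueryPerformanceCounter(&t1);
    return (double)(t1.QuadPart - t0.QuadPart) / (double)f.QuadPart;
}

int main(void)
{
    SIZE_T bytes = 256 * 1024 * 1024;
    volatile char *p = VirtualAlloc(NULL, bytes, MEM_RESERVE | MEM_COMMIT,
                                    PAGE_READWRITE);

    /* The first pass takes the demand-zero page faults; the second doesn't. */
    printf("first pass  (faulting): %.3f s\n", touch(p, bytes));
    printf("second pass (resident): %.3f s\n", touch(p, bytes));

    VirtualFree((void *)p, 0, MEM_RELEASE);
    return 0;
}
[/code]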
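And on "I guess Windows wouldn't let me": you can get fairly close to enforcing commit-on-first-touch yourself by reserving address space up front and committing pages lazily from an exception handler. A sketch of the idea only, not production code (a real handler would also check that the faulting address lies inside its own reservation):

[code]
#include <windows.h>
#include <stdio.h>

static LONG WINAPI CommitOnDemand(EXCEPTION_POINTERS *info)
{
    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_ACCESS_VIOLATION) {
        /* For access violations, ExceptionInformation[1] is the address. */
        void *addr = (void *)info->ExceptionRecord->ExceptionInformation[1];
        /* Commit the page containing the fault; if it was merely reserved,
           this succeeds and the faulting instruction is retried. */
        if (VirtualAlloc(addr, 1, MEM_COMMIT, PAGE_READWRITE))
            return EXCEPTION_CONTINUE_EXECUTION;
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

int main(void)
{
    AddVectoredExceptionHandler(1, CommitOnDemand);

    /* Reserve 1 GB without committing a single page. */
    char *p = VirtualAlloc(NULL, (SIZE_T)1 << 30, MEM_RESERVE, PAGE_NOACCESS);

    p[0] = 1;            /* faults, page gets committed, write is retried */
    p[100 * 4096] = 2;   /* same again on a different page */
    printf("touched two pages; only those two are committed\n");

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
[/code]

The commit charge then grows page by page as memory is actually written, which is essentially the overcommit behaviour you're after, just implemented in user space.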

