is virtual memory obsolete?

Started by
10 comments, last by japro 11 years, 11 months ago
Let's say you had a 32-bit machine with 4 gigs of ram. Wouldn't virtual memory slow you down? (By "virtual memory", I mean swapping data in and out of the hard drive). As I understand it, when you load a program, the OS will only load the first page of the EXE. If the program references code or data that is outside that page, it will trigger a page fault, and the OS will load that page from the hdd. But if you have 4 gigs of RAM, why even bother with paging?

Has anyone heard of any OSes where paging can be disabled? I would imagine someone would have made a linux mod like this as soon as it became affordable to have 4 gigs of RAM. Wouldn't a system like this with no paging be lightning fast?


That's not what virtual memory is. Virtual memory is the OS giving your process (and only your process) its own virtual address space to work with; the translation between virtual addresses and physical/paged memory is handled by the OS's virtual memory manager (it is possible to have some control over how the virtual memory manager handles your process's memory, for instance under Windows using VirtualAlloc/VirtualLock/etc...).
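
As a rough illustration of that last point, here's a minimal sketch (Windows-only, sizes and error handling purely for demonstration) of asking the virtual memory manager for pages directly and then pinning them in physical RAM:

[code]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 1 << 20; /* 1 MiB, arbitrary */

    /* Reserve and commit a region of this process's virtual address space.
       The returned pointer is a virtual address; the OS decides which
       physical pages back it, and when. */
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p) { printf("VirtualAlloc failed: %lu\n", GetLastError()); return 1; }

    /* Ask for the pages to be locked into physical RAM so they can't be
       paged out. This can fail if it exceeds the process working-set size. */
    if (!VirtualLock(p, size))
        printf("VirtualLock failed: %lu (working-set limit?)\n", GetLastError());

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
[/code]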

tl;dr: page file != virtual memory (but the concept of virtual memory allows the OS to fall back on a page file if actual physical memory is running low).

And yes, paging can be disabled in any OS (Windows, Linux, etc...), but it is very inadvisable to do so. OSes will usually not touch the page file when there is plenty of memory available; but if you explicitly disable paging and the OS does run out of memory (say you were coding a little program and accidentally allocated too much), it will have no fallback and will die on you (app crash/blue screen/other) instead of using the page file. And, of course, if the page file is active but not being used, the performance cost is negligible.

Really, these days, with machines having 8GB+ of RAM, you'd think paging was useless, and in most cases it isn't used much, but it is still an emergency fallback for the OS to cope with low-memory situations. If you disable an emergency fallback, you have to accept the consequences.

PS: of course modern OSes have more than one fallback; this is a highly simplified overview, but it should be enough to see that having a page file isn't bad per se - it only is if it is constantly being used and thrashing your hard drive. An idle page file costs nothing (unless you need the 2-4GB on your hard drive... lol)

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

To expand on what Bacterius said, the OS uses virtual memory to map your program's memory to actual, physical memory. Here's an example:

You're on a MIPS system, and the stack starts at address 0x7fffffff. Now let's say you have two programs, A and B. Both of them think the stack starts at 0x7fffffff, and both of them think they own the stack (neither of them knows about the other process). So what should happen? They'd corrupt each other. So instead, the OS does a little magic. Both programs think they're using the physical RAM, and try to access the stack which should start at 0x7fffffff. But the OS has really put them at two different places in RAM, so A's stack might be at 0x10000000 and B's stack might be at 0x20000000 (I'm making these addresses up), and it uses virtual memory management to map A's and B's memory accesses from the virtual world (both think they're accessing 0x7fffffff) to the physical world (really they're accessing 0x10000000 or 0x20000000).
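
If it helps, here's a toy sketch of that translation with made-up numbers: 4 KiB pages and a flat one-level page table per process (real MMUs use multi-level tables, TLBs, permission bits and so on):

[code]
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* hypothetical per-process page tables: virtual page number -> physical frame */
static uint32_t page_table_A[1u << 20];
static uint32_t page_table_B[1u << 20];

static uint32_t translate(const uint32_t *page_table, uint32_t vaddr)
{
    uint32_t vpn    = vaddr / PAGE_SIZE;         /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;         /* offset within the page */
    return page_table[vpn] * PAGE_SIZE + offset; /* physical address */
}

int main(void)
{
    uint32_t stack_vaddr = 0x7ffffffcu; /* both processes "see" this address */

    /* The OS has mapped that virtual page to different physical frames. */
    page_table_A[stack_vaddr / PAGE_SIZE] = 0x10000000u / PAGE_SIZE;
    page_table_B[stack_vaddr / PAGE_SIZE] = 0x20000000u / PAGE_SIZE;

    printf("A: virtual 0x%08x -> physical 0x%08x\n",
           stack_vaddr, translate(page_table_A, stack_vaddr));
    printf("B: virtual 0x%08x -> physical 0x%08x\n",
           stack_vaddr, translate(page_table_B, stack_vaddr));
    return 0;
}
[/code]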
[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]
If you have more than enough RAM for all your programs, then what makes you think it would even need to page anything out to disk in the first place?
Clearly it can't avoid reading things in from disk to begin with, so it's not doing any more I/O than it would without virtual memory, so it's not really any slower.

Virtual memory has negligible performance impact. In fact, on systems where it can be turned off, turning it off actually makes many things slower. Since apps can't rely on page faults to load in, say, the other half of some file, they load the whole thing up front and then maybe only end up using half of it. That kind of thing. Being able to memory-map files is a huge advantage that is completely gone without VM (see the sketch below).
Generally, RAM usage will be a lot higher with it off too.
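
For example, here's roughly what memory-mapping a file looks like on a POSIX system (the file name is made up and error handling is kept minimal). Nothing is read up front; pages are faulted in only as they're touched:

[code]
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("big_data.bin", O_RDONLY);  /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file, but don't read any of it yet; the kernel pulls
       pages in via page faults as they are first accessed. */
    const unsigned char *data =
        mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching one byte brings in roughly one page, not the whole file. */
    printf("first byte: %d\n", data[0]);

    munmap((void *)data, (size_t)st.st_size);
    close(fd);
    return 0;
}
[/code]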

I used to set up Windows with a fixed-size page file and run pagedfrg.exe on boot to ensure it was contiguous. But I've come to realise that even this was really a waste of time: it made no noticeable difference at all, and in the unlikely event that I reached the page file size ... game over. I don't do that any more.

If you want a really fast system, get a solid-state drive, and money to spare for the replacements when they die.

On my old Mac, I used to use a write-through RAM disk at times. Unlike a normal RAM disk which is RAM only, this wrote everything to file, but all reads came straight from RAM. Once that had mounted and read the file into RAM, THAT was lightning fast! I only wish I knew of such a thing for PCs.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
Corkstalks has touched on an important point here, which is that virtual memory is also a security feature. By isolating processes in this manner you help ensure that (in theory) a misbehaving process can only do damage to itself, and not to other processes or to the OS. Of course things are a whole heap more complex in reality, but without that foundation we would be right back in the bad old days of rogue processes being able to scribble all over memory wherever they wanted (and anyone who started learning C back in those days remembers how hairy development could get - especially when you get your first introduction to pointers).
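
To make that concrete, here's a deliberately broken little C program. With memory protection in place, the stray write below gets this process killed with a segfault/access violation rather than silently trampling someone else's memory (obviously don't run it anywhere you care about):

[code]
#include <stdio.h>

int main(void)
{
    volatile int *stray = (volatile int *)0x1; /* almost certainly unmapped */

    printf("about to write through a bad pointer...\n");
    *stray = 42;  /* the OS catches this; only this process dies */
    printf("never reached\n");
    return 0;
}
[/code]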

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


(By "virtual memory", I mean swapping data in and out of the hard drive).
[/quote]
I believe the OP made it clear they are really talking about paging and not virtualised address space.

I have read that the default kernel settings for Linux are quite aggressive about swapping out memory the kernel detects as unused. For example, if a program has pages for initialisation/cleanup code, or a page containing code for a particular feature you're not using, there is no point keeping that resident when the memory could be better used to cache/buffer frequently accessed files.
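
On Linux, the knob that controls that behaviour is the vm.swappiness sysctl; here's a trivial, Linux-specific sketch of reading its current value from /proc (roughly: how eager the kernel is to swap out idle pages in favour of the file cache):

[code]
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (!f) { perror("fopen"); return 1; }

    int swappiness = 0;
    if (fscanf(f, "%d", &swappiness) == 1)
        printf("vm.swappiness = %d\n", swappiness);

    fclose(f);
    return 0;
}
[/code]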

I have read that the latest Windows kernels are relatively aggressive about trying to cache files too.

It's pretty clear the OP is talking about swapping, not paging or virtual memory.

Swapping is still useful under some circumstances even when a system has metric oodles of RAM.

Some OS kernels (e.g. Linux, Mach, and most Unixes) will use all unallocated pages of RAM for disk buffers, so physical memory is full 100% of the time. Under a normal load this is great and results in remarkably improved disk throughput. Under some circumstances, especially on heavily-loaded servers, better throughput can be gained by tuning various OS parameters to increase swappiness (i.e. the likelihood that a page in an application's virtual memory space will be swapped to backing store) in favour of kernel buffers. This kind of tuning is tricksy, because swapping means increasing physical disk and bus contention, so it's one of the dark arts.

On a home desktop or consumer device, you are unlikely to encounter the sort of situation where swapping will actually improve performance. In fact, on a number of consumer devices I have worked on, we disabled swapping entirely because "disk" was effectively in-core ramdisk so it made no sense. Yes, it was possible to run out of memory, but the Linux OOM killer would take care of that before the system went down.

I use swap on my desktop machine. It gets really sluggish once the kernel starts swapping, but that's usually a sign that an app has a severe problem, and I'd rather have a sluggish working system to use to diagnose and fix the problem than one that makes it impossible to do that.

Stephen M. Webb
Professional Free Software Developer


It's pretty clear the OP is talking about swapping, not paging or virtual memory.
[/quote]
Paging is used to mean swapping pages, not just dividing memory into pages.
The Linux mod is called uClinux, and it was made to target small machines that do not have an MMU.
On the desktop and with hand-helds, virtual memory remains very important.

If you really did mean swapping, using hard-disk space to swap out pages of RAM to make room for other stuff, then you can disable this in Linux by turning off the swap volume (swapoff), and in Windows you can go into the settings, remove the page files from all disks, and reboot.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara

Let's say you had a 32-bit machine with 4 gigs of ram. ... But if you have 4 gigs of RAM, why even bother with paging?


You've made a slight mistake here; assuming we are talking about Windows, your 32-bit machine allows every running process to have a 4-gig address range mapped to it.

Now, in practical terms this means you'll probably not be able to access more than 2 gig in a process (between the various reserved address ranges you find memory vanishing pretty quickly; our build system at work is currently not memory-tuned and tends to die when its working set clears 1.5 gig), but the upshot is that every process can expect to have that 2 gig there for it to use.

So your 4-gig machine would very quickly die if you couldn't page data into and out of RAM as required.
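
If you want to see the address-space limit for yourself, a crude sketch like the one below (built as a 32-bit process) keeps grabbing 16 MiB chunks until allocation fails; on 32-bit Windows it typically gives out around 2 GiB regardless of installed RAM, and on Linux overcommit can skew the number:

[code]
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 16u * 1024 * 1024; /* 16 MiB per allocation */
    size_t total = 0;
    void *p;

    while ((p = malloc(chunk)) != NULL) {
        total += chunk;
        /* Intentionally leaked: we only care how far we get before failure. */
    }

    printf("allocated about %zu MiB before running out of address space\n",
           total / (1024 * 1024));
    return 0;
}
[/code]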

For reference, my home machine, with nothing going on, is currently showing that of the 12 gig installed only 3.8 gig is 'free'; the rest is either used (2.9 gig) or taken up as cache by the OS.
On my work machine last week I was regularly seeing memory load hit 6 to 7 gig with quite a number of programs open (4 instances of VS2010 at one point).
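
Those figures come straight from the OS; on Windows you can query the same numbers with GlobalMemoryStatusEx, roughly like this:

[code]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);

    if (!GlobalMemoryStatusEx(&ms)) {
        printf("GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }

    printf("memory load:        %lu%%\n",    ms.dwMemoryLoad);
    printf("physical total:     %llu MiB\n", ms.ullTotalPhys / (1024 * 1024));
    printf("physical available: %llu MiB\n", ms.ullAvailPhys / (1024 * 1024));
    printf("page file total:    %llu MiB\n", ms.ullTotalPageFile / (1024 * 1024));
    return 0;
}
[/code]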

In short: it doesn't hurt to have it, and you can help the OS by just leaving it on unless you have a technically good reason for turning it off.

This topic is closed to new replies.
