codeToad

is virtual memory obsolete?


Let's say you had a 32-bit machine with 4 gigs of RAM. Wouldn't virtual memory slow you down? (By "virtual memory", I mean swapping data in and out of the hard drive). As I understand it, when you load a program, the OS will only load the first page of the EXE. If the program references code or data outside that page, it triggers a page fault, and the OS loads that page from the hard drive. But if you have 4 gigs of RAM, why even bother with paging?

Has anyone heard of any OSes where paging can be disabled? I would imagine someone would have made a Linux mod like this as soon as it became affordable to have 4 gigs of RAM. Wouldn't a system like this, with no paging, be lightning fast?

Edited by codeToad

To expand on what Bacterius said, the OS uses virtual memory to map your program's memory to actual, physical memory. Here's an example:

You're on a MIPS system, and the stack starts at address 0x7fffffff. Now let's say you have two programs, A and B. Both of them think the stack starts at 0x7fffffff, and both of them think they own the stack (neither of them knows about the other process). So what would happen? They'd corrupt each other. Instead, the OS does a little magic. Both programs think they're using physical RAM directly and try to access the stack starting at 0x7fffffff. But the OS has really put them at two different places in RAM, so A's stack might be at 0x10000000 and B's stack might be at 0x20000000 (I'm making these addresses up), and it uses virtual memory management to map A's and B's memory accesses from the virtual world (both think they're accessing 0x7fffffff) to the physical world (really 0x10000000 or 0x20000000).
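To make that concrete, here's a minimal sketch (assuming a Linux/POSIX box rather than bare MIPS): after fork(), the parent and child print the same virtual address for a local variable, yet each writes its own value, because that one virtual address is mapped to different physical pages in each process.

[code]
/* Sketch: same virtual address, different physical memory after fork(). */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int x = 0;
    pid_t pid = fork();

    if (pid == 0) {
        x = 111;   /* the child's private copy */
        printf("child:  &x = %p, x = %d\n", (void *)&x, x);
    } else {
        wait(NULL);
        x = 222;   /* the parent's private copy, untouched by the child */
        printf("parent: &x = %p, x = %d\n", (void *)&x, x);
    }
    return 0;
}
[/code]

Both lines print the same pointer value, but the two writes never interfere with each other.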

If you have more than enough RAM for all your programs, then what makes you think it would even need to page something out to disk in the first place?
Clearly it can't avoid reading things in from disk to begin with, so it's not doing any more I/O than it would without virtual memory, and it's not really any slower.

Virtual memory has negligible performance impact. In fact, on systems where it can be turned off, turning it off actually makes many things slower. Since apps can't rely on page faults to load in, say, the other half of some file, they load the whole thing up front and then maybe only end up using half of it. That kind of thing. Being able to memory-map files is a huge advantage which is completely gone without VM.
Generally the RAM usage will be a lot higher with it off, too.
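For example (just a sketch, assuming POSIX mmap and a file name, "data.bin", that I'm making up): memory-map the file and only the pages you actually touch get faulted in from disk, instead of reading the whole thing up front.

[code]
/* Sketch: map a file and touch only part of it. Pages are loaded lazily
 * via page faults; pages you never touch never come off the disk. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);          /* made-up file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    const unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Only the page containing byte 0 is faulted in here. */
    printf("first byte: %u\n", p[0]);

    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}
[/code]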

I used to set up Windows with a fixed-size page file and run pagedfrg.exe on boot to ensure it was contiguous. But I've come to realise that even this really was a waste of time, as it made no noticeable difference at all, and in the unlikely event that I reached the page file size ... game over. I don't do that any more.

If you want a really fast system, get a solid-state drive, and money to spare for the replacements when they die.

On my old Mac, I used to use a write-through RAM disk at times. Unlike a normal RAM disk which is RAM only, this wrote everything to file, but all reads came straight from RAM. Once that had mounted and read the file into RAM, THAT was lightning fast! I only wish I knew of such a thing for PCs.

Corkstalks has touched on an important point here, which is that virtual memory can also be a security feature. By isolating processes in this manner you help ensure that (in theory) a misbehaving process can only do damage to itself, and not to other processes or to the OS. Of course things are a whole heap more complex in reality, but without that foundation we would be right back in the bad old days of rogue processes being able to scribble all over memory wherever they wanted (and anyone who started learning C back in those days remembers how hairy development could get - especially on your first introduction to pointers).
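As a trivial illustration (just a sketch): with virtual memory, the worst a stray pointer normally does is kill its own process with a segmentation fault, because the bogus address simply isn't mapped; it can't land in another process's memory.

[code]
/* Sketch: a wild write hits an unmapped virtual address, so the process is
 * killed with SIGSEGV instead of corrupting some other process or the OS. */
#include <stdio.h>

int main(void)
{
    int *wild = (int *)0xDEADBEEF;   /* almost certainly not mapped */
    printf("about to write through a wild pointer...\n");
    *wild = 42;                      /* typically dies here with SIGSEGV */
    printf("never reached\n");
    return 0;
}
[/code]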


(By "virtual memory", I mean swapping data in and out of the hard drive).
[/quote]
I believe the OP made it clear they are really talking about paging and not virtualised address space.

I have read that the default kernel settings for Linux are quite aggressive about swapping out memory it detects as being unused. For example, if a program has pages for initialisation / cleanup code, or a page containing code for a particular feature you're not using, there is no point keeping that resident when the memory could be better used to cache / buffer frequently accessed files.

I have read that the latest Windows kernels are relatively aggressive about trying to cache files too.

It's pretty clear the OP is talking about swapping, not paging or virtual memory.

Swapping is still useful under some circumstances even when a system has metric oodles of RAM.

Some OS kernels (e.g. Linux, Mach, and most Unixes) will use all unallocated pages of RAM for disk buffers, so physical memory is full 100% of the time. Under a normal load this is great and results in remarkably improved disk throughput. Under some circumstances, especially on heavily-loaded servers, better throughput can be gained by tuning various OS parameters to increase swappiness (i.e. the likelihood that a page in an application's virtual memory space will be swapped to backing store) in favour of kernel buffers. This kind of tuning is tricksy, because swapping means increased physical disk and bus contention, so it's one of the dark arts.
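On Linux that knob is exposed as vm.swappiness (set with sysctl or /etc/sysctl.conf). A minimal sketch, just reading the current value back from /proc:

[code]
/* Sketch: read the current vm.swappiness value on a Linux box. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (!f) { perror("fopen"); return 1; }

    int swappiness = 0;
    fscanf(f, "%d", &swappiness);
    fclose(f);

    printf("vm.swappiness = %d\n", swappiness);
    return 0;
}
[/code]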

On a home desktop or consumer device, you are unlikely to encounter the sort of situation where swapping will actually improve performance. In fact, on a number of consumer devices I have worked on, we disabled swapping entirely because "disk" was effectively in-core ramdisk so it made no sense. Yes, it was possible to run out of memory, but the Linux OOM killer would take care of that before the system went down.

I use swap on my desktop machine. It gets really sluggish once the kernel starts swapping, but that's usually a sign that an app has a severe problem, and I'd rather have a sluggish working system to use to diagnose and fix the problem than one that makes it impossible to do that.


It's pretty clear the OP is talking about swapping, not paging or virtual memory.


[/quote]
Paging is used to mean swapping pages, not just dividing memory into pages. Edited by rip-off

The Linux mod is called uClinux, and it was made to target small machines that do not have an MMU.
On the desktop and with hand-helds, virtual memory remains very important.

If you really did mean swapping (using hard-disk space to swap out pages of RAM to make room for other stuff), then you can disable this in Linux by turning off the swap volume (swapoff), and in Windows you can go into the system settings, remove the page files from all disks, and reboot.


Let's say you had a 32-bit machine with 4 gigs of ram. ... But if you have 4 gigs of RAM, why even bother with paging?
[/quote]
You've made a slight mistake here; assuming we are talking about Windows, then your 32-bit machine allows every running process to have a 4gig address range mapped to it.

Now, in practical terms this means you'll probably not be able to access more than 2gig in a process (between the various reserved memory address spaces you find memory vanishing pretty quickly; our build system at work is currently not memory-tuned and tends to die when it clears a 1.5gig working set), but the upshot is that every process can expect to have that 2gig there for it to use.
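A rough way to see that ceiling for yourself (a sketch, not a benchmark): in a 32-bit build, keep allocating 1 MB blocks until malloc gives up, and note how far you get regardless of how much physical RAM is installed.

[code]
/* Sketch: exhaust a 32-bit process's user address space. Compile as 32-bit;
 * malloc fails once the virtual address space runs out, no matter how much
 * physical RAM the machine has. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t total_mb = 0;

    for (;;) {
        void *block = malloc(1024 * 1024);   /* 1 MB at a time, never freed */
        if (block == NULL)
            break;                           /* address space exhausted */
        total_mb++;
    }

    printf("allocated roughly %zu MB before malloc failed\n", total_mb);
    return 0;
}
[/code]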

So your 4gig machine would, very quickly, die if you couldn't page data into and out of RAM as required.

For reference, my home machine, with nothing going on, is currently showing that of the 12gig installed only 3.8gig is 'free'; the rest is either used (2.9gig) or taken up as cache by the OS.
On my work machine last week I was regularly seeing memory load hit 6 to 7gig with quite a number of programs open (4 instances of VS2010 at one point).

In short: it doesn't hurt to have it, and you help the OS by just leaving it on unless you have a good technical reason for turning it off.
