Memory leaks are always contained inside processes, aren't they?

8 comments, last by Servant of the Lord 10 years, 2 months ago

I acknowledge that memory leaks are bad, regardless of whether you create them on purpose or not.

I've noticed a behavior with memory leaks: if a program creates memory leaks, Windows 7's Task Manager will keep showing the program's name and process in the list even after the program has been closed. The same applies to MSBuild for Visual Studio: if the affected program leaves behind tons of unresolved commands, we kill the process in Task Manager.

My guess is that memory leaks are always contained, and reclaimed back into the resource pool once the affected process has been killed by Task Manager, and that there is no way to create a memory leak that persists and permanently damages the RAM in the computer... am I right about this?


struct Object
{
	int hello;
	bool world;
};

void purposefullyCreateMemoryLeak(){
	Object** ptr = new Object*[200];
	for (int i = 0; i < 200; i++){
		ptr[i] = new Object;
		ptr[i]->hello = 20 - i;
		ptr[i]->world = false;
	}
	//Gracefully return, leaving the leak behind: neither the 200
	//Objects nor the pointer array itself is ever deleted.
	return;
}

They say that hackers can cause physical damage to hardware components through software alone. If they could create memory leaks that couldn't be isolated within a process that has quit, it would be a tough day for researchers.


Yes, when the process ends the OS regains control of whatever memory the process asked for.

Most memory leaks are contained within the running process, but some are not, GDI leaks especially. If you run out of handles, well, your OS might start to act/look strange.

Neither of these is damaging to memory, but in the second case you will have to reboot your computer.
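
For concreteness, here is a minimal sketch of what a GDI handle leak looks like (my example, not from this thread, assuming Windows and its default per-process quota of roughly 10,000 GDI objects):

#include <windows.h>

// Hedged sketch of a GDI handle leak: each pen is a GDI object that
// is never released with DeleteObject(), so the process's GDI handle
// count climbs until CreatePen starts failing (the default
// per-process quota is about 10,000 objects).
void leakGdiHandles()
{
	for (int i = 0; i < 20000; ++i)
	{
		HPEN pen = CreatePen(PS_SOLID, 1, RGB(255, 0, 0));
		if (pen == NULL)
			break; // quota exhausted; further GDI allocations now fail

		// Missing: DeleteObject(pen);
	}
}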

I would be very surprised if hackers could actually manage to break things in a computer nowadays. Maybe through some overclocking hack? (I doubt it.)

What 'people say' and what is real are often quite different. Since I can't say for 100% sure that this can't happen, I'll let others step in, but it's doubtful at best.

I remember one virus, CIH, that could do damage to computers by overwriting the BIOS, but that was a long time ago.

A modern OS keeps track of virtual memory allocated to a process through a page table -- a table that maps chunks of process-owned virtual memory addresses into system-owned physical memory addresses. The chunks of virtual memory, called pages, can be moved around in physical memory or even swapped out to disk. When a process ends, the OS destroys its page table, and all of the virtual memory disappears into the æther whence it came. All the memory leaks disappear along with it.
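
To make that concrete, here is a small sketch of my own (assuming the Windows virtual memory API) showing the OS handing out page-granular virtual memory that it tracks through exactly this mechanism:

#include <windows.h>
#include <cstdio>

int main()
{
	SYSTEM_INFO si;
	GetSystemInfo(&si);
	std::printf("Page size: %lu bytes\n", si.dwPageSize);

	// Ask the OS for one page of virtual memory. The OS records the
	// region in this process's page tables; a physical frame is only
	// assigned when the page is first touched.
	void* page = VirtualAlloc(NULL, si.dwPageSize,
	                          MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
	if (page != NULL)
	{
		static_cast<char*>(page)[0] = 42; // first touch faults in a physical frame

		// Deliberately never call VirtualFree(page, 0, MEM_RELEASE):
		// when the process exits, the page table is destroyed and the
		// page is reclaimed anyway -- exactly the cleanup described above.
	}
	return 0;
}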

It's possible, should a hacker gain elevated privileges, to make certain calls into the OS to gain access to physical memory addresses in ways that could cause physical damage to your system. That's an important reason for binding all physical memory accesses within the OS itself, although there can be exploitable bugs in the OS that allow a clever hacker to bypass the built-in protections. Memory leaks are not generally one of those exploits: the exploits are usually found in stack smashes or crafted inputs. About the only attack you can mount with a memory leak is a local DoS (denial of service), bringing the system to its knees as the swapper thrashes.
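
As a hedged illustration of that last point (my sketch, and not something to run on a machine you care about), a leak-driven local DoS is nothing more than an allocation loop that also touches the memory so the OS must commit physical pages:

#include <cstddef>
#include <cstring>
#include <new>

// Sketch of a leak-as-DoS: allocate 1 MiB blocks forever, never free
// them, and write to every page so they are actually committed.
// Physical memory fills up and the system starts swap-thrashing.
int main()
{
	const std::size_t blockSize = 1024 * 1024;
	for (;;)
	{
		char* block = new (std::nothrow) char[blockSize];
		if (block == nullptr)
			break; // address space or commit limit exhausted

		std::memset(block, 0xFF, blockSize); // touch every page
	}
	return 0;
}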

Stephen M. Webb
Professional Free Software Developer

Depends what you mean by damage. Would erasing your hard drive be considered physical damage?

Even with complete and total access to physical memory (and ignoring the fact that memory these days is virtualized by the OS), you cannot really damage your memory by simply writing to it. That's what it's supposed to do. However, writing to places you shouldn't (usually by subverting the operating system) may very well confuse software or even hardware and make them do really bad things, such as shutting off the CPU fan, overclocking the GPU to unsafe limits, formatting all your hard drives, etc. But if you are under a modern operating system and just forgot to release some memory before exiting, you have nothing to worry about from a hardware point of view; the OS will reclaim all resources used by your process as it terminates. Memory cannot be "lost" in this way; the problem with memory leaks is that they grow too big on long-running tasks and start interfering with other processes' ability to allocate memory.

Basically, the operating system has got your back, but that doesn't make it right to just leave all your trash for it to clean up. One day you might want to turn that program into a little tool and move all that code into a loop, and, boom, instant memory leak that actually grows over time and becomes problematic (as sketched below). There might be some valid use cases (freeing huge data structures with malloc'ed memory everywhere in them; the OS can do it faster), but these can usually be prevented by good design. The operating system isn't doing this for your convenience, it's doing it because it needs to in order to guarantee the long-term stability of the system.
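
For example, a hypothetical sketch of that scenario, reusing purposefullyCreateMemoryLeak() from the opening post (toolIsRunning() and doOneUnitOfWork() are made-up stand-ins for the tool's real logic):

// Hypothetical: the opening post's leaky function, now called from a
// tool's main loop. Each iteration leaks 200 Objects plus the pointer
// array, so memory use grows for as long as the tool runs.
bool toolIsRunning();   // hypothetical: whatever keeps the tool alive
void doOneUnitOfWork(); // hypothetical: the tool's actual job

void runTool()
{
	while (toolIsRunning())
	{
		purposefullyCreateMemoryLeak(); // leaks a little more every pass
		doOneUnitOfWork();
	}
}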

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

While the original poster mentioned Windows 7 specifically, a few others have hinted toward better answers by saying things like "on modern operating systems..."

There are a lot of computers out there. Only the tiniest minority of computers are able to run systems like Windows. Your computer is made up of hundreds or even thousands of other, smaller computers. The RAM chips are self-contained computers. The sound card (these days just a single chip on the motherboard) is usually a collection of self-contained computers. The graphics card likely contains a large number of sub-computers. There are various types, such as DSPs (Digital Signal Processors) and FPGAs (Field Programmable Gate Arrays), and they serve many different purposes, such as microcontrollers that manage the motors on a drive.


I've noticed a behavior with memory leaks: if a program creates memory leaks, Windows 7's Task Manager will keep showing the program's name and process in the list even after the program has been closed. The same applies to MSBuild for Visual Studio: if the affected program leaves behind tons of unresolved commands, we kill the process in Task Manager.

My guess is that memory leaks are always contained, and reclaimed back into the resource pool once the affected process has been killed by Task Manager, and that there is no way to create a memory leak that persists and permanently damages the RAM in the computer... am I right about this?

Both yes and no.

Windows and other modern operating systems do a great job at virtualizing everything. You get access to a virtual memory space, not physical memory spaces. You get access to a memory buffer used for rendering, not the physical graphics card. You get access to buffers representing space on disk, not the actual physical drive.

That doesn't mean the actual physical object cannot be accessed, because obviously the operating system and device drivers must access it. However, it does mean that most unprivileged programs will have a very difficult time damaging the hardware. Most "big" computers, meaning things like desktop computers, mainframes, and so on, have had virtualized hardware access for decades. For the big systems, Windows was a latecomer in 1995.

On those big systems, access to the various subsystems is controlled by what are frequently called 'security rings'. The innermost security rings get raw access, the outermost security rings get virtualized access, and things in between get varying degrees of access.

At the user program level, the outermost levels, everything is virtualized by the inner levels, so all the parts are tracked by necessity. In these cases, when a process dies it is very easy to reclaim all of its memory: the virtual systems are cleaned up and everything self-destructs cleanly. On the other hand, programs on the innermost ring, often called "ring zero", have direct access to everything. When a ring zero process dies, there isn't anything above it to clean up, short of rebooting the system.

It is difficult for a user application to gain access to the inner security rings by accident, but there is a never-ending stream of exploits that attackers can use to elevate their privileges. Once they have that access, the damage they can do is limited only by the physical and software failsafe systems built into the lower levels.

They say that hackers can cause physical damage to hardware components through software alone. If they could create memory leaks that couldn't be isolated within a process that has quit, it would be a tough day for researchers.

This is true, but it usually isn't due to memory leaks.

An attacker may be able to run a program at the lowest security levels. By doing so they can bypass protections normally present on the hardware. This might include sending a software signal to a video card or processor to intentionally overclock itself, do high-power computations, and also shut off the fans. While these operations don't directly harm the hardware, they establish conditions where the hardware is much more likely to fail if additional safeguards are not present. Today's video cards and other components typically shut off the computer when they overheat, but in past years it was more likely for grey-blue smoke to be the first indication of overheating.

But all that is for the "big" computers. The microcontrollers and small components are often much less protected. It is much more difficult, but once an attacker has gained control of a bigger system, the attacker can mount a more specific attack against a smaller system, such as the microcontroller of a drive or a USB device, or even a nuclear centrifuge. While the big computer's operating system is likely to have virtualized hardware and multiple layers of failsafes, the smaller devices generally have much less protection against damage. An intentionally misprogrammed USB controller could send out very high voltage and short out devices. An intentionally misprogrammed hard drive controller could crash the head or even physically grind away sections of the platter. And, rather famously, an intentionally misconfigured centrifuge controller could modify motor speeds and unbalance the load, causing the equipment to wear down quickly and require very expensive and difficult repairs. These are almost universally intentional, malicious acts.

Way back in the old days, when there were very few computer vendors, every computer had exactly the same parts, and failsafes were less common, it was much easier to physically damage the hardware, both by opportunity and by less fault-tolerant design. Consider that back in the early 1980s there were very few possible configurations of machines; if your office had 8088 machines, there was only one vendor and only one hardware configuration. An attack against a specific chip could take down thousands of businesses globally. These days you can buy ten seemingly identical computers yet have each one include slightly different components that are incompatible. That makes it much more difficult -- but obviously not impossible -- for attackers to implement hardware-damaging attacks.

These days it is statistically impossible to accidentally harm the physical computer through something as simple as a memory leak. There may be some ultra-obscure bit of random chance where a specific memory pattern happens to do something harmful on a unique combination of systems, but that same harm is about as likely to come from a stray bit of space radiation. The odds are so close to zero that they don't meaningfully exist.

My question has been answered in a meaningful way. Thank you all.

While you mentioned 'processes' several times, your code showed a function, not a process.

So just to be clear, memory leaks do persist after the function ends.

void purposefullyCreateMemoryLeak()
{
	Object *ptr = new Object;
	return; //ptr goes out of scope here, but the Object it points to is never freed.
}

void funcB()
{
    purposefullyCreateMemoryLeak();
    
    //Memory is still leaked!
    return;
}

void funcA()
{
    funcB();
    
    //Memory is still leaked!
    return;
}

int main()
{
    funcA();
    
    //Memory is still leaked!    

    //Sometime after main() exits (and after whatever code originally called main() finishes),
    //the memory will most likely be cleaned up by the operating system.
    return 0;
}
It's considered good practice to clean up after yourself when programming, though if your code is trying to shut down because something went wrong (i.e. the program is crashing in a controlled manner), leaked resources aren't the end of the world.
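
For what it's worth, here is a hedged sketch of how modern C++ avoids the cleanup problem entirely, using the same Object struct from the opening post with std::vector and std::unique_ptr:

#include <memory>
#include <vector>

// The same 200 Objects as the opening post, but owned by a vector of
// unique_ptrs. Everything is freed automatically when the vector goes
// out of scope, even on early returns or exceptions.
void createNoMemoryLeak()
{
	std::vector<std::unique_ptr<Object>> objects;
	objects.reserve(200);
	for (int i = 0; i < 200; ++i)
	{
		auto obj = std::make_unique<Object>(); // requires C++14
		obj->hello = 20 - i;
		obj->world = false;
		objects.push_back(std::move(obj));
	}
	// No delete needed: the vector's destructor runs and each
	// unique_ptr deletes its Object.
}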

This topic is closed to new replies.
