
Memory leaks are always contained inside processes, aren't they?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

9 replies to this topic

#1 tom_mai78101   Members   -  Reputation: 568

0 Likes

Posted 28 January 2014 - 10:29 AM

I acknowledge the fact that memory leaks are bad, regardless of whether you create them on purpose or not.

 

I do notice there's a behavior with memory leaks: if a program creates memory leaks, Windows 7's Task Manager will always show the program's name and process in the list even if the program has already been closed. The same applies to MSBuild for Visual Studio: if the affected program leaves behind tons of unresolved commands, we kill the process in Task Manager.

 

My guess is that memory leaks are always contained within the process and recollected back into the resource pool when the affected process has been killed by Task Manager, and that there is no way to create a memory leak that persists or permanently damages the RAM in the computer... am I right about this?

struct Object
{
	int hello;
	bool world;
};

void purposefullyCreateMemoryLeak(){
	Object** ptr = new Object*[200];
	for (int i = 0; i < 200; i++){
		ptr[i] = new Object;
		ptr[i]->hello = 20 - i;
		ptr[i]->world = false;
	}
	//Gracefully return, leaking both the pointer array and all 200 Objects.
	return;
}

They say that hackers can create physical damage to hardware components through software alone. If they were able to create memory leaks that can't be isolated within a process that has quit, it would be a tough day for researchers.




#2 d4n1   Members   -  Reputation: 423

0 Likes

Posted 28 January 2014 - 10:33 AM

A memory leak is more than just memory being used and not freed when it should be; it means you, as the developer, no longer have direct access to that area of memory, i.e. a loss of control.

 

Sometimes when you close a program, depending on how that program's closing event was written, it will handle memory cleanup in some manual way; perhaps there are finalizing/handshaking mechanics going on in the background, or it's possible that the OS is doing some magic to try to release the memory.


Edited by d4n1, 28 January 2014 - 10:34 AM.


#3 Álvaro   Crossbones+   -  Reputation: 12020

3 Likes

Posted 28 January 2014 - 10:34 AM

Yes, when the process ends the OS regains control of whatever memory the process asked for.



#4 Vortez   Crossbones+   -  Reputation: 2688

1 Like

Posted 28 January 2014 - 10:48 AM

Most memory leaks are contained within the running process, but some are not, especially GDI leaks. If you run out of handles, well, your OS might start to act/look strange.

Neither of these damages the memory itself, but in the second case you will have to reboot your computer.

 

I would be very surprised if hackers could actually manage to break hardware in a computer nowadays. Maybe through some overclocking hack? (I doubt it.)

What 'people say' and what is real are often quite different. Since I can't say with 100% certainty that this can't happen, I'll let others step in, but it's doubtful at best.

 

I remember one virus, CIH, that could damage computers by overwriting the BIOS, but that was a long time ago.



#5 Bregma   Crossbones+   -  Reputation: 4772

4 Likes

Posted 28 January 2014 - 10:54 AM

A modern OS keeps track of virtual memory allocated to a process through a page table -- a table that maps chunks of process-owned virtual memory addresses onto system-owned physical memory addresses.  The chunks of virtual memory, called pages, can be moved around in physical memory or even swapped out to disk.  When the OS ends a process, it destroys the page table, and all of the virtual memory disappears into the æther whence it came.  All the memory leaks disappear with it.
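The page-table idea above can be sketched in code. This is purely a conceptual illustration, not a real OS structure: real page tables are multi-level and walked by hardware, and every name and size here is invented for the sketch.

```cpp
#include <cstddef>
#include <cstdint>

// Toy page table: maps virtual page numbers to physical frame numbers.
// Everything here (names, sizes, the flat array) is made up for illustration.
constexpr std::uint64_t kPageSize = 4096;
constexpr std::size_t   kNumPages = 256;

struct PageTableEntry {
    std::uint64_t frame   = 0;     // physical frame number
    bool          present = false; // is the page currently backed by RAM?
};

PageTableEntry page_table[kNumPages];

// Translate a virtual address to a physical one; -1 stands in for a page fault.
std::int64_t translate(std::uint64_t vaddr) {
    const std::uint64_t page   = vaddr / kPageSize;
    const std::uint64_t offset = vaddr % kPageSize;
    if (page >= kNumPages || !page_table[page].present)
        return -1; // a real OS would fault here and perhaps swap the page in
    return static_cast<std::int64_t>(page_table[page].frame * kPageSize + offset);
}

// What "the OS destroys the page table" amounts to: every mapping goes away
// at once, leaked or not, and the physical frames can be handed to someone else.
void destroy_page_table() {
    for (auto& entry : page_table)
        entry.present = false;
}
```

Dropping the table when the process dies is exactly why leaked allocations vanish with it: once the mappings are gone, the process's view of that memory no longer exists.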

 

It's possible, should a hacker gain elevated privileges, to make certain calls into the OS to gain access to physical memory addresses that could cause physical damage to your system.  That's generally an important reason for binding all physical memory accesses within the OS itself, although there can be exploitable bugs in the OS that can allow the clever hacker to bypass all built-in protections.  Memory leaks are not generally one of those exploits:  they're usually found in stack smashes or crafted inputs.  About the only exploit you can do with a memory leak is a local DOS (denial of service) attack, bringing the system to its knees as the swapper thrashes.


Stephen M. Webb
Professional Free Software Developer

#6 DeafManNoEars   Members   -  Reputation: 462

0 Likes

Posted 28 January 2014 - 10:55 AM

Depends what you mean by damage.  Would erasing your hard drive be considered physical damage?



#7 Bacterius   Crossbones+   -  Reputation: 8189

5 Likes

Posted 28 January 2014 - 09:42 PM

Even with complete and total access to physical memory (and ignoring the fact that memory these days is virtualized by the OS), you cannot really damage your memory by simply writing to it; that's what it's designed for. However, writing to places you shouldn't (usually by subverting the operating system) may very well confuse software or even hardware and make them do really bad things, such as shutting off the CPU fan, overclocking the GPU past safe limits, formatting all your hard drives, etc. But if you are on a modern operating system and just forgot to release some memory before exiting, you have nothing to worry about from a hardware point of view: the OS will reclaim all resources used by your process as it terminates. Memory cannot be "lost" in this way; the problem with memory leaks is when they grow too big on long-running tasks and start interfering with other processes' ability to allocate memory themselves.

 

Basically, the operating system has got your back, but it doesn't make it right to just leave all your trash for it to clean up. One day you might want to turn that program into a little tool and move all that code into a loop, and, boom, instant memory leak that actually grows over time and becomes problematic. There might be some valid use cases (freeing huge data structures with malloc'ed memory everywhere in them, the OS can do it faster) but these can usually be prevented by good design. The operating system isn't doing this for your convenience, it's doing it because it needs to in order to guarantee long-term stability of the system.
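The "move that code into a loop" scenario above is easy to sketch. This is a hypothetical example; the leaked_bytes counter exists only so the growth is visible, since the process keeps no pointer to the lost allocations.

```cpp
#include <cstddef>

// Illustrative only: the one-shot leak from the original post, now in a loop.
// leaked_bytes is bookkeeping for the sketch; the program retains no pointer
// to any of the leaked blocks.
std::size_t leaked_bytes = 0;

void leakOnce() {
    int* block = new int[1024]; // allocated, then the pointer is dropped
    block[0] = 0;               // touch it so the memory is really committed
    leaked_bytes += 1024 * sizeof(int);
}

// In a game this could be the main loop: the leak now grows every frame
// until allocation fails or the machine starts thrashing.
void runFrames(int frames) {
    for (int i = 0; i < frames; ++i)
        leakOnce();
}
```

Run once at startup, a leak like this is invisible; run per frame, it grows without bound for the lifetime of the process.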


Edited by Bacterius, 28 January 2014 - 09:49 PM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#8 frob   Moderators   -  Reputation: 19052

8 Likes

Posted 29 January 2014 - 01:29 AM

While the original poster mentioned Windows 7 specifically, a few others have hinted toward better answers by saying things like "on modern operating systems..."

There are a lot of computers out there. Only the tiniest minority of computers are able to run systems like Windows. Your computer is made up of hundreds or even thousands of smaller computers. The RAM chips are self-contained computers. The sound card (these days just a single chip on the motherboard) is usually a collection of self-contained computers. The graphics card likely contains a large number of sub-computers. There are various types, such as DSPs (Digital Signal Processors) and FPGAs (Field Programmable Gate Arrays), and they serve many different purposes, such as microcontrollers that manage the motors on a drive.


I do notice there's a behavior with memory leaks: if a program creates memory leaks, Windows 7's Task Manager will always show the program's name and process in the list even if the program has already been closed. The same applies to MSBuild for Visual Studio: if the affected program leaves behind tons of unresolved commands, we kill the process in Task Manager.

My guess is that memory leaks are always contained within the process and recollected back into the resource pool when the affected process has been killed by Task Manager, and that there is no way to create a memory leak that persists or permanently damages the RAM in the computer... am I right about this?

Both yes and no.

Windows and other modern operating systems do a great job at virtualizing everything. You get access to a virtual memory space, not physical memory spaces. You get access to a memory buffer used for rendering, not the physical graphics card. You get access to buffers representing space on disk, not the actual physical drive.

That doesn't mean the actual physical object cannot be accessed, because obviously the operating system and device drivers must access it. However, it does mean that most unprivileged programs will have a very difficult time damaging the hardware. Most "big" computers, meaning things like desktop computers, mainframes, and so on, have had virtualized hardware access for decades. Among the big systems, Windows was a latecomer in 1995.

On those big systems, access to the various subsystems is controlled by what are frequently called 'security rings'. The innermost security rings get raw access, the outermost security rings get virtualized access, and things in between get varying degrees of access.

The user program level is the outermost level: since everything there is virtualized by the inner levels, all the parts are tracked by necessity. So when such a process dies, it is very easy to reclaim all of its memory; the virtual systems are cleaned up and everything self-destructs cleanly. On the other hand, programs on the innermost ring, often called "ring zero", have direct access to everything. When a ring zero process dies, there isn't anything above it to clean up, short of rebooting the system.

It is difficult for a user application to gain access to the inner security rings by accident, but there is a never-ending stream of exploits that attackers can use to elevate their privileges. Once they have access, the damage they can do is limited only by the physical and software failsafe systems built into the lower levels.

They say that hackers can create physical damage to hardware components through software alone. If they were able to create memory leaks that can't be isolated within a process that has quit, it would be a tough day for researchers.

This is true, but it usually isn't due to memory leaks.

An attacker may be able to run a program at the lowest security levels. By doing so they can bypass protections normally present on the hardware. This might include sending a software signal to a video card or processor to intentionally overclock itself, do high power computations, and also shut off fans. While the operations don't directly harm the hardware they establish conditions where the hardware is much more likely to fail if additional safeguards are not present. Today's video cards and other components typically shut off the computer when they overheat, but in past years it was more likely for grey-blue smoke to be the first indication of overheating.

But all that is for the "big" computers. The microcontrollers and small components are often much less protected. It is much more difficult, but once an attacker has gained control of a bigger system, the attacker can mount a more specific attack against a smaller system, such as the microcontroller of a drive, a USB device, or even a nuclear centrifuge. While the big computer's operating system is likely to have virtualized hardware and multiple layers of failsafes, the smaller devices generally have much less protection against damage. An intentionally misprogrammed USB controller could send out very high voltage and short out devices. An intentionally misprogrammed hard drive controller could crash the head or even physically grind away sections of the platter. And rather famously, an intentionally misconfigured centrifuge controller could modify motor speeds and unbalance the load, causing the equipment to wear down quickly and require very expensive and difficult repairs. These are almost universally intentional malicious acts.

Way back in the old days when there were very few computer vendors and every computer had exactly the same parts and the failsafes were less common, it was much easier to physically damage the hardware both by opportunity and by less fault-tolerant design. Consider back in the early 1980s there were very few possible configurations of machines; if your office had 8088 machines there was only one vendor and only one hardware configuration. An attack against a specific chip could take down thousands of businesses globally. These days you can buy ten seemingly identical computers yet have each one include slightly different components that are incompatible. That makes it much more difficult -- but obviously not impossible -- for attackers to implement hardware-damaging attacks.

These days it is statistically impossible to accidentally harm the physical computer through something as simple as a memory leak. There may be some ultra-obscure chance that a specific memory pattern happens to do something harmful on a unique combination of systems, but the same harm is about as likely to come from a stray bit of space radiation. The odds are so close to zero that they don't meaningfully exist.
Check out my personal indie blog at bryanwagstaff.com.

#9 tom_mai78101   Members   -  Reputation: 568

4 Likes

Posted 29 January 2014 - 03:50 AM

My question has been answered in a meaningful way. Thank you all.



#10 Servant of the Lord   Crossbones+   -  Reputation: 17359

0 Likes

Posted 30 January 2014 - 05:49 PM

While you mentioned 'processes' several times, your code showed a function, not a process.

So just to be clear, memory leaks do persist after the function ends.
void purposefullyCreateMemoryLeak()
{
	Object *ptr = new Object; //Leaked: the only pointer is lost when the function returns.
	return;
}

void funcB()
{
    purposefullyCreateMemoryLeak();
    
    //Memory is still leaked!
    return;
}

void funcA()
{
    funcB();
    
    //Memory is still leaked!
    return;
}

int main()
{
    funcA();
    
    //Memory is still leaked!    

    //Sometime after main() exits (and after the function(s) that originally called main() exits),
    //then the memory will most likely be cleaned up by the operating system.
    return 0;
}
It's considered good practice to clean up after yourself, when programming - though if your code is trying to shut down because something went wrong (i.e. the program is crashing in a controlled manner), leaked resources aren't the end of the world.
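For comparison, here is a leak-free sketch of the original purposefullyCreateMemoryLeak using RAII, assuming C++14 is available (the function name and return value are changed for the example):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Object {
    int hello;
    bool world;
};

// The vector owns the unique_ptrs and the unique_ptrs own the Objects, so
// everything is destroyed automatically when the vector goes out of scope.
std::size_t createWithoutLeak() {
    std::vector<std::unique_ptr<Object>> objects;
    objects.reserve(200);
    for (int i = 0; i < 200; ++i) {
        auto obj = std::make_unique<Object>();
        obj->hello = 20 - i;
        obj->world = false;
        objects.push_back(std::move(obj));
    }
    return objects.size(); // all 200 Objects are freed on return
}
```

Nothing survives the return, so there is nothing left for the OS to mop up at exit.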

It's perfectly fine to abbreviate my username to 'Servant' rather than copy+pasting it all the time.

[Fly with me on Twitter] [Google+] [My broken website]

All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.                                                                                                                                                       [Need free cloud storage? I personally like DropBox]

Of Stranger Flames - [indie turn-based rpg set in a para-historical French colony] | Indie RPG development journal







