

What happens if I don't deallocate dynamic memory on application exit?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

9 replies to this topic

#1 noodlyappendage   Members   -  Reputation: 130


Posted 11 June 2014 - 12:27 AM

I was just wondering what, if any, negative effect the following scenario would have. I create an object instance using the new operator. This object then exists for the life of the entire process. Then, when exiting the application, I do not call a corresponding delete to deallocate. I understand the problem with memory leaks in terms of memory usage of the process at run time, but if the application is exiting, wouldn't the operating system free up any memory it had used anyway?




#2 bioglaze   Members   -  Reputation: 591


Posted 11 June 2014 - 12:48 AM

The operating system frees it, but it does not necessarily overwrite it, so if you have passwords etc. in memory, you should overwrite them yourself.
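A minimal sketch of what "overwrite them yourself" can look like (the function name is just illustrative). Note that a plain loop or std::fill over memory that is never read again can be removed by the optimizer as a dead store; writing through a volatile pointer is one common way to keep the writes. A later reply in this thread points out that even this does not cover copies the OS may have made elsewhere.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Overwrite a secret in place before the process exits.
// Writing through a volatile pointer prevents the compiler from
// eliminating the stores as "dead" writes.
void wipe(std::string& secret) {
    if (secret.empty()) return;
    volatile char* p = &secret[0];
    for (std::size_t i = 0; i < secret.size(); ++i)
        p[i] = '\0';
}
```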



#3 Bacterius   Crossbones+   -  Reputation: 9068


Posted 11 June 2014 - 12:51 AM

Yes, any sane operating system will free up all the memory your process is currently using (as well as things like sockets, file handles, and so on, in general) upon shutdown anyway. There are many opinions on whether it makes sense to clean up on shutdown under that assumption; a few thoughts are below:

 

- if you ever want to take your program and refactor it into an independent module or library which may not live in its own process, then you will be glad you wrote some cleanup code, because this time the OS won't be there to do it for you (probably the most important reason)

 

- on the other hand, if you have very complex data structures with lots of independent blocks of allocated memory floating around, it may take a while for your program to deallocate them all the usual way (think of those games that take 10-15 seconds to close), whereas the OS just sees a big block of virtual memory and can deallocate it all in one shot. This is mitigated by proper memory allocation practices, i.e. using the right allocator and not allocating memory like a chain smoker goes through cigarettes, but it may nevertheless be a concern

 

- and as bioglaze said above, if your memory contains sensitive information, you would do well to get rid of that yourself rather than relying on the OS to do it (any modern OS will zero the contents of a newly allocated memory page before handing it to a process, so that one process cannot see what another previously wrote somewhere in memory, but that might not happen for a while)

 

Also, if your destructor has observable side effects, then not calling delete means the destructor never runs, so those side effects never happen (I'm not comfortable enough with C++ to say whether this is formally undefined behaviour; maybe someone else can answer that, but you should probably check).
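The destructor point above can be illustrated with a small sketch (names are hypothetical): an object that is never deleted simply never runs its destructor body, so any side effect in it is lost, while a scoped object of the same type runs it automatically.

```cpp
#include <cassert>

// A type whose destructor has an observable side effect.
struct Logger {
    bool* flushed;
    explicit Logger(bool* f) : flushed(f) {}
    ~Logger() { *flushed = true; }  // the "observable side effect"
};

// Intentionally leaked: the destructor never runs, the flag stays false.
bool leak_case() {
    bool flushed = false;
    new Logger(&flushed);  // no matching delete, on purpose
    return flushed;        // still false
}

// Scoped: the destructor runs at the end of the block.
bool scoped_case() {
    bool flushed = false;
    { Logger l(&flushed); }  // ~Logger runs here
    return flushed;          // true
}
```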




#4 rip-off   Moderators   -  Reputation: 8526


Posted 11 June 2014 - 01:32 AM

Memory may not be a problem, but there are other resources. For example, let's say that an object you didn't de-allocate maintains some kind of file handle (directly or indirectly). Many objects that do file I/O have an in-memory buffer. You might have written data to that buffer, but when the OS dismantles the process it has no idea that this memory is significant, and will discard it with the rest. The file handle will be closed, but the unwritten data is lost.
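A hedged sketch of the buffered-I/O hazard described above (the file name and function are just illustrative): stdio buffers writes in memory, and if the process dies before the buffer is flushed, those bytes never reach the file. Flushing and closing explicitly avoids the loss.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Write data to a file, making sure the in-memory stdio buffer
// actually reaches the OS before we let go of the handle.
void write_safely(const char* path, const char* data) {
    FILE* f = std::fopen(path, "w");
    if (!f) return;
    std::fputs(data, f);  // may sit in an in-memory buffer for a while
    std::fflush(f);       // push the buffer to the OS
    std::fclose(f);       // release the handle (also flushes any remainder)
}
```

If the process were killed between fputs and fflush, the OS would close the handle but discard the buffered bytes, exactly as described above.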

Edited by rip-off, 11 June 2014 - 01:33 AM.


#5 SeanMiddleditch   Members   -  Reputation: 6450


Posted 11 June 2014 - 01:44 AM

Quote from bioglaze:
"Operating system frees it, but does not necessarily overwrite it, so if you have passwords etc. in memory, you should overwrite them."


This incorrectly implies something which is very dangerous to believe: that clearing the memory yourself avoids this problem. _It does not_.

The OS is free to move the virtual pages around how it wishes, make copies of virtual pages, stream them out to disk, etc. If you have sensitive data, simply clearing it after use makes absolutely no guarantees that you've cleared all copies of it that might reside on RAM or even disk without your application's knowledge.

If you need security of anything in RAM, you must use mlock or VirtualLock or the like.
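A minimal POSIX sketch of the approach named above (the function name and error handling are illustrative, not a hardened implementation): mlock pins the pages into RAM so they cannot be paged to swap; on Windows the analogous calls are VirtualLock/VirtualUnlock. The data should still be wiped before unlocking.

```cpp
#include <cstddef>
#include <cstring>
#include <sys/mman.h>

// Lock a buffer into physical RAM, use it for sensitive data,
// wipe it, then unlock. mlock may fail without sufficient
// privileges or if RLIMIT_MEMLOCK is too low.
bool with_locked_secret(char* buf, std::size_t len) {
    if (mlock(buf, len) != 0)
        return false;          // couldn't pin the pages
    // ... place and use the sensitive data here ...
    std::memset(buf, 0, len);  // wipe before the pages become swappable again
    munlock(buf, len);
    return true;
}
```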

Note that GPUs are still evolving in this area, and there aren't solid protections there yet for any data accessed via GPGPU APIs or graphics APIs. Don't forget that a malicious application can often just trivially attach to your process as if it were a debugger and gain full access to memory, syscall interception, and so on, assuming the OS doesn't simply allow all processes of a particular user to read each other's memory (which wasn't uncommon all that long ago). And then, if security is a serious concern, there is also the matter of protecting against peripheral devices (firmware hacks via driver bugs, USB dongles, etc.) that get mostly unfettered access to RAM (or even VRAM).

#6 frob   Moderators   -  Reputation: 22251


Posted 11 June 2014 - 08:19 AM

Rounding the topic out, it is generally only the big, mature operating systems that reclaim your memory for you.

 

If you are working on a non-mainstream operating system, it might not clean up everything after your process dies.

 

Windows, Linux, and the other major systems clean up, but on minor systems and embedded devices some residue is likely to remain after a crash. The system may reclaim all the allocated memory, yet still miss some handles or resources in its corners.




#7 noodlyappendage   Members   -  Reputation: 130


Posted 11 June 2014 - 12:17 PM


Quote from SeanMiddleditch:
"If you need security of anything in RAM, you must use mlock or VirtualLock or the like."

 

I had never heard of these before, and decided to google them to learn more. I stumbled upon this link, http://blogs.msdn.com/b/oldnewthing/archive/2007/11/06/5924058.aspx, which says that when using VirtualLock, it is still possible for memory to be paged out by the operating system. It is a few years old though, so is this information still accurate?

 

Edit: Actually, I just noticed at the bottom that the author includes a follow-up saying his interpretation was incorrect, and VirtualLock is sufficient to secure memory. I still found it to be an interesting read anyway.


Edited by noodlyappendage, 11 June 2014 - 12:20 PM.


#8 swiftcoder   Senior Moderators   -  Reputation: 10242


Posted 11 June 2014 - 12:23 PM

Even on mature operating systems, there are various system resources that are not automatically disposed of on program exit.

 

For example, semaphores and shared memory regions on Linux.
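A hedged sketch of the Linux example above (the object name is made up): a POSIX shared memory object created with shm_open outlives the process that created it; closing the file descriptor, or exiting, does not remove it. It persists until shm_unlink is called (or the system reboots), which is exactly the kind of explicit cleanup the OS will not do for you. On older glibc this needs linking with -lrt.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Create a named shared memory object, then clean it up explicitly.
bool demo_shm_cleanup(const char* name) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return false;
    close(fd);         // closing the fd does NOT remove the object
    shm_unlink(name);  // this is the explicit cleanup the OS won't do
    return true;
}
```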




#9 Servant of the Lord   Crossbones+   -  Reputation: 20365


Posted 12 June 2014 - 11:01 AM

Quote from Bacterius:
"Also, if your destructor has observable side effects, then not calling delete may result in undefined behaviour (I'm not comfortable enough with C++ to tell whether it is for sure, maybe someone else can answer that, but you should probably check)."

Sure. If a destructor does something like, say, recording the last known window positions and sizes, and you fail to call the destructor (via delete or otherwise), then those settings won't get saved. The same goes if it was supposed to write something to the registry or send a "logout" packet to a server, or whatever.

 

Generally, it's your responsibility to clean up after yourself. Now, knowing that things will be cleaned up anyway (but at the expense of the destructors not running), you, as a developer, might make a conscious decision in that circumstance to let the OS clean up. But by default, you should clean up unless you have a reason not to.

 

If your problem is that you just don't want to bother, good news: you probably shouldn't be calling 'delete' on anything anyway, because you probably shouldn't be calling 'new' either. In C++, most of your objects belong in local variables and member variables, and the rest in containers or smart pointers - again, unless you have a reason not to.
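A small sketch of that style (the Texture type and names are illustrative): ownership lives in a smart pointer or container, so there is no delete to forget and no leak for the OS to mop up.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Texture { int id; };

// The caller receives ownership via unique_ptr; destruction is automatic.
std::unique_ptr<Texture> make_texture(int id) {
    return std::make_unique<Texture>(Texture{id});
}

// A container of owning pointers: every Texture is freed when the
// vector goes out of scope, with no explicit delete anywhere.
std::vector<std::unique_ptr<Texture>> make_textures(int count) {
    std::vector<std::unique_ptr<Texture>> v;
    for (int i = 0; i < count; ++i)
        v.push_back(make_texture(i));
    return v;
}
```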

 

In my own code, I'd (very roughly) estimate less than 10% of my variables are dynamically allocated, and of those 10%, only 1% uses new and delete directly.




#10 dmatter   Crossbones+   -  Reputation: 3261


Posted 13 June 2014 - 07:50 PM

A mature OS will deallocate memory when the application exits, which answers your question. But that should remain a curious tidbit, because whether or not the OS frees memory for you is not really relevant or useful...

 

Cleaning up after yourself is about more than just deallocating memory, it's also about running destructors which themselves may have side-effects (freeing other kinds of resources, flushing buffers, saving state, etc).

 

It's also not something you should really need to worry about in modern C++ code, since all heap-allocated memory should be auto-deallocated by a stack-bound RAII container. The delete keyword is effectively deprecated under normal use, relegated to implementing RAII containers or to extenuating circumstances such as interfacing with legacy code/frameworks.
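A minimal sketch of what "delete relegated to implementing RAII containers" means (this is a toy stand-in for std::unique_ptr, not something you'd use in real code): the one and only delete in the program lives inside the owning class.

```cpp
#include <cassert>

// A toy owning handle: the sole delete in the codebase is in ~Owned().
template <typename T>
class Owned {
    T* p;
public:
    explicit Owned(T* raw) : p(raw) {}
    ~Owned() { delete p; }  // the one place delete appears
    Owned(const Owned&) = delete;             // no copying: single owner
    Owned& operator=(const Owned&) = delete;
    T& operator*() const { return *p; }
    T* get() const { return p; }
};
```

User code constructs an Owned on the stack and never touches delete; the destructor handles it when the handle goes out of scope.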





