[C] Freeing memory on program exit?

This topic is 2846 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Suppose I have the following program:
#include <stdlib.h>

int main(void) {
    char *buffer = malloc(100);
    // Do something with buffer...
    return 0;
}

And suppose that the program never frees the buffer. 1) When the program exits, will the memory be freed by the OS? 2) Suppose that the above program is called prog1. In the following program:
#include <stdlib.h>

int main(void) {
    system("prog1");
    return 0;
}

will the memory allocated by prog1 be freed after system() returns, but before "return 0" is executed? Thanks in advance.

I am sorry that I really don't have time to elaborate, but the answer to the first question is definitely yes. I am not sure whether system("prog1") waits for prog1 to terminate before returning; if it does, then the answer to the second question is also yes. As far as I know, this holds at least on Windows and Linux.

I don't think it's guaranteed, but a modern operating system almost certainly will. As far as I know, free usually doesn't give previously allocated memory back to the system right away anyway; it just makes it available for the next malloc.

I'm not sure about embedded systems, but needless to say, it's bad practice not to free memory anyway.

Quote:
Original post by Wan
needless to say it's a bad practice not to free memory anyway.


It's just that some people say that if you're exiting a program, there's no need to free memory (or close open files, etc.) because the OS will do it for you.

For example, here:
if (some_error) {
    free(buffer);
    exit(1);
}

the call to free is not needed (according to those people).

I was just wondering what others think about this.

Quote:
Original post by Gage64
Quote:
Original post by Wan
needless to say it's a bad practice not to free memory anyway.
It's just that some people say that if you're exiting a program, there's no need to free memory (or close open files, etc) because the OS will do it for you?
You don't have to, but I would recommend it anyway.

For one thing, it is quite common to refactor the main loop of a prototype into a function of a larger program, at which point the lack of a free/delete becomes a major issue.

It's not strictly necessary, but it's (very) bad practice to skip it. You're better off making sure the memory is deallocated as soon as it's no longer needed.
Imagine you decide to change the program a little and add a function at the end (not unlikely during development): you will carry an unneeded block of memory with you through that function because you didn't deallocate it.
What I'm trying to say is that programs keep changing over the course of development, and in complicated programs you're likely to forget that you didn't manually deallocate the memory.

Typing that little bit of code that deallocates the memory will save you trouble later :)

[Edited by - flammable on March 2, 2010 10:06:21 AM]

I don't exactly disagree with the other posters, but I want to clarify some rationale.

1. In general you should get in the habit of freeing all the memory you allocate.

Otherwise you will forget to do it in cases where it is NOT OK to forget - for instance, when your program allocates memory in code that may run more than once per session. An example would be starting the game from a menu the user can return to and start over from: if you don't free that memory, then each time they start a new game the process gets bigger and bigger.

2. All the memory allocated through the normal programming APIs (the C/C++ runtime library) is freed on program exit, and there is no need for the programmer to do this manually.

It does take more execution time to do the dealloc yourself than to let the OS do it, and does not provide any additional benefit / assurance.

3. Not all "resources" are like memory, and not all of them are automatically freed on program exit. Various API calls allocate resources on your behalf, inside the API DLLs (device contexts, DirectX resources, files, etc.). In almost all cases these are deallocated by the OS just like memory is... but this is not 100% true. You have to check, for each API, whether the resource will be reclaimed by the OS, and that isn't always easy.

4. Programmers make mistakes and create "memory leaks" all the time, and these pile up DURING program execution (sure, they are reclaimed at close, but what about when your program crashes while running because it ran out of memory?). There are many useful tools to track down memory leaks and help you find them - but intentionally leaving some of your memory allocated to speed up shutdown will just make it harder to find the real leaks, because the tools' leak reports will include both the accidental leaks and the intentional ones without distinction.

So now you can see that 1, 3 and 4 are at odds with 2... but the general answer is still: deallocate it yourself. The reasons, as mentioned, are that you don't know this code won't be moved somewhere it could run more than once during execution, you don't always know that each resource is taken back by the OS, and doing it yourself makes it very easy to find anything you have missed using leak-detection tools.

Good Luck.

Quote:
Original post by Xai
It does take more execution time to do the dealloc yourself than to let the OS do it,


I'm more interested in correctness and consistency than in saving a few seconds on shutdown. By this logic, the programmer would need to decide which memory to free and which to leave for the OS at shutdown. That is error-prone and inconsistent.

Quote:
Original post by Xai
and does not provide any additional benefit / assurance.


The benefit is that you actually get a meaningful memory-leak report, which helps you debug and write better, cleaner code.

IMHO, adhering to good programming practices teaches you to be a better programmer, and for me that's a big plus, even if it takes more code, takes longer to write, and means more reading at the beginning of your programming journey.

Quote:
Original post by DoctorGlow

I'm more interested in correctness and consistency than in saving a few seconds on shutdown.


This depends on the type of application. Memory is just part of the equation; there are also all the other resources, such as file and socket handles, video and other media resources, and much more.

And there's something worse: if you've hit the pagefile, then freeing all that memory means reloading all the data from disk just to free it.

If the user hits shutdown, what you *need* to do is save anything that needs saving - perhaps a document the user is editing.

For games, exit might as well be immediate once [exit] is pressed - just check whether a save happens to be in progress.

As far as consistency goes, this isn't against it. Your application can still be fully consistent, and do all the nice cleanup/shutdown and RAII and so on for testing and debugging, but do the fast exit for release. This approach works for all applications, as long as you keep track of dirty state that needs to be fully committed.

Another problem is runaway/undefined behavior in code. While that might make the application unresponsive, if the event loop remains alive you can still cleanly kill the application, whereas a full cleanup would just hang.

Also, experience shows that, for various reasons, calling System.exit() in Java is good practice to avoid spurious Java apps hanging around after they've shut down - an incredibly common problem. You might as well call it once you believe you've shut down everything required.

In native apps, especially those dealing with complex resources or heavy use of hardware, obscure synchronization bugs will creep in.

Quote:
IMHO, adhering and using good programming practices makes and teaches you to be a better programmer and that's for me is a big plus,


Good practices also account for real-world issues, not just an academically pure approach. And applications hanging during and after shutdown are a real-world problem.

Finally - almost all of CS and its best practices are taught on infallible machines. RAM does cause faults, the OS does mess up, GPUs do perform incorrect calculations due to overheating, third-party APIs do break... At the very least, obey the user and simply exit when asked to.
Quote:
regardless if it takes more code, longer to write and more reading at the beginning of your programming journey.


More code = a bigger statistical chance of introducing a bug (programmer error, hardware error, faulty unit test, disk fault, timing issue due to cache flushes, maintenance programmers, etc.).

Perfection is not when there is nothing to add, but when there is nothing left to take away.

Quote:
Not all "resources" are like memory, and not all of them are automatically freed on program exit.

This is very true, especially with any kind of external storage (files, databases, ...).

But applications do crash, power does go out, the OS does lock up. So rather than assuming you can write perfect cleanup code, make sure you have a recovery mechanism that will handle all the leftover junk data.

There are two sets of concerns. One covers the data you absolutely need to clean up and preserve. The other is optional.

Since the number of problems in the second set is effectively infinite, plan for the first one with absolute scrutiny instead. When something needs to be persisted, make sure it is. When something needs to be cleaned up, make sure it is.


Counter-example: a savegame. The user presses save, gets "OK", presses exit, the game exits cleanly, then the OS crashes. Bummer - the data still wasn't committed to disk, so the savegame will be corrupted.

Now assume it works the other way: the savegame is atomic and doesn't return "OK" until it confirms the data was written to disk completely and correctly. At that point, you might as well call ExitProcess(). Nothing can go wrong.

This doesn't involve internal state management or leak prevention - it's about the important data that matters to the user. If you get that right, you also solve crashes and other unexpected termination scenarios.

The same goes for servers. They *will* exit uncleanly. So instead, "crash" them yourself: migrate data from one server to another, kill the original along with the OS, and do a hard reset. Obviously, you can try a clean shutdown first. This is one of the reasons PHP is so popular in practice - stateless, with per-request startup/shutdown.

This last piece of advice may be controversial, but once you realize how many things can and do go wrong, it becomes easy to accept that prevention is always good, but having a solid recovery plan is better.


One final point on quick shutdown: low power. There are many battery-powered devices out there, and most of them operate on a really small persistent state. Tearing down hundreds of megabytes of data when only 40 KB really need to be written is no benefit to the user.

[Edited by - Antheus on February 28, 2010 6:46:01 PM]

Quote:
Original post by Antheus
And there's something worse. If you hit pagefile, then deleting all the memory will need to reload all the data from disk just to free it.
This. There's nothing more frustrating than hitting the "close" button, only to have the program sit there for 30 seconds, grinding the hard disk as everything is paged back in only to be freed again.

There is a school of thought which says that all program terminations should be "catastrophic" ones. That is, your "FileExit_Click" handler should be nothing more than ExitProcess (or, in the extreme case, TerminateProcess).

This gives you two benefits. You get lightning-fast shutdown (usually at the expense of slower startup times, since you have to "recover" or "roll back" files). And in most programs, the failure path (i.e. what happens if there's a power failure, what happens if we run out of memory, etc.) is usually the least well-tested part of the code; if you subscribe to the "every termination is a catastrophic termination" philosophy, that code path gets tested pretty quickly.

Obviously, this methodology is not for the faint of heart. In fact, unless you're writing some kind of database or 24/7 mission-critical server application, it's probably way overkill. But it's something to think about anyway.

Oh, and as for Gage64's other question: I think it may depend on your platform, but on Windows specifically, system() will launch the executable in a separate process and wait for it to terminate. When the launched process exits, all its memory is reclaimed by the operating system.

Quote:
Original post by Antheus
As far as consistency goes - this isn't against it. Your application can still be fully consistent, and do all the nice cleanup/shutdown and RAII and all that for testing and debugging, but do the fast exit for release. This approach works for all applications, as long as you keep track of dirty state which needs to be fully committed.

I'd argue against that. If you do a fast exit, do it in both debug and release. If your program is creating corrupt save files on exit, you want to be able to catch that in a debug build.

Quote:

What about my other question? Will memory be freed when system() returns, but before the program exits?

The behavior of system() is almost entirely implementation-specific.

On Windows, I believe, if you supply a string that launches another process ("notepad.exe", for example), system() will block until that process exits. Since that process is just a regular OS process, the same rules for allocation and deallocation apply.

