Would this produce a memory leak?

Started by
24 comments, last by Amr0 16 years ago
Quote:Original post by owl
I don't see the problem if the program is going to exit anyway. To be a leak, the program should still be running.


I agree. Although it is good practice, for several reasons, to clean up before we exit rather than hoping the OS will do the job for us (some reasons: OS bugs, or forgetting to delete a critical buffer may mean we're also forgetting to save its contents to disk, etc.).

How about this?:

[source=cpp]
void MyExit( list<int> &l )
{
    l.clear();
    exit(0);
}

int main()
{
    list<int> list1(100, 123); // Create a list with 100 ints with value 123.
    MyExit( list1 );
}


Not the finest solution, but it's something. Why do you want/need exit() anyway?
I haven't ever used it in my development years. Try to think of a better design that avoids it.
And if you're planning to use it for emergencies (i.e. crashes), memory leaks are acceptable: a crash often means some pointers are corrupted, invalid, or overflowed, in which case you wouldn't be able to free the memory anyway.
Also take note that this code:

[source=cpp]
void MyMain()
{
    list<int> list1(100, 123); // Create a list with 100 ints with value 123.
}

int main()
{
    MyMain();
    exit(0);
}


will work, since list1 lives on MyMain's stack, not main()'s: it is destroyed when MyMain returns, before exit() is ever called.

Hope this helps
Dark Sylinc
It's not really "magical"... why wouldn't the OS know about memory that's been allocated for a process? The OS is what allocated it in the first place. Besides, Windows would slow to a crawl if every bit of leaked memory became permanent.

This doesn't at all mean you should rely on this behavior. Different platforms can do whatever they want, and Windows is even technically free to change this behavior since it's not documented in ExitProcess (although in practice they never would, since I'm sure countless buggy apps do rely on it).
Quote:Original post by Matias Goldberg
How about this?:

*** Source Snippet Removed ***

Not the finest solution, but it's something. Why do you want to use/need exit() anyway?

Well, obviously I can't do that because it was just a simple example. In practice, I may call Terminate() from any place, so I don't know what is currently allocated on the stack.

My application is a game client and a dedicated server for it. In practice, the client always terminates by returning from main() when you exit normally. I only use Terminate(int nExitCode), which deallocates all heap memory (ok, 'most' rather than 'all' if it's called at an unstable point, like right after a 'new int;') and calls exit(nExitCode), when something abnormal happens and I basically need to force a 'crash'.

However, since the dedicated server doesn't have a GUI, the only way I found to get it to shut down nicely (along with its threads) is to create a Ctrl+C signal handler and call Terminate there. So this was what worried me the most, since it's the only way to terminate the server and it may create leaks.

However, given what I've learned from this thread so far, it makes me feel better that it's ok if I don't always clean up every single byte allocated when the program is shut down in an unexpected way, or crashes.

As long as it does clean up everything on normal termination, I'm happy. I'll have to change the way 'normal termination' occurs on the server, though.

What mostly bothered me was that I didn't know all the details behind these processes, which I think is pretty important (I don't want to write crappy software with memory leaks all over the place). But it's good that I have a better understanding of it now, thanks to all your help. :)
Having a second thought (though I'm not sure how accurate this is), stack memory is the least of our problems, as if I recall correctly the OS assigns a certain amount of stack memory to each app. Once the app is finished, the whole stack is freed. Unlike dynamic allocation, where memory can point almost anywhere in RAM, the stack allocates things in a more "linear" way.
For example, say the stack of my app goes from 0x000000AA to 0x000000FF (easy numbers; in reality that memory region is forbidden for regular apps). The first variable will be placed at 0x000000AA, the second at 0x000000AB, and so on. We can use assembly to "skip" some slots, but we can never go past 0x000000FF; if we do, a stack overflow occurs. When the app exits, 0x000000AA to 0x000000FF becomes free. We always know that any variable on the stack will be inside that region.

It is a bit more complicated than this, and I may have made a mistake here since it's been a long time I don't do such low level programming.

Dark Sylinc
Quote:Original post by Matias Goldberg
Having a second thought (though I'm not sure how accurate this is), stack memory is the least of our problems, as if I recall correctly the OS assigns a certain amount of stack memory to each app. Once the app is finished, the whole stack is freed. Unlike dynamic allocation, where memory can point almost anywhere in RAM, the stack allocates things in a more "linear" way.
For example, say the stack of my app goes from 0x000000AA to 0x000000FF (easy numbers; in reality that memory region is forbidden for regular apps). The first variable will be placed at 0x000000AA, the second at 0x000000AB, and so on. We can use assembly to "skip" some slots, but we can never go past 0x000000FF; if we do, a stack overflow occurs. When the app exits, 0x000000AA to 0x000000FF becomes free. We always know that any variable on the stack will be inside that region.

It is a bit more complicated than this, and I may have made a mistake here since it's been a long time I don't do such low level programming.

The concept of Virtual Memory makes that a non-issue for the OS (if the said OS uses such a memory management model). It makes no difference if the memory was allocated on the stack or heap, everything can be freed in one shot when the process ends.
Quote:Original post by shurcool
the dedicated server doesn't have a GUI; the only way I found to get it to shut down is to create a Ctrl+C signal handler and call Terminate there. So this was what worried me the most, since it's the only way to terminate the server and it may create leaks.

However, given what I've learned from this thread so far, it makes me feel better that it's ok if I don't always clean up every single byte allocated when the program is shut down in an unexpected way, or crashes.


OK. If I were you, in the main loop I would check a boolean like "request_terminate", and break out of the main loop when it is true. The Ctrl+C signal handler would set that variable to true. And if some intensive calculation may start after Ctrl+C is pressed (since we only check the boolean once the loop restarts), then also check the boolean before that intensive calculation and skip it. That is what I meant by "design change".

Although, I have to admit calling exit() is the cheapest and quickest solution. Furthermore, you're creating a server application. Those apps are supposed to run without interruption for hours, days, even years. If you're debugging, you don't need to worry about leaks, as you can restart the OS. In the final release, Ctrl+C will rarely need to be used; and if you do need it, you're surely having server problems that will more likely require rebooting the system.
Quote:Original post by shurcool
The concept of Virtual Memory makes that a non-issue for the OS (if the said OS uses such a memory management model). It makes no difference if the memory was allocated on the stack or heap, everything can be freed in one shot when the process ends.


Thanks for pointing that out. I didn't know virtual memory is used for the stack too.
Reading about that also reminds me that if you're making a server application, you have to be extremely careful with memory leaks and even memory fragmentation.

Dark Sylinc
Instead of using MyExit (see previous posts), why not put your cleanup code in a function and use "atexit(...)" to call that function when exit(0) is called? That way, you can still use exit(0).

Anyway, I don't like any of this; in fact, I wasn't even aware of the exit function (I just return from main and everything turns out fine). From what I understand, exit is a C function and shouldn't be used when C++ classes are in use, since it does not unwind the stack and therefore skips the destructors of local objects.

Anyway, a memory leak to me is when a program doesn't clean itself up - whether the operating system does afterwards or not. If the OS does it automatically, that's nice - but one should never depend on that behavior.

Instead of calling a specific cleanup function, and always having to ensure you pass it everything that must be cleaned up, you could alternatively just throw an exception. A specific exception type, which could be thrown from anywhere (i.e. inside a message-pump handling function), would get caught in the main() block, forcing the unwinding of the stack and in turn releasing all resources managed via RAII, such as STL containers and shared-pointer resources.

It doesn't seem a completely ideal solution or a philosophically appropriate use of exception handling, but it should guarantee the ordered cleanup of all objects before termination. I'm also not sure how it could be made to work if the code is multithreaded, since other execution contexts would keep holding RAII-managed resources even as the throwing thread drops out of main()'s scope.
Quote:Original post by chairthrower
... just throw an exception. A specific exception type, which could be thrown from anywhere (i.e. inside a message-pump handling function), would get caught in the main() block, forcing the unwinding of the stack and in turn releasing all resources managed via RAII, such as STL containers and shared-pointer resources.
Exactly. In main you have a try/catch around all the code in the function, then just throw an exception of a type that only main will catch.
Now when you throw that fatal exception, BOOM, everything is destroyed, assuming you're correctly using RAII.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

This topic is closed to new replies.
