
How to check if dynamic memory is there before deleting?

Liuqahs15    819

I'm sorry if this question has been asked before, but I'm struggling with figuring out how to google it right. I don't know how to pose the question without an example.

 

Say I have this code:

 

 

#include <iostream>
using namespace std;

int main() {
    int* ptr = NULL;
    int input = 0;
    cin >> input;

    if (input > 5)
        ptr = new int;

    /* Apparently this doesn't work.. */
    if (ptr)
        delete ptr;

    return 0;
}

 

That's basically the way I've been checking if a pointer has been allocated dynamic memory or not, but I've just learned that my solution is wrong. So what might be one smart way to go about it? I just made the above code off the top of my head, so please don't criticize it for stuff that's off-topic from the question. Thanks a lot.

Edited by Shaquil

Liuqahs15    819
Your solution is not wrong, but you may have misunderstood why it's not useful to check for null before deleting. It is not that your code won't work: the pointer is initially null, and if you allocate something, the pointer becomes non-null and will be deleted. Likewise, if you don't allocate anything, the pointer remains null and nothing is deleted.

 

But that's not the issue. The "problem" is that calling delete on a null pointer is perfectly fine, so the if-statement is useless. Not incorrect, just useless. If ptr is null, then delete does nothing.

 

Oh that's probably the worst answer I could've gotten. Not because it's a bad answer, but because now I'm even more confused. Thank you for explaining that, though.

but because now I'm even more confused.


'delete' is essentially a function that just looks different from other functions. Inside the delete function, it does the if (ptr == NULL) check for you.

 

Deleting NULL does nothing, so no harm is done. Deleting the same (non-NULL) memory twice is bad though. Deleting memory that was never allocated is also bad.

Digivance    1724

I'll agree with Madhed on this one, you should have a basic understanding of how dynamic memory works under the hood but there's very few occasions that would call for custom memory managed systems.  The Standard Template Library or STL (std:: classes) provide a wide range of easy to use and trustworthy dynamic memory wrappers.

LordJulian    151

Wanting to delete memory and not being sure whether you actually should delete it is the root of all evil. Try to impose as many rules as you can on your own code, and make it as close to impossible as possible for others using your classes to do something stupid. One idea would be to keep the creation and deletion of your object inside your own code (private constructors/destructors and a friend manager class?). Anyhow, it's a tricky subject that cannot be taught in a few posts. Learn the basics of allocation/deletion very well, experiment a lot, and grow wiser. At some point you will feel confident enough to answer your own question (and from time to time you will fail, but keep it up). You will know that you have the right implementation the moment you KNOW that you MUST delete some memory and don't wonder anymore.

JiiPee    135

If memory is not there, then you can't even allocate it; if you have successfully allocated memory with new, then there is memory reserved for the size of the data type. I think you should perhaps think about it the other way around: how to ensure that there is memory before allocating, how to catch exceptions if the program can't allocate, and what actions avoid useless deletions.

alvaro    21246
I guess the problem you were experiencing was double-deleting memory. One solution is to set the pointer to null immediately after deleting it, if you plan on reusing it later.
That's not much of a solution. We have had this controversy before in these forums, and to me it feels wrong to set the pointer to null, and it might give you a false sense of security. Double-deleting memory tends to happen when you have two different pointers to the same data and the programmer is confused about ownership. Setting pointers to null is a bit like rearranging the deck chairs on the Titanic.
However, you should also look into the standard library containers and smart pointers. (std::vector, std::list, std::shared_ptr, std::weak_ptr, etc.) They are the basics of modern, idiomatic c++ and in many cases eliminate the need to manually manage your memory.
This, with strong emphasis on std::vector.

Khatharr    8812
If memory is not there, then you can't even allocate it; if you have successfully allocated memory with new, then there is memory reserved for the size of the data type. I think you should perhaps think about it the other way around: how to ensure that there is memory before allocating, how to catch exceptions if the program can't allocate, and what actions avoid useless deletions.

 

When he said 'is there' I think he meant 'is still allocated'. What he was really asking is if there's a way to check the status of an allocation, which for raw pointers is a 'no'. I don't think he's worried about the heap itself disappearing somewhere.

Madhed    4095

[quote name='Álvaro' timestamp='1356815230' post='5015530']
Double-deleting memory tends to happen when you have two different pointers to the same data and the programmer is confused about ownership
[/quote]

 

Yeah, of course. In that case nulling one of the pointers doesn't help at all. Maybe the point to make is that manually managing memory should only be done if you are 100% confident in what the feck you are doing.

 

[quote name='Álvaro' timestamp='1356815230' post='5015530']
with strong emphasis on std::vector.
[/quote]

 

Agreed.

Khatharr    8812

Double-deleting memory tends to happen when you have two different pointers to the same data and the programmer is confused about ownership

 

 

Yeah, of course. In that case nulling one of the pointers doesn't help at all. Maybe the point to make is that manually managing memory should only be done if you are 100% confident in what the feck you are doing.

 

As well as everyone else who may mess with the allocation in question.


Checking for NULL before deleting and setting a pointer to NULL after deletion is troublesome in my opinion for many reasons.

 

For one thing, it is useless code and needless work: C++ guarantees that deleting NULL is a no-op, and code that doesn't add value should not be written. More code means not only extra opportunities for mistakes and wasted CPU cycles, but also "noise". Your brain can only process so much per second, and more text means more information to pick up. Skipping over a single if is no big deal, but something that is entirely useless is still "expensive" even when its cost is otherwise negligible.

 

But more importantly, it is also the wrong approach. Releasing resources should be well-defined, controlled, and guaranteed (read as: exception safe). It should not be "random" or some kind of guesswork with workarounds that prevent a crash when things go wrong. If you double-delete an object, that is a programming error which needs to be corrected. This should never happen. If it does happen, your program should crash, and it should crash early. If you prevent your program from crashing, you are condoning wrong behaviour.

 

Also, the if(ptr) delete ptr; idiom can be a source of many hours wasted on "works fine in the debugger, crashes otherwise" errors. When you have an uninitialized pointer and run the code in a debugger, it will "magically" work, because the debugger zero-initializes the variable.

It would of course also "magically" work without the if condition (since the standard guarantees that). However, you then have a chance (depending on the quality of your allocator and/or CRT and/or debugger) of getting a "deleted null pointer in line..." warning. With an if(ptr) in the way, delete is never called, and you've successfully eliminated the most helpful hint about what has gone wrong.

When you run the program outside the debugger, it will attempt to delete some random memory address and crash (or, if you are very unlucky, not crash, or crash only sometimes, because you've incidentally gotten a valid address). So you spend hours trying to figure out why it works "fine" in the debugger but crashes otherwise.

Liuqahs15    819
I guess the problem you were experiencing was double-deleting memory. One solution is to set the pointer to null immediately after deleting it, if you plan on reusing it later.
That's not much of a solution. We have had this controversy before in these forums, and to me it feels wrong to set the pointer to null, and it might give you a false sense of security. Double-deleting memory tends to happen when you have two different pointers to the same data and the programmer is confused about ownership. Setting pointers to null is a bit like rearranging the deck chairs on the Titanic.

 

I don't understand what you're trying to say. Anytime anyone talks about C++ they use these weird, dramatic metaphors. How is it a false sense of security? Why is it wrong? Please explain it to me like I'm stupid.

alvaro    21246


I guess the problem you were experiencing was double-deleting memory. One solution is to set the pointer to null immediately after deleting it, if you plan on reusing it later.

That's not much of a solution. We have had this controversy before in these forums, and to me it feels wrong to set the pointer to null, and it might give you a false sense of security. Double-deleting memory tends to happen when you have two different pointers to the same data and the programmer is confused about ownership. Setting pointers to null is a bit like rearranging the deck chairs on the Titanic.


 
I don't understand what you're trying to say. Anytime anyone talks about C++ they use these weird, dramatic metaphors. How is it a false sense of security? Why is it wrong? Please explain it to me like I'm stupid.
 



Sorry about the dramatic metaphor. The point is that you can do memory management correctly or incorrectly. If you do it correctly, you don't need to set the pointer to null after deleting it. If you do it incorrectly, setting the pointer to null after deleting it will not help things much: The only situation in which it can help is where you try to use the pointer to access the data after it has been released, because it will make the program crash consistently. On the other hand, if the problem is that you end up deleting that pointer twice, you will be masking a problem instead (I think samoth pointed this out).

If you are doing memory management incorrectly, you should do it correctly:
* Use standard containers in the vast majority of situations.
* When you have to use `new' (say, in a factory function), wrap the pointer right away in something like a unique_ptr.
* If you ever write a function that uses `new' and returns the raw pointer, make a huge comment indicating that this is the case and that the caller is now responsible for deleting it. But you should really use the unique_ptr instead.

I use C++ every day and I haven't had a memory leak in 10 years. It's not all that hard. I consider `new' as suspect as most people consider `goto', and for similar reasons: I know that in the hands of an inexperienced programmer it can lead to messes that I don't want to deal with.

* When you have to use `new' (say, in a factory function), wrap the pointer right away in something like a unique_ptr.
Better yet, if you let smart pointers handle 'delete' for you, you might as well let them handle 'new' as well.
std::make_shared<> exists for this purpose, and though there isn't a std::make_unique<> yet, that is an acknowledged oversight that will be patched into the standard later.

There are still reasons and occasions to use new and delete directly (just as there are rare occasions to use malloc() and free())... but those occasions will be few and far between.

Khatharr    8812
I don't understand what you're trying to say. Anytime anyone talks about C++ they use these weird, dramatic metaphors. How is it a false sense of security? Why is it wrong? Please explain it to me like I'm stupid.

 

There are much better ways of handling memory management in C++ (compared to C), and those ways are important because it can be a lot harder to keep track of who has which pointer to what.

 

In other words it's not 'wrong' in the sense of not compiling and running. It's 'wrong' in the sense that someone may try to slap you if they catch you doing it.

Liuqahs15    819
Sorry about the dramatic metaphor. The point is that you can do memory management correctly or incorrectly. If you do it correctly, you don't need to set the pointer to null after deleting it. If you do it incorrectly, setting the pointer to null after deleting it will not help things much: The only situation in which it can help is where you try to use the pointer to access the data after it has been released, because it will make the program crash consistently. On the other hand, if the problem is that you end up deleting that pointer twice, you will be masking a problem instead (I think samoth pointed this out).

If you are doing memory management incorrectly, you should do it correctly:
* Use standard containers in the vast majority of situations.
* When you have to use `new' (say, in a factory function), wrap the pointer right away in something like a unique_ptr.
* If you ever write a function that uses `new' and returns the raw pointer, make a huge comment indicating that this is the case and that the caller is now responsible for deleting it. But you should really use the unique_ptr instead.

I use C++ every day and I haven't had a memory leak in 10 years. It's not all that hard. I consider `new' as suspect as most people consider `goto', and for similar reasons: I know that in the hands of an inexperienced programmer it can lead to messes that I don't want to deal with.

 

Thank you so much. I wish I'd known this before I started my most recent project, but at least it's a learning experience. In cases where I use new and delete and realize I'm making a messy web where I'll need to know exactly what gets deleted by whom, I occasionally think, "Well, I could probably just use something in the STL for this instead and save the headache." But then I always wonder if that's the right choice. I do, after all, have to learn what new and delete do. And I worry that using one of the STL structures like vector might be overkill.

 

I'd love to hear some opinions on these thoughts, since I've never gotten much of a chance to talk about this with anyone else before. From other posts I've read, Servant of the Lord seems really knowledgeable about this stuff as well.

0r0d    1930

Look into boost::shared_ptr and boost::weak_ptr.

 

Handling pointers to objects is inherently difficult because in many cases you'll want to pass off or store copies of those pointers.  Then you need to deal with knowing when it's safe to delete.  It's a problem of ownership.  You either have strict ownership or you have a mechanism for knowing when it's safe to delete.  If you're not that experienced then you can easily fall into traps you've set for yourself.  Using shared_ptr and weak_ptr together can help you solve the problem of ownership and get rid of the need for deletes. 

JTippetts    12950
Look into boost::shared_ptr and boost::weak_ptr.

 

Handling pointers to objects is inherently difficult because in many cases you'll want to pass off or store copies of those pointers.  Then you need to deal with knowing when it's safe to delete.  It's a problem of ownership.  You either have strict ownership or you have a mechanism for knowing when it's safe to delete.  If you're not that experienced then you can easily fall into traps you've set for yourself.  Using shared_ptr and weak_ptr together can help you solve the problem of ownership and get rid of the need for deletes. 

 

shared_ptr and weak_ptr are part of the C++ standard now, so no need to introduce a boost dependency.

 

@Shaquil: new allocates memory and calls the constructor, delete calls the destructor and deletes memory. Now you know what new/delete do, so you don't have to worry about it anymore and can just use shared/weak/unique ptrs. Also, don't worry about using a vector being "overkill." Overkilling what? It's a dynamically managed array. It's not like using a guided missile to hammer a nail or anything, it's just a slightly safer hammer. Use the standard library (not STL; the STL became the standard library quite a few years ago, and now it's just the standard library), become familiar with it, learn which containers are best for which situations. That's why it's there. It's [i]standard[/i]. Not-Invented-Here syndrome is counterproductive and leads to buggy code in the hands of inexperienced programmers.

 

I'm in the midst of a redesign of my core framework for Goblinson Crusoe, and there is not a single new or delete in sight anywhere in the code. Not one. The standard is getting to the point where they simply aren't necessary for 90% of the code we write, and that other 10% probably isn't going to apply for most people. If you carefully analyze the use cases for each piece of memory you allocate, you can make a choice of the proper pointer type to use. This analysis can also help to expose flaws in your design, so that's a bonus.

