C++ VirtualFunctions [SOLVED]

Started by
22 comments, last by Renthalkx97 8 years, 3 months ago

I was always under the assumption that it was best to set them to null for cases like this:


int *pX = new int(5);

delete pX;

if(pX)
{
    std::cout << *pX;
}

That will crash. However setting pX to null would alleviate that.

I'm probably going to get chastised for this, but I actually PERSONALLY prefer to use raw pointers. I like having to manually take care of them. Not that smart pointers are bad. By all means they are a great thing, and should be used whenever necessary.


I definitely agree with "fail fast", but I disagree with your advice in this particular case. The problem is that passing an invalid pointer to delete doesn't always cause a crash right away (unless you're always running with Application Verifier, which isn't realistic). It can simply corrupt the heap, causing problems later, or in fact cause no problems at all if something has already been reallocated at that memory location (not unlikely when space is requested shortly after the delete for an object of the same size, since there will be a nice hole in the heap there).


The other thing is that it is not uncommon to have code that, say, resets the state of an object, which would include deleting any members that were new'd. In that case, there is no logic error, and you would need to store additional state in order to know whether a pointer is valid or not. So why not just set it to null?
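For example, a rough sketch of what I mean (Widget, Load and the member names here are just made up for illustration):

#include <cstddef>

class Widget
{
public:
    ~Widget() { delete[] m_buffer; }

    // Reset discards any loaded data; nulling the pointer keeps the
    // object in a valid, reusable state rather than signalling a bug.
    void Reset()
    {
        delete[] m_buffer;
        m_buffer = nullptr;
        m_size = 0;
    }

    void Load(std::size_t size)
    {
        Reset();               // safe even if nothing was loaded yet
        m_buffer = new char[size];
        m_size = size;
    }

private:
    char*       m_buffer = nullptr;
    std::size_t m_size   = 0;
};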

OK, that's fair. I didn't think of the particular case where you're resetting state in a class, in which case you certainly should null out the pointer. I'm not really sure what case I had in mind where you would not null out something after deletion (which is one of the things that makes unique_ptr so handsome), but I think I was mostly thinking in terms of destructors or temporary objects, since Renthalkx mentioned it merely as protection against double deletes. I can't speak for all cases and compilers, but newer MSVC seems quite reliable at catching deletion of invalid memory - I got a few heap corruptions, but mostly just a crash on that very line, though it might well be different in other cases.


It also makes it harder to diagnose issues, because you have no idea if the pointer is "valid" when inspecting it in the debugger.

That's not really an issue, in MSVC at least, since once you inspect the pointer it will show you just garbage data, which is enough to identify an invalid pointer - but null values are still easier to catch, I'll give you that.


I'm probably going to get chastised for this, but I actually PERSONALLY prefer to use raw pointers. I like having to manually take care of them. Not that smart pointers are bad. By all means they are a great thing, and should be used whenever necessary.

Honestly, I'm more curious - why? Since I started using unique_ptr I don't want to use anything else, and I'm still in the process of converting my old code because it makes things so much safer and more reliable, and produces way cleaner code, unless I have very specific requirements for memory management. But yeah... is there anything in particular that you like about raw pointer management, or is it more of a habit? ;)

EDIT:


I was always under the assumption that it was best to set them to null for cases like this:

int *pX = new int(5);

delete pX;

if(pX)
{
std::cout << *pX;
}

That will crash. However setting pX to null would alleviate that.

See Phil_t's reply above, but IMHO this is one of the cases where it doesn't really make sense to null out the pointer. You are creating a variable, then deleting it, and then trying to do something with it. This is clearly a logic error, and should not be "protected" against by setting the variable to null after the delete - either your delete or the code after it is in the wrong place. OK, in a more complex case you might conditionally delete pX, but even then I feel there are better ways than just setting pX to null after delete, like changing the execution paths so the later code isn't reached at all once the pointer has been deleted.

Though, seeing Phil_t's remark about delete, you pretty much only have the options of failing silently or risking undefined behaviour. It's pick your poison either way, unless you want to trust your compiler/debugger/platform to handle invalid memory accesses in a somewhat reliable fashion.


but I think I was mostly thinking in terms of destructors or temporary objects

Agreed, no point in setting to null in those cases.


That's not really an issue, in MSVC at least, since once you inspect the pointer it will show you just garbage data, which is enough to identify an invalid pointer - but null values are still easier to catch, I'll give you that.

Well, re garbage data... when looking at what the invalid pointer points to in the debugger, it will show you either:

1) ????? if the memory has been unmapped from the process's address space (causes access violation on delete, easy to catch)

2) whatever was left there before (has it already been deleted? you don't really know... the data looks "correct")

3) whatever has been placed there by new stuff that has been allocated (you'll need to inspect the data to see if it's "correct" or not)

Of course, with the debug heap, case 2 will show 0xfefefefefe (or whatever the marker is for freed memory)


Honestly, I'm more curious - why? Since I started using unique_ptr I don't want to use anything else, and I'm still in the process of converting my old code because it makes things so much safer and more reliable, and produces way cleaner code, unless I have very specific requirements for memory management. But yeah... is there anything in particular that you like about raw pointer management, or is it more of a habit? ;)

Agreed, I can't think of any good reason to eschew using smart pointers. Eliminates things like double-delete bugs, and makes intent clearer (assuming you're using the correct smart pointer for the job), with pretty much no drawbacks.
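To illustrate what I mean (just a sketch with a made-up Enemy type):

#include <memory>
#include <vector>

struct Enemy { int health = 100; };

int main()
{
    // Sole ownership is explicit; the Enemy is destroyed exactly once,
    // when the owning unique_ptr goes out of scope.
    std::unique_ptr<Enemy> boss = std::make_unique<Enemy>();

    std::vector<std::unique_ptr<Enemy>> wave;
    wave.push_back(std::make_unique<Enemy>());

    // Transferring ownership requires an explicit std::move;
    // accidental copies (and thus double deletes) simply don't compile.
    std::unique_ptr<Enemy> current = std::move(boss);
}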


Honestly, I'm more curious - why? Since I started using unique_ptr I don't want to use anything else, and I'm still in the process of converting my old code because it makes things so much safer and more reliable, and produces way cleaner code, unless I have very specific requirements for memory management. But yeah... is there anything in particular that you like about raw pointer management, or is it more of a habit? ;)

I believe it's more of a habit rather than there being any logical reasoning behind it. I'm honestly a difficult person. I like to do stuff the "harder" way I guess you could say. It makes me feel more accomplished if I can successfully implement something myself rather than relying on other means. Obviously I have to draw the line somewhere. I'm not going to make my own string/vector/map/etc.

The problem with naively setting a pointer to nullptr after deleting its contents is that, the vast majority of the time, the pointer goes out of scope immediately, and double deletions happen because proper attention was not paid to pointer ownership and there is a copy drifting around somewhere.

That means the naive null won't fix the most common problem, which goes something like this once all the intervening code is removed:


  int* p = new int(7);
  int* q = p;      // a second copy of the pointer is drifting around
  // ...
  delete p;
  p = nullptr;     // the "safety" null only affects p
  // ...
  delete q;        // double delete: q still points at the freed int
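
For contrast, this is roughly what the same situation looks like with unique_ptr (just a sketch): the accidental second owner doesn't compile, and ownership has to be transferred explicitly.

#include <memory>

int main()
{
    std::unique_ptr<int> p = std::make_unique<int>(7);
    // std::unique_ptr<int> q = p;          // does not compile: no silent second owner
    std::unique_ptr<int> q = std::move(p);  // ownership transfer must be explicit
    // p is now null; q releases the int exactly once when it goes out of scope.
}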


The case of an object changing state is still handled by proper design. A valid object should always have an invariant that includes valid pointers (including nullptr values where appropriate). There is no valid state an object can be in where it could possibly have a dangling or garbage pointer, if you're doing it right.

Setting a pointer to nullptr after deletion is either a sign that you're too lazy to be competent, or incompetent to begin with. You choose which you are.

If you like, I can also offer my opinion on the inappropriate use of shared_ptr.

Stephen M. Webb
Professional Free Software Developer

I believe it's more of a habit rather than there being any logical reasoning behind it. I'm honestly a difficult person. I like to do stuff the "harder" way I guess you could say. It makes me feel more accomplished if I can successfully implement something myself rather than relying on other means. Obviously I have to draw the line somewhere. I'm not going to make my own string/vector/map/etc.


Even then, I'd highly recommend writing your own set of smart pointers and using those as often as possible, if you're not already doing so. Smart pointers as a concept are quite simply awesome tools perfectly suited for just about all memory management roles not already satisfied by the stack itself or the various standard containers, and it'd be a shame not to take advantage of the benefits they offer.

I've implemented various smart pointers a few times myself (as well as written my own string, vector, and map classes), and the learning experience alone is valuable. These days I use the standard utilities in almost all cases, but that knowledge helps me understand why I'm doing so, how to use them correctly, the costs (if any) I'm paying for the convenience, and so on. It's rare that I feel the need to roll my own, now that I've done so and understand most of the internals (or at least understand enough to know that the experts can take the standard implementations well beyond what I would easily be able to do).
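If anyone wants to try it, a bare-bones sketch of a unique-ownership pointer might look something like this (ScopedPtr is just a made-up name; in real code you'd reach for std::unique_ptr):

#include <utility>

// Minimal unique-ownership smart pointer, for learning purposes only.
template <typename T>
class ScopedPtr
{
public:
    explicit ScopedPtr(T* ptr = nullptr) : m_ptr(ptr) {}
    ~ScopedPtr() { delete m_ptr; }

    // Move-only: copying would create two owners and a double delete.
    ScopedPtr(const ScopedPtr&) = delete;
    ScopedPtr& operator=(const ScopedPtr&) = delete;

    ScopedPtr(ScopedPtr&& other) noexcept : m_ptr(other.m_ptr) { other.m_ptr = nullptr; }
    ScopedPtr& operator=(ScopedPtr&& other) noexcept
    {
        if (this != &other)
        {
            delete m_ptr;          // release whatever we currently own
            m_ptr = other.m_ptr;   // take ownership from the source
            other.m_ptr = nullptr;
        }
        return *this;
    }

    T& operator*() const  { return *m_ptr; }
    T* operator->() const { return m_ptr; }
    T* get() const        { return m_ptr; }

private:
    T* m_ptr;
};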

"We should have a great fewer disputes in the world if words were taken for what they are, the signs of our ideas only, and not for things themselves." - John Locke



Setting a pointer to nullptr after deletion is either a sign that you're too lazy to be competent, or incompetent to begin with. You choose which you are.

I'm going to have to disagree with you there. There is nothing inherently lazy or incompetent about setting a deleted pointer to null. Sure, the situations where it would be beneficial can be avoided by proper design. In my destructors, I delete everything dynamically allocated that is owned by the object, and then set every pointer to null. You may counter this by saying you might delete a pointer still in use by another object. Not if you destroy objects in the proper order: if B uses a pointer created by A, A doesn't get destroyed until B is destroyed first. I've NEVER had memory leaks or issues in general from my approach.


Of course, with the debug heap, case 2 will show 0xfefefefefe (or whatever the marker is for freed memory)

Huh, I guess I must have relied on the debug heap most of the time then. Case 3 happened to me in other situations where I was guarding a set of complicated state changes by checking whether a pointer passed in was the same as the one set before. I didn't even know this could happen, but apparently if I delete an object and create another object directly afterwards, it can/will place the new object at exactly the same location as before. Guess that's a thing, so yeah... new things just won't stop popping up :)


There is nothing inherently lazy or showing incompetence by setting a deleted pointer to null.

Actually, I would say it's quite the opposite of lazy, at least since you now explicitly mentioned destructors. It's just a lot of extra work for no gain. Think about it. You have a destructor:


~Foo()
{
     delete m_pBar;
     m_pBar = nullptr;
     delete m_pInt;
     m_pInt = nullptr;
}

Now, what do the extra = nullptr's give you? Next to nothing. The only cases in which "delete m_pBar" could be called twice on an instance of this class after this point are:

- You accidentally call "delete m_pBar" twice in the same destructor. Unless your destructor is cleaning up 20 pointers and your names are really ambiguous, this is easily spotted.

- You somehow call the destructor manually on an object (i.e. one you created with placement new), but keep using the memory/object afterwards. That is an issue in and of itself and will lead to many more problems, so setting the pointers to nullptr doesn't do anything for you here.

- The destructor of the same object is called twice. If you are already this defensive about cleaning up pointers, there is no way this is going to happen, except from within another destructor, which comes back to the first point.

So yeah, there is really just no point in doing this inside of destructors. You can keep doing it if you want to, sure, but it's not very efficient, so to speak.
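
For comparison, with unique_ptr members the whole question disappears - a rough sketch (Bar here is just a placeholder type):

#include <memory>

struct Bar { /* ... */ };

class Foo
{
public:
    Foo() : m_pBar(std::make_unique<Bar>()), m_pInt(std::make_unique<int>(0)) {}

    // No hand-written destructor needed: each unique_ptr deletes its object
    // exactly once when Foo is destroyed, and a second delete can't happen
    // by construction.

private:
    std::unique_ptr<Bar> m_pBar;
    std::unique_ptr<int> m_pInt;
};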


I believe it's more of a habit rather than there being any logical reasoning behind it. I'm honestly a difficult person. I like to do stuff the "harder" way I guess you could say. It makes me feel more accomplished if I can successfully implement something myself rather than relying on other means. Obviously I have to draw the line somewhere. I'm not going to make my own string/vector/map/etc.

I know that feeling myself. I tend not to use any external library/premade solution, just for the sake of making it myself, but at least I'm a very lazy person too, so if something is going to save me a tremendous amount of typing without any downsides, I gladly take it :D I have to admit it took me a long time (> 2 years) from learning about unique_ptr to finally using it in my code... feels like a lot of wasted time in hindsight, oh well... Though I'd rather write my own string/vector than not use smart pointers (not map though :P).

Now what does the extra = nullptr's give you? Next to nothing.


Correct. It's just a habit of mine. Obviously in the destructor it is 99% cosmetic and offers no benefit assuming none of the cases you listed happen. Aside from destructors though, I believe the added safety net is advantageous.

Huh, I guess I must have relied on the debug heap most of the time then. Case 3 happened to me in other situations where I was guarding a set of complicated state changes by checking whether a pointer passed in was the same as the one set before. I didn't even know this could happen, but apparently if I delete an object and create another object directly afterwards, it can/will place the new object at exactly the same location as before. Guess that's a thing, so yeah... new things just won't stop popping up :)


This could be seen as related to what is known as the ABA problem, where a piece of data can change from value A to B and back to A, and any code that needs to be aware of changes to that data, but only detects them by comparing the current value to the last value it saw, might miss that a change occurred at all.

The similarity is that one might naively hope to look at the current value of that piece of data and easily determine the correct course of action, but in reality there might have been multiple ways that the value became what it currently is, and the correct course of action depends not just on the current value but also on how that value came to be. With the double-deleted pointer, as you noted, there are indeed ways for the pointer to technically be pointing to legitimately allocated memory even though it shouldn't be construed as pointing to anything valid at all, because it had already been deleted.
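
To illustrate (just a sketch - whether the allocator actually reuses the address is not guaranteed, which is exactly why comparing against a stale pointer is unreliable):

#include <iostream>

struct State { int value = 0; };

int main()
{
    State* current    = new State();
    State* remembered = current;      // cached to detect "has the state changed?"

    delete current;
    current = new State();            // may land at the exact same address

    // This comparison can come out true even though 'remembered' refers to an
    // object that no longer exists; address equality says nothing about identity
    // once the original has been deleted.
    if (current == remembered)
        std::cout << "Allocator reused the address - the change went unnoticed\n";

    delete current;
}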

"We should have a great fewer disputes in the world if words were taken for what they are, the signs of our ideas only, and not for things themselves." - John Locke

This topic is closed to new replies.
