Try/catch absurdity and calling destructors...


Quite good suggestion, BUT: in the cleanup function, check for NULL and, if the pointer isn't NULL, delete it and set it to NULL. :) I know that delete checks whether the pointer is NULL, but for teaching purposes it is good to suggest that. Also, setting it to NULL after deleting is not mandatory, but it is, again, good practice and, perhaps, would keep the user from double-deleting the same pointer and/or accessing it after deletion.


After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice. Let's say that I do call delete on a pointer twice. If I set it to NULL, nothing happens and the error is never found and corrected. However, if I don't, the program crashes and the error gets fixed. It would seem better practice not to give yourself the ability to make mistakes silently in the first place.
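
To make that concrete, here's a minimal sketch (Widget is just a made-up type):

struct Widget { int value = 0; };

int main()
{
    Widget* w = new Widget;
    delete w;
    w = nullptr;   // the "safe delete" habit
    delete w;      // silently does nothing: the double delete goes unnoticed

    Widget* v = new Widget;
    delete v;
    // delete v;   // without the nulling, this second delete is undefined
                   // behaviour, but a debug allocator will usually abort
                   // right here and point you at the bug
    return 0;
}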


Sorry to double post, but, for learning purposes, here's another suggestion: if you want to track troublesome bugs with memory allocation (i.e. using pointers after you delete them - it happens more often than you think), set freed pointers to an invalid but easily recognizable value, something like 0xfefefefe. Then, when the program blows to bits, you look at the pointer in the debugger, and if it matches (or is close to) 0xfefefefe, you know you have this problem. Enjoy!
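
For illustration, a sketch of the trick (Enemy is a made-up type, and any invalid-but-recognizable value will do):

struct Enemy { int health = 100; };

int main()
{
    Enemy* e = new Enemy;
    delete e;

    // Poison instead of nulling: an invalid but recognizable address.
    e = reinterpret_cast<Enemy*>(0xFEFEFEFE);

    // Any later use crashes, and the debugger shows the 0xfefefefe
    // address, which tells you immediately it's a use-after-delete:
    // e->health = 0;
    return 0;
}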


I like this idea.

Quite good suggestion, BUT: in the cleanup function, check for NULL and, if the pointer isn't NULL, delete it and set it to NULL. :) I know that delete checks whether the pointer is NULL, but for teaching purposes it is good to suggest that. Also, setting it to NULL after deleting is not mandatory, but it is, again, good practice and, perhaps, would keep the user from double-deleting the same pointer and/or accessing it after deletion.

No.

If the cleanup() function is only called from the catch block of the constructor and from the destructor, there is no way it could ever get called twice accidentally. Setting the member pointers to NULL will do nothing, because the pointers are destroyed immediately. If the pointers hold values that have been deleted elsewhere, setting the members to NULL will make no difference.
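
To illustrate, a minimal sketch of the pattern (Terrain and Physics are made-up types here):

// Made-up resource types for the sketch:
struct Terrain {};
struct Physics {};

class World
{
public:
    World()
    {
        try
        {
            terrain = new Terrain;
            physics = new Physics;  // if this throws, terrain must be freed
        }
        catch (...)
        {
            cleanup();
            throw;  // re-throw so the caller still sees the failure
        }
    }

    ~World() { cleanup(); }

private:
    void cleanup()
    {
        delete physics;  // delete on a null pointer is already a no-op
        delete terrain;  // no NULL check or re-nulling needed: cleanup()
                         // is only reachable from the catch block and
                         // the destructor, so it cannot run twice
    }

    Terrain* terrain = nullptr;
    Physics* physics = nullptr;
};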

I think it's better to teach how things actually work than to teach some misleading pseudo-solution that will not prevent problems and only gives a false sense of security. There's no advantage to superstition in the programming world.

Stephen M. Webb
Professional Free Software Developer

After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice.
My point is more against manual memory management in general, and about understanding why the pointer should or should not be set to null - I edited my post to clarify.
Many things you are taught are only true in the context in which you are taught them. Outside of that learning context they may no longer apply; in fact, the complete opposite may apply.

"Safe Delete" is one of those things and goes in the same bucket of advice such as:
All your destructors should be marked virtual
Put constants before variables in your if-statement comparisons. (Yoda expressions)
Initialise ALL variables.
Only ever call srand once in your program.
Always use quicksort instead of bubblesort.
Don't ever use macros.
Don't ever use globals.
Don't ever use unsafe functions such as strcpy.
Don't use double-negation (i.e. !!x)
etc...

When you've gained the appropriate level of knowledge and really know what you are doing and why you are doing it, these turn from somewhat good advice into somewhat bad advice. Well, "bad" in the sense that they should not be followed 100% of the time.
"Safe Delete" is probably the worst of these though, in that it should be the first such advice that you stop following religiously. It's there to stop you from being hindered by stupid mistakes caused by a complete lack of knowledge about how pointers work. Once you know all about pointers, you know that it's a waste of time to keep following it.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

Put constants before variables in your if-statement comparisons. (Yoda expressions)


Is this really that widespread? I've only encountered one programmer who did this before, and I'd never heard of it before seeing his code. I find it makes code more difficult to read than is necessary. Certainly I've never bothered with this; confusing = and == is something I do very, very rarely, so I've never seen the need for it.
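
For reference, the idiom looks like this (a minimal sketch):

void Example(int lives)
{
    if (lives == 0) { /* ... */ }  // conventional order
    if (0 == lives) { /* ... */ }  // "Yoda" order

    // The mistake it guards against:
    // if (lives = 0) { /* ... */ }  // compiles: assigns 0, then tests it,
    //                               // so the branch can never be taken
    // if (0 = lives) { /* ... */ }  // refuses to compile
}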
I think Code Complete mentions it (and gives pros and cons), where it's not really arguing for its use, just presenting it as something that's sometimes done.

I've tried it a little and then decided to drop it from my own coding - I also don't often mistype = for ==, but if I were switching between multiple languages and had a compiler that doesn't issue a good warning for that mistake, it might be worth doing.
I also find this "yoda comparing" quite confusing (nice term, btw). Also, it won't protect you in the case where you are comparing two variables instead of a variable against a constant.

EDIT: I once worked on a codebase where, apparently for the sake of consistency, less-than and greater-than comparisons were flipped as well...

Put constants before variables in your if-statement comparisons. (Yoda expressions)


Is this really that widespread? I've only encountered one programmer who did this before, and I'd never heard of it before seeing his code. I find it makes code more difficult to read than is necessary. Certainly I've never bothered with this; confusing = and == is something I do very, very rarely, so I've never seen the need for it.

People do do it. I too find it ugly and not particularly helpful.

The other thing like this that people get religious about is only returning from a function at one place, at the end of the function. I don't find that particularly helpful either, because it often makes the if-statement/conditional nesting deeper, which I find harder to read than just bailing out of the function early in the relevant cases.
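
For example (a made-up save function):

#include <string>

// Single exit point: every precondition adds a level of nesting.
bool SavePlayerNested(const std::string& name, int score)
{
    bool ok = false;
    if (!name.empty())
    {
        if (score >= 0)
        {
            // ... the actual save work ...
            ok = true;
        }
    }
    return ok;
}

// Early returns: preconditions bail out immediately and the happy
// path reads straight down at a single indent level.
bool SavePlayerEarly(const std::string& name, int score)
{
    if (name.empty())
        return false;
    if (score < 0)
        return false;
    // ... the actual save work ...
    return true;
}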

I think Code Complete mentions it (and gives pros and cons), where it's not really arguing for its use, just presenting it as something that's sometimes done.

I've tried it a little and then decided to drop it from my own coding - I also don't often mistype = for ==, but if I were switching between multiple languages and had a compiler that doesn't issue a good warning for that mistake, it might be worth doing.


Yoda conditionals are a particular annoyance to me. They're less readable, and are not very useful if you write good tests.

More dangerous is accidentally forgetting to break at the end of a case in a switch. It's rare to write tests for things that shouldn't happen, so a case that falls through and does something extra might not be caught.
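
Something like this (made-up game states):

enum class State { Menu, Playing, Paused };

void Update(State s)
{
    switch (s)
    {
    case State::Playing:
        // UpdateWorld();
        break;
    case State::Paused:
        // DrawPauseOverlay();
        // Missing break: Paused now ALSO falls through and draws the
        // menu. A test that checks pause behaviour still passes, since
        // everything it expects did happen, plus something extra.
    case State::Menu:
        // DrawMenu();
        break;
    }
}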

Quite good suggestion, BUT: in the cleanup function, check for NULL and, if the pointer isn't NULL, delete it and set it to NULL. :) I know that delete checks whether the pointer is NULL, but for teaching purposes it is good to suggest that. Also, setting it to NULL after deleting is not mandatory, but it is, again, good practice and, perhaps, would keep the user from double-deleting the same pointer and/or accessing it after deletion.


After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice. Let's say that I do call delete on a pointer twice. If I set it to NULL, nothing happens and the error is never found and corrected. However, if I don't, the program crashes and the error gets fixed. It would seem better practice not to give yourself the ability to make mistakes silently in the first place.

Late reply, but better late... you know the rest.

There are two kinds of "best practices".

The first one is the over-zealous, almost religious, fanatic approach: "the program should blow to bits as soon as I do something stupid, so I get all the context I need in order to fix it". This is wonderful, and for a while I was a zealot for it. But it is only good IN TESTING CONDITIONS, when you have the means to do something about the problem and another crash won't matter that much.

The second one is the motherly, caring, "peace to the world" type of thinking, in which you try to recover and give the program as many chances as you can to continue as if nothing happened. This is good for release code, where a crash is the worst thing you could do.

Try to have both, and make it easy to switch between them.

Think of it as a theatre play or a live show. During rehearsals, the director and actors stop at every mistake, correct it and start over; that's what rehearsals are for. But during a live performance, if they stumble, they do whatever they can to carry on until the end of the show and recover the normal flow as soon as possible. Stopping the event and restarting it at each mistake would be too much for the audience. (Back in the game context:) not to mention that console manufacturers will usually reject your game for any crash :)
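
For what it's worth, one common way to get both behaviours and switch between them is the classic assert-in-debug, recover-in-release pattern (a sketch, with a hypothetical LoadScores stubbed out to fail):

#include <cassert>
#include <cstdio>

// Hypothetical loader, stubbed here to fail:
int* LoadScores() { return nullptr; }

void ShowHighScore()
{
    int* scores = LoadScores();

    // Rehearsal mode: in a debug build, blow up right here with the
    // full context available in the debugger.
    assert(scores != nullptr);

    // Live-show mode: in a release build (NDEBUG), recover and
    // carry on as if nothing happened.
    if (scores == nullptr)
    {
        std::printf("High scores unavailable\n");
        return;
    }

    std::printf("High score: %d\n", scores[0]);
}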

