
# Try/catch absurdity and calling destructors...

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

21 replies to this topic

### #1MarkS  Members

Posted 15 October 2012 - 12:26 PM

Let's say I have a class with several pointers that are allocated in the constructor:


class some_class{
public:
	some_class(int);
	~some_class();

private:
	int	 *ptr1;
	int	 *ptr2;
	int	 *ptr3;
	int	 *ptr4;
};



In the constructor, I would do this:


some_class::some_class(int some_val)
{
	try{
		ptr1 = new int(some_val);
	}
	catch(std::bad_alloc &){
		std::cout << "Unable to allocate memory." << std::endl;
		return; // Skip the rest of the constructor.
	}

	// Begin the absurdity...
	try{
		ptr2 = new int(some_val);
	}
	catch(std::bad_alloc &){
		delete ptr1; // We can safely assume that ptr1 was allocated by this point.
		ptr1 = NULL; // Safe delete...
		std::cout << "Unable to allocate memory." << std::endl;
		return; // Skip the rest of the constructor.
	}

	try{
		ptr3 = new int(some_val);
	}
	catch(std::bad_alloc &){
		delete ptr1; // We can safely assume that ptr1 was allocated by this point.
		ptr1 = NULL; // Safe delete...
		delete ptr2; // We can safely assume that ptr2 was allocated by this point.
		ptr2 = NULL; // Safe delete...
		std::cout << "Unable to allocate memory." << std::endl;
		return; // Skip the rest of the constructor.
	}

	try{
		ptr4 = new int(some_val);
	}
	catch(std::bad_alloc &){
		delete ptr1; // We can safely assume that ptr1 was allocated by this point.
		ptr1 = NULL; // Safe delete...
		delete ptr2; // We can safely assume that ptr2 was allocated by this point.
		ptr2 = NULL; // Safe delete...
		delete ptr3; // We can safely assume that ptr3 was allocated by this point.
		ptr3 = NULL; // Safe delete...
		std::cout << "Unable to allocate memory." << std::endl;
	}
}



As you can see, depending on the number of pointers to be allocated, this can approach absurd levels rather quickly. However...


some_class::some_class(int some_val)
{
	try{
		ptr1 = new int(some_val);
	}
	catch(std::bad_alloc &){
		std::cout << "Unable to allocate memory." << std::endl;
		return; // Skip the rest of the constructor.
	}

	// Much more clarity, less absurdity...
	try{
		ptr2 = new int(some_val);
	}
	catch(std::bad_alloc &){
		some_class::~some_class();
		std::cout << "Unable to allocate memory." << std::endl;
		return; // Skip the rest of the constructor.
	}

	try{
		ptr3 = new int(some_val);
	}
	catch(std::bad_alloc &){
		some_class::~some_class();
		std::cout << "Unable to allocate memory." << std::endl;
		return; // Skip the rest of the constructor.
	}

	try{
		ptr4 = new int(some_val);
	}
	catch(std::bad_alloc &){
		some_class::~some_class();
		std::cout << "Unable to allocate memory." << std::endl;
	}
}

some_class::~some_class()
{
	if(ptr1 != NULL)
	{
		delete ptr1;
		ptr1 = NULL;
	}
	if(ptr2 != NULL)
	{
		delete ptr2;
		ptr2 = NULL;
	}
	if(ptr3 != NULL)
	{
		delete ptr3;
		ptr3 = NULL;
	}
	if(ptr4 != NULL)
	{
		delete ptr4;
		ptr4 = NULL;
	}
}



This is much clearer and it compiles. However, I have heard that calling destructors directly is either a bad thing or frowned upon. Am I doing this correctly in the first place? Is there a better way? Is it OK to call the destructor in this case?

Edited by MarkS, 15 October 2012 - 12:30 PM.

### #2Brother Bob  Moderators

Posted 15 October 2012 - 12:35 PM

POPULAR

Use a smart pointer class instead and let the compiler do all that job for you.
struct some_class {
	some_class(int some_val) :
		ptr1(new int(some_val)),
		ptr2(new int(some_val)),
		ptr3(new int(some_val)),
		ptr4(new int(some_val))
	{ }

	std::unique_ptr<int> ptr1, ptr2, ptr3, ptr4;
};


Edited by Brother Bob, 15 October 2012 - 12:36 PM.

### #3MarkS  Members

Posted 15 October 2012 - 12:40 PM

True, but for the sake of argument...

Edited by MarkS, 15 October 2012 - 12:40 PM.

### #4Bregma  Members

Posted 15 October 2012 - 12:40 PM

POPULAR

What Brother Bob said, except if you have an old out-of-date compiler you can choose to use std::auto_ptr instead. RAII FTW (sounds like a kitten fight).

Explicitly calling the destructor does not do what you think it does. In particular, calling the destructor of an object from within its constructor will not affect the members of the class, and you will still leak just as badly as before.
Stephen M. Webb
Professional Free Software Developer

### #5Bregma  Members

Posted 15 October 2012 - 12:42 PM

BTW, what is this "safe delete" thing?
Stephen M. Webb
Professional Free Software Developer

### #6MarkS  Members

Posted 15 October 2012 - 12:42 PM

Explicitly calling the destructor does not do what you think it does. In particular, calling the destructor of an object from within its constructor will not affect the members of the class, and you will still leak just as bad as before.

Interesting! OK, time to stop using C-style pointers...

BTW, what is this "safe delete" thing?

It comes from a book, "Teach yourself C++ in 24 hours." I don't remember the exact reason and no longer have the book, but something about calling delete on a NULL pointer is safe, but calling it on an uninitialized pointer can lead to problems. The book mentioned setting the pointer to NULL after delete in the event that delete is called twice (why this would happen, I do not know...). I have always done this.

Edited by MarkS, 15 October 2012 - 12:46 PM.

### #7Bregma  Members

Posted 15 October 2012 - 01:16 PM

If you absolutely want to avoid using smart pointers, you could try using a cleanup member function.

class some_class{
public:
some_class(int);
~some_class();

private:
void cleanup();

private:
int	 *ptr1;
int	 *ptr2;
int	 *ptr3;
int	 *ptr4;
};

some_class::some_class(int some_val)
: ptr1(nullptr), ptr2(nullptr), ptr3(nullptr), ptr4(nullptr)
{
try
{
ptr1 = new int(some_val);
ptr2 = new int(some_val);
ptr3 = new int(some_val);
ptr4 = new int(some_val);
}
catch (...)
{
cleanup();
throw;
}
}

some_class::~some_class()
{
cleanup();
}

void some_class::cleanup()
{
delete ptr4;
delete ptr3;
delete ptr2;
delete ptr1;
}

This takes advantage of the fact that it's OK to use the delete operator on a pointer equal to nullptr.
Stephen M. Webb
Professional Free Software Developer

### #8Servant of the Lord  Members

Posted 15 October 2012 - 01:17 PM

OK, time to stop using C-style pointers...

More properly: "Time to stop defaulting to using C-style pointers to manually manage memory".
You can still use C-style pointers for non-memory management...
But you shouldn't manually manage memory yourself...
...unless you actually need to for performance reasons (which it sometimes is, even on normal projects).

It comes from a book, "Teach yourself C++ in 24 hours." I don't remember the exact reason and no longer have the book, but something about calling delete on a NULL pointer is safe, but calling it on an uninitialized pointer can lead to problems. The book mentioned setting the pointer to NULL after delete in the event that delete is called twice (why this would happen, I do not know...). I have always done this.

That's from someone trying to manually manage memory who, instead of fixing the problem (delete gets called twice), pretends the problem doesn't exist by hiding it (delete won't delete a null pointer).

The most direct solution is: don't call delete twice on the same pointer.
An even better solution is: don't call delete (or new) at all*; let smart pointers do it for you.

*See previous comment of, 'except when you actually need to'

[edit:] That's not entirely to say setting an invalid pointer to null is bad. Dereferencing a null pointer crashes your program, which is good! So if there is an opportunity for a pointer to be used after it's been deleted, set it to null... but you shouldn't actually be calling new or delete to manage memory. If I have a raw pointer (that isn't managing memory) pointing at something, null is the ideal value to assign when it's not valid, because dereferencing null is guaranteed to crash, while dereferencing random memory might crash, or might do something incredibly weird that won't show up for weeks or months.
In some cases it even makes sense (usually to avoid unnecessary checks, for cleaner code) to delete twice, since it has no effect - but it's important to A) know the reason why your raw pointers are set to NULL, and B) not use it to "solve" a program crashing, but actually find out why the program is crashing.

By default:
- Prefer memory on the stack over dynamic memory.
- Prefer smart pointers over raw pointers when you actually need dynamic memory.
- Use raw pointers when you actually need performance in that one area.

Note: 'by default' does not mean 'always'. And in the same way, "prefer smart pointers" does not mean "never use raw pointers".

Edited by Servant of the Lord, 15 October 2012 - 06:31 PM.

It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames -

### #9LordJulian  Members

Posted 15 October 2012 - 01:29 PM

If you absolutely want to avoid using smart pointers, you could try using a cleanup member function.


class some_class{
public:
some_class(int);
~some_class();

private:
void cleanup();

private:
int	 *ptr1;
int	 *ptr2;
int	 *ptr3;
int	 *ptr4;
};

some_class::some_class(int some_val)
: ptr1(nullptr), ptr2(nullptr), ptr3(nullptr), ptr4(nullptr)
{
try
{
ptr1 = new int(some_val);
ptr2 = new int(some_val);
ptr3 = new int(some_val);
ptr4 = new int(some_val);
}
catch (...)
{
cleanup();
throw;
}
}

some_class::~some_class()
{
cleanup();
}

void some_class::cleanup()
{
delete ptr4;
delete ptr3;
delete ptr2;
delete ptr1;
}

This takes advantage of the fact that it's OK to use the delete operator on a pointer equal to nullptr.

Quite a good suggestion, BUT: in the cleanup function, check for NULL first, and only then delete and assign NULL. I know that delete checks whether the pointer is NULL, but for teaching purposes it is good to show that explicitly. Also, setting the pointer to NULL after deleting is not mandatory, but it is, again, good practice and would perhaps keep the user from double-deleting the same pointer and/or accessing it after deletion.

### #10LordJulian  Members

Posted 15 October 2012 - 01:35 PM

Sorry to double post, but, for learning purposes, here's another suggestion: if you want to track troublesome bugs with memory allocation (i.e. using pointers after you delete them - it happens more often than you think), set freed pointers to an invalid but easily recognizable value, something like 0xfefefefe. Then, when the program blows to bits, you look at the pointer in the debugger, and if it matches (or is close to) 0xfefefefe, you know you have this problem. Enjoy!

### #11MarkS  Members

Posted 15 October 2012 - 01:37 PM

quite good suggestion, BUT: in the cleanup function check for NULL and if not, then delete and assign to NULL. . I know that delete checks if the pointer is NULL, but for teaching purposes it is good to suggest that. Also, setting it to NULL after deleting is not mandatory, but is, again, good practice and, perhaps, would keep the user to double delete the same pointer and/or access it after deletion.

After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice. Let's say that I do call delete on a pointer twice. If I set it to NULL, nothing happens and the error is never found and corrected. However, if I don't, the program crashes and the error gets fixed. It would seem to be better practice to not give yourself the ability to do things incorrectly in the first place.

Sorry to double post, but, for learning purposes, here's another suggestion: If you want to track "troublesome bugs" with memory allocation (i.e. using pointers after you delete them - happens more often than you think), you set them to an invalid, but easily recognizable value, kinda like 0xfefefefe. Then, when the program blows to bits, you look at the pointer in the debugger, and if it matches (or it is close) the 0xfefefefe, you know you have this problem. enjoy

I like this idea.

Edited by MarkS, 15 October 2012 - 01:40 PM.

### #12Bregma  Members

Posted 15 October 2012 - 02:35 PM

POPULAR

quite good suggestion, BUT: in the cleanup function check for NULL and if not, then delete and assign to NULL. . I know that delete checks if the pointer is NULL, but for teaching purposes it is good to suggest that. Also, setting it to NULL after deleting is not mandatory, but is, again, good practice and, perhaps, would keep the user to double delete the same pointer and/or access it after deletion.

No.

If the cleanup() function is only called from the catch block of the constructor and from the destructor, there is no way it could ever get called twice accidentally. Setting the member pointers to NULL will do nothing, because the pointers are destroyed immediately. If the pointers hold values that have been deleted elsewhere, setting the members to NULL will make no difference.

I think it's better to teach how things actually work than to teach some misleading solution that will not prevent problems and will give a false sense of security. There's no advantage to superstition in the programming world.
Stephen M. Webb
Professional Free Software Developer

### #13Servant of the Lord  Members

Posted 15 October 2012 - 02:46 PM

After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice.

My point is more against manual memory management and understanding why it should or should not be set to null - I edited my post to clarify.

Edited by Servant of the Lord, 15 October 2012 - 02:47 PM.

It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames -

### #14iMalc  Members

Posted 18 October 2012 - 01:05 AM

Many things you are taught are only true in the context in which you are taught them. Outside of that learning context, they may no longer apply; in fact, the complete opposite may apply.

"Safe Delete" is one of those things and goes in the same bucket of advice such as:
All your destructors should be marked virtual
Put constants before variables in your if-statement comparisons. (Yoda expressions)
Initialise ALL variables.
Only ever call srand once in your program.
Always use quicksort instead of bubblesort.
Don't ever use macros.
Don't ever use globals.
Don't ever use unsafe functions such as strcpy.
Don't use double-negation (i.e. !!x)
etc...

When you've gained the appropriate level of knowledge and really know what you are doing and why, these turn from somewhat good advice into somewhat bad advice. Bad in the sense that they should not be followed 100% of the time.
"Safe Delete" is probably the worst of these though, in that it should be the first such advice you stop following religiously. It's there to stop you from being hindered by stupid mistakes caused by a complete lack of knowledge about how pointers work. Once you know all about pointers, you know that it is a waste of time to keep following it.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

### #15Oberon_Command  Members

Posted 18 October 2012 - 09:25 AM

Put constants before variables in your if-statement comparisons. (Yoda expressions)

Is this really that widespread? I've only encountered one programmer who did this before, and I'd never heard of it before seeing his code. I find it makes code more difficult to read than is necessary. Certainly I've never bothered with this; confusing = and == is something I do very, very rarely, so I've never seen the need for it.

Edited by Oberon_Command, 18 October 2012 - 09:28 AM.

### #16Servant of the Lord  Members

Posted 18 October 2012 - 10:00 AM

I think Code Complete mentions it (and gives pros and cons), where it's not really arguing for its use but just presenting it as something that's sometimes done.

I've tried it a little and then decided to dismiss it from my own coding - I also don't often mistype = for ==, but if I were switching between multiple languages and had a compiler that doesn't issue a good warning for that mistake, it might be worth doing.

Edited by Servant of the Lord, 18 October 2012 - 10:00 AM.

It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames -

### #17Madhed  Members

Posted 18 October 2012 - 11:50 AM

I also find this "yoda comparing" quite confusing (nice term, btw). Also, it won't protect you in the case where you are comparing two variables instead of a variable against a constant.

EDIT: I once worked on a codebase where, apparently for the sake of consistency, smaller than and greater than comparisons were switched as well...

Edited by Madhed, 18 October 2012 - 11:52 AM.

### #18jwezorek  Members

Posted 18 October 2012 - 01:09 PM

Put constants before variables in your if-statement comparisons. (Yoda expressions)

Is this really that widespread? I've only encountered one programmer who did this before, and I'd never heard of it before seeing his code. I find it makes code more difficult to read than is necessary. Certainly I've never bothered with this; confusing = and == is something I do very, very rarely, so I've never seen the need for it.

People do do it. I too find it ugly and not particularly helpful.

The other thing like this that people get religious about is only returning from a function at one place, at the end of the function. I don't find this particularly helpful either, because it often makes if-statement/conditional nesting deeper, which I find harder to read than just bailing out of the function early in the relevant cases.

Edited by jwezorek, 18 October 2012 - 01:11 PM.

### #19Slavik81  Members

Posted 18 October 2012 - 02:14 PM

I think it mentions it (and gives pros and cons) in CodeComplete where it's not really arguing for it's use but just presenting it as something that's sometimes done.

I've tried it a little, and then decided to dismiss it from my own coding - I also don't often mistype = for ==, but maybe if I was switching between multiple languages and had a compiler that doesn't issue a good warning for that mistake, it might be worth doing.

Yoda conditionals are a particular annoyance to me. They're less readable, and are not very useful if you write good tests.

More dangerous is accidentally forgetting to break at the end of a case in a switch. It's rare to test for things that you don't do, so a case that falls through and does something extra might not be caught.

### #20LordJulian  Members

Posted 28 December 2012 - 11:35 PM

quite good suggestion, BUT: in the cleanup function check for NULL and if not, then delete and assign to NULL. . I know that delete checks if the pointer is NULL, but for teaching purposes it is good to suggest that. Also, setting it to NULL after deleting is not mandatory, but is, again, good practice and, perhaps, would keep the user to double delete the same pointer and/or access it after deletion.

After reading what Servant of the Lord wrote on this, I really can no longer say it is good practice. Let's say that I do call delete on a pointer twice. If I set it to NULL, nothing happens and the error is never found and corrected. However, if I don't, the program crashes and the error gets fixed. It would seem to be better practice to not give yourself the ability to do things incorrectly in the first place.

Late reply, but better late... you know the rest.

There are two kinds of "best practices".

The first one is the over-zealous, over-religious, fanatical approach: "the program should blow to bits as soon as I do something stupid, so I get a chance to gather all the context I need to fix it". This is wonderful, and for a while I was a zealot for it. Again, this is good IN TESTING CONDITIONS, when you have the means to do something about it and another crash won't matter that much.

The second one is the motherly, loving, caring, "peace to the world" type of thinking, in which you try to recover and give the program as many chances as you can to continue like nothing happened. This is good for release code, where a crash is the worst thing you could do.

Try to have them both and to easily switch between them.

Think of this as a theater play / live show. During rehearsals, the director and actors stop at every mistake, correct it, and start over; that's what rehearsals are for. But during a live performance, if they stumble, they do whatever they can to carry on until the end of the show and recover the normal flow as soon as possible. Stopping the event and restarting it at each mistake would be too much for the audience. (Back to the game context:) not to mention that console owners will usually reject your game for any crash.
