Is Throwing From a Non-Virtual Stateless Destructor Still Bad?

7 comments, last by GenuineXP 15 years, 9 months ago
The subject/title says it all. My question is: is it still unsafe to throw from a destructor, even if it is non-virtual and the class is stateless (has no data members)? For example:
class foo {
public:
    ~foo() {
        func_very_likely_to_throw();
    }
};
...
void bar() {
    foo f;
    ... // The destruction of 'f' may cause an exception to be thrown.
}

I know about the (big) problems associated with throwing from destructors, but I'm wondering if that only applies to non-trivial objects/destructors. In this case, foo is extremely simple; it just makes a function call in its destructor that is likely to throw. Is this still a problem? If so, why? Would it corrupt the stack? (I don't plan on dynamically allocating foo objects; they'll all be autos.) Thanks. EDIT: Fixed the title. Whoops.
You shouldn't throw from a destructor, because the destructor might be running precisely because an exception is already active and the stack is unwinding. If you throw another one, two exceptions would be propagating at the same time, which the language cannot handle: the runtime calls std::terminate().
You should be able to catch the (new) exception within the destructor and handle it there, but you mustn't let it escape the destructor.
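To illustrate that, here is a minimal sketch (risky_cleanup() is a hypothetical stand-in for func_very_likely_to_throw() from the original post): the destructor catches the failure itself instead of letting it propagate.

```cpp
#include <iostream>
#include <stdexcept>

// Hypothetical stand-in for func_very_likely_to_throw() from the post.
void risky_cleanup() { throw std::runtime_error("cleanup failed"); }

class foo {
public:
    ~foo() {
        try {
            risky_cleanup();
        } catch (const std::exception& e) {
            // Handle (or at least log) the error here; never let an
            // exception escape the destructor.
            std::cerr << "cleanup error: " << e.what() << '\n';
        }
    }
};
```

With this, destroying a foo during unwinding from some other exception is safe, because the second exception never leaves the destructor.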
Quote:Original post by visitor
You shouldn't throw from a destructor because the destructor might be called because of an already active exception in the first place.

Ah, forgot about that one. That could be a serious problem.

I need an automagic mechanism for throwing stored exceptions (the idea is to free clients from having to make explicit calls to a function that throws them). The exceptions are actually being pushed onto a stack after certain operations. It's not ideal, but I prefer this over no exceptions at all.

I'm guessing there's no way to detect if another exception is already being propagated in the foo destructor, huh? I suppose explicit calls are the only way to accomplish what I want.
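For what it's worth, the standard library does let you ask: C++98 had std::uncaught_exception(), and C++17 replaced it with std::uncaught_exceptions() (plural), which returns how many exceptions are currently propagating. It is a notoriously shaky guard for deciding whether a destructor may throw (it can't tell you whether throwing from *this particular* destructor would be fatal), so the explicit-call approach is still the safer one, but the detection itself works. A small probe as a sketch:

```cpp
#include <exception>

// Records, at destruction time, whether stack unwinding from an
// active exception was in progress (requires C++17).
struct unwind_probe {
    bool* unwinding;
    explicit unwind_probe(bool* flag) : unwinding(flag) {}
    ~unwind_probe() { *unwinding = std::uncaught_exceptions() > 0; }
};
```

On a normal scope exit the probe records false; when it is destroyed because an exception is unwinding the stack, it records true.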

Thanks for the reply.
I'm not getting far with this idea. :-)

I'm trying to make it possible to store and then throw exceptions entirely in a client binary despite the fact that the actual error occurred in a plugin or engine (dynamically linked).

Instead, I think I'll try daisy chaining some C compatible function calls together that ultimately throw in the client binary, that way exceptions can be safely invoked on the client end (e.g., I won't be trying to throw exceptions between binaries).
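A sketch of that daisy-chaining idea, with hypothetical names: the client registers a plain C callback at startup; plugins report errors through it, and the throw itself happens only inside the client binary.

```cpp
#include <stdexcept>
#include <string>

typedef void (*error_callback)(int code, const char* msg);

static error_callback g_on_error = 0;

// Exported with C linkage so plugins can call across the binary boundary.
extern "C" void set_error_callback(error_callback cb) { g_on_error = cb; }

// Called from plugin/engine code when an operation fails.
extern "C" void report_error(int code, const char* msg) {
    if (g_on_error) g_on_error(code, msg);
}

// Lives in the client binary: this is the only place an exception is thrown.
void client_on_error(int code, const char* msg) {
    throw std::runtime_error(std::string(msg) + " (code " + std::to_string(code) + ")");
}
```

Since every call in the chain is plain C until the final callback, no exception ever crosses a binary boundary.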
Just a thought:

Could you have a variable (thread-safe or thread-local if necessary) that is set prior to throwing and unset after the catch? You could perhaps make a template function to set this variable and then throw, like:
template <typename T>
void do_throw(T x) {
    if (throwing) {
        // Already inside a throw. Don't throw again.
    } else {
        throwing = true;
        throw x;
    }
}

// Somewhere in another function
catch (TYPE& caught) {
    // catch code...
    throwing = false;
}


It won't help with 3rd party throws though.

EDIT:
iMalc: You're completely right - no try block in do_throw.

[Edited by - yacwroy on July 22, 2008 2:10:10 PM]
You can't have your catch handler in a different function from your try block. Of course there's no need to put any kind of try block inside do_throw though anyway.

I think the way this is sometimes handled is by moving the code out of the destructor into a shutdown function that you call before the object is deleted:
class foo {
public:
    ~foo() {}
    void shutdown() {
        func_very_likely_to_throw();
    }
};
It's not nice as such, but you don't exactly have much choice.
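A usage sketch of that pattern (func_very_likely_to_throw() is defined here as a throwing stand-in for the real operation): the exception surfaces at the explicit shutdown() call, where it can be handled normally, and the destructor that runs afterwards has nothing left to do.

```cpp
#include <stdexcept>

// Stand-in for the operation from the original post.
void func_very_likely_to_throw() { throw std::runtime_error("boom"); }

class foo {
public:
    ~foo() {}                    // guaranteed not to throw
    void shutdown() { func_very_likely_to_throw(); }
};

bool bar() {
    foo f;
    try {
        f.shutdown();            // the exception surfaces here...
    } catch (const std::exception&) {
        return true;             // ...and can be handled normally
    }
    return false;
}                                // ~foo runs here and cannot throw
```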
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
Thanks for the insight. It seems I'm going to abandon the stack approach and use "tunneling" instead, where non-client binaries (e.g., plugins or the engine) make remote calls that result in an exception being thrown from within the client binary. I've implemented a system, and it seems to work quite well.

Somehow, the stack appears to unwind properly, and objects at different scopes are destroyed as expected (I tested this to be sure). I'm hoping this isn't a niche feature of g++ though, as I haven't tested this with other compilers. In any case, I won't have to throw from any destructors, which is a very good thing. :-)

Thanks again!
One thing that caught me which might be an issue here:

Objects allocated on the heap inside a DLL use that DLL's own heap by default (on Windows, each module statically linked against the C runtime gets its own). If you try to destroy them outside the DLL, all your capacitors blow up.
- That bug took me forever to figure out.

If your exception handler steps through and deletes your items, it _might_ pick the wrong heap to destroy things. Especially if you used new inside your DLL.

My way of getting around this is to make another allocator that DLLs can use which simply calls the main program's allocator (via a passed function pointer or similar), then replace the global operator new inside the DLLs.
You have to be careful to ensure that either the DLL allocator isn't called (even by preloading code) before you pass the function pointer, or that it uses the DLL-local heap before you specify the new one (and after you exit the DLL function).
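A sketch of that forwarding-allocator idea (all names are hypothetical): the DLL starts out on its local heap, and the host installs its own allocation functions before any cross-heap traffic happens.

```cpp
#include <cstdlib>
#include <new>

typedef void* (*alloc_fn)(std::size_t);
typedef void  (*free_fn)(void*);

// DLL-local fallbacks, used until the host installs its allocator.
static alloc_fn g_alloc = std::malloc;
static free_fn  g_free  = std::free;

// Exported so the host can pass in its own allocation functions.
extern "C" void install_allocator(alloc_fn a, free_fn f) {
    g_alloc = a;
    g_free  = f;
}

// Replacing the DLL's global operator new/delete routes every
// allocation through whichever heap the host designated.
void* operator new(std::size_t n) {
    if (void* p = g_alloc(n)) return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept { g_free(p); }
```

Once install_allocator has been called, everything the DLL news comes from the host's heap, so the host can safely delete it.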



Great point. I'll have to do some looking into this.

At this point, all of the objects sent across binaries (e.g., from plugins to engine or client code) use simple C-based allocation functions (via opaque void pointers). This means that if engine or client code deletes a foreign (plugin) object, it actually results in a call to the plugin's deallocation function and does not occur locally (only a local wrapper/adapter is destroyed, which is what makes the deallocation function call).
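That matched create/destroy pattern might look like this sketch (names are hypothetical; both functions are defined locally here so the snippet is self-contained, but in practice they'd be exported by the plugin binary):

```cpp
#include <cstdlib>

// In the real system these live in the plugin binary and are reached
// through its C interface; they are defined inline here for illustration.
extern "C" void* plugin_create_widget() { return std::malloc(64); }
extern "C" void  plugin_destroy_widget(void* p) { std::free(p); }

// Local wrapper/adapter in engine or client code. Destroying it never
// frees memory locally; it forwards to the plugin's own deallocator,
// so the module that allocated is always the module that frees.
class widget_handle {
    void* p_;
    widget_handle(const widget_handle&);            // non-copyable
    widget_handle& operator=(const widget_handle&);
public:
    widget_handle() : p_(plugin_create_widget()) {}
    ~widget_handle() { plugin_destroy_widget(p_); }
    void* get() const { return p_; }
};
```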

Hopefully this saves me from this problem. When I tested to see if objects were destroyed as expected after a throw, I only tested autos. I created some structs that printed construction/destruction information and peppered the code with them at different scopes. The output was as expected.
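The tracing trick described above can even be made self-checking; a sketch (a log string stands in for the printed construction/destruction messages):

```cpp
#include <stdexcept>
#include <string>

// Appended to on construction and destruction so the unwind
// order after a throw can be inspected.
static std::string g_log;

struct tracer {
    char id;
    explicit tracer(char c) : id(c) { g_log += '+'; g_log += id; }
    ~tracer() { g_log += '-'; g_log += id; }
};

std::string unwind_order() {
    g_log.clear();
    tracer a('a');
    try {
        tracer b('b');
        throw std::runtime_error("boom");   // 'b' unwinds here
    } catch (const std::exception&) {
        g_log += '!';
    }
    return g_log;   // 'a' is destroyed after the return value is copied
}
```

The log shows the inner object destroyed during unwinding before the handler runs, and the outer object destroyed only when the function returns.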

Thanks.

