C++ lightweight exception



I have a persistence library, which boils down to one memory access method:
void read(unsigned char *dst, size_t length)
{
    if (buffer + length > buffer_end) {
        throw /* something */;
    }
    // copy into destination
    memcpy(dst, buffer, length);
}

Given modern compilers (VS 2005, recent gcc), is there an exception that results in minimal overhead? Or are exceptions too heavyweight even at best, making it worth taking the plunge and following the "fail" flag mechanism of streams? In this case the performance benefits would need to severely outweigh the change in API. I'm not interested in any extended information whatsoever, or in the location where the overrun happens. It's a boolean state: the entire archive is either fully reconstructed or left in a corrupted state.

Share on other sites
Quote:
Original post by Antheus
Or are exceptions even at best too heavy weight and it's worth taking the plunge and following the "fail" flag mechanism of streams? In this case the performance benefits would need to severely outweigh the change in API.

I'd say doubtful that you'd see a severe difference in performance, but all you could really do is put together a couple of stress tests using the two approaches and profile.

It is not just a question of performance in isolation though. If you take the fail-flag approach (and I'd guess that the iostream library works this way because it was designed before exceptions were fully integrated into the standard, given that the remainder of the standard library uses exceptions quite extensively) then performance will also depend on how often the routine is called, how often the fail flag needs to be tested and what the action taken in the event of failure was. That is notwithstanding the development time issues of either you or users of your library forgetting to test the fail flag, of course.

Stroustrup always argued that the penalties incurred by a decent implementation of exception handling would be outweighed by the removal of need for large amounts of "normal" error handling code. I guess whether that is the case or not depends as much on the application as the implementation of exceptions by the compiler.

Share on other sites
Actually, sorry about this, should have tested the obvious case first.

By using "throw;" or "throw true", and not deriving from the standard exception classes, the overhead of the exception is zero: the tests run exactly as fast as when exceptions are disabled in the build and no throw clauses are used at all.

And by using exception classes with a full error message, the performance hit is 4%.

So never mind this, there is no noteworthy overhead. Even if running without any error checks whatsoever, the maximum performance hit is negligible.

Share on other sites
If performance is critical, it may be worth testing the implications of something like:

class MyLibMemoryException
{
public:
    MyLibMemoryException() { } // inline and empty constructor
};

and throwing that rather than throwing a built-in type like a bool. I'd assume that the difference, if any, would be negligible, and it at least provides some information to the catcher about the nature of the exception.

I'd assume that the overhead from throwing the exception you have tried, including a message, would come from the chain of constructors called when creating the exception object.

Share on other sites
The overhead caused by exceptions is not in instantiating the class (so you won't save much by using a bool instead of std::exception, for example).
The problem is in unwinding the stack once the exception is thrown, and that is exactly the same operation no matter which type you throw.

(This overhead is usually no big deal in any case, but I thought it was worth pointing out that the overhead you do have is not related to which type of exception you throw)

Share on other sites
both overheads can be significant. In fact there are 3:

1. The overhead the compiler must introduce to support the possibility of an exception being thrown. This is paid pretty much throughout any project which has exception support not disabled, except in tight circumstances where the compiler has optimized with complete knowledge that no exception can actually occur.

2. The cost of creating / initializing an exception prior to throwing it.

3. The cost of unwinding the stack, and comparing exception specifiers at run-time for catching it.

Cost 2, allocating an object, is extremely low compared to 1 and 3, EXCEPT in situations that are highly multi-threaded and yet use the standard shared new operator (which of course must be thread-safe and do locking or some such). Once you go heavily multi-threaded, you learn to see memory allocation and deallocation as a potentially significant performance cost (still use profiling to determine its relevance to you). In normal programs with few threads, the cost doesn't matter at all.

Cost 3 is the big cost, and as mentioned it is the same when you throw a bool, an int, a std::exception or MySuperHugeDoEverythingException.

Share on other sites
I would have assumed what Spoonbender and Xai are saying about the type of the exception not affecting the performance, but for the fact that Antheus's profiling data appears to contradict that.

Antheus - how exactly were you profiling to get this 0 difference on a bool and 4% difference on a std::exception? Perhaps the profiling method was flawed in some way.

Share on other sites
Quote:
Original post by Xai
both overheads can be significant. In fact there are 3:
1. The overhead the compiler must introduce to support the possibility of an exception being thrown, this is paid pretty much throughout any project which has exception support not disabled, except in tight circumstances that the compiler has optimized with complete knowledge that no exception actually can occur.

Which, in any major C++ runtime published in the last half decade or so, is zero. None. Zilch. Nada. Not a sausage. Well, maybe an extra instruction or two when entering an explicit try-block, but nothing worse than the overhead of a function call. If you do the smart thing and design your software to minimize try-blocks, you will not be able to measure this Cost.
Quote:
 2. The cost of creating / initializing an exception prior to throwing it.

Yes, this may incur a Cost. The exception object must be constructed, and then in all likelihood copied. A simple object (like a bool, or a class my_exception { };) will have pretty close to zero construction or copy cost. A class like std::runtime_error, which constructs a std::string and copies it (likely invoking ::operator new twice), can have a non-trivial cost, but it is more than likely to disappear into the background noise.
Quote:
 3. The cost of unwinding the stack, and comparing exception specifiers at run-time for catching it.

This is the big expense. It's very expensive. This pays for Cost 1 being zero by following the pay-only-for-what-you-use principle that C++ was designed around. The same principle is used to move all of the constant overhead of C-style return value checking into this phase.

This Cost is what leads to the general guideline of using exceptions only for exceptional things. You pay only for what you use, but you pay a premium for that convenience.

--smw

Share on other sites
Quote:
Original post by EasilyConfused
I would have assumed what Spoonbender and Xai are saying about the type of the exception not affecting the performance, but for the fact that Antheus's profiling data appears to contradict that.
Antheus - how exactly were you profiling to get this 0 difference on a bool and 4% difference on a std::exception? Perhaps the profiling method was flawed in some way.

The code is modelled after the boost serialization API (that one is unsuitable for me); all parameters to functions are templated, and most of the library gets inlined by the compiler. As long as no STL containers are used, the code generated is almost identical to copying memory blocks directly.

Since dozens of classes (and 30+ template generated versions) get linked into this mess, I originally suspected that exception throwing might cause invisible performance hits, such as some weird jumps, or suboptimal branching. This is not the case.

The difference is due to compiler inlining. In my case, given this is test code, the penalty is irrelevant: the compiler simply chose to inline exception creation several times, causing slightly larger code.

But even then, the difference is so small, and in some builds I've made now none, that I don't consider this to be relevant.

As I said, I should have tested the ideal case first.

In my case, this exception shouldn't be thrown. Since lengths are checked at higher level, only maliciously generated data could possibly get here. This is why my primary concern is that during normal operation, the exception mechanism stays as much out of the way as possible.

Share on other sites
Quote:
Original post by Bregma
Quote:
Original post by Xai
both overheads can be significant. In fact there are 3:
1. The overhead the compiler must introduce to support the possibility of an exception being thrown, this is paid pretty much throughout any project which has exception support not disabled, except in tight circumstances that the compiler has optimized with complete knowledge that no exception actually can occur.

Which, in any major C++ runtime published in the last half decade or so, is zero. None. Zilch. Nada. Not a sausage. Well, maybe an extra instruction or two when entering an explicit try-block, but nothing worse than the overhead of a function call. If you do the smart thing and design your software to minimize try-blocks, you will not be able to measure this Cost.

Oh really?

http://gamearchitect.net/Articles/ExceptionsAndErrorCodes.html would disagree.

Share on other sites
Quote:
Original post by Bregma
Quote:
Original post by Xai
both overheads can be significant. In fact there are 3:
1. The overhead the compiler must introduce to support the possibility of an exception being thrown, this is paid pretty much throughout any project which has exception support not disabled, except in tight circumstances that the compiler has optimized with complete knowledge that no exception actually can occur.

Which, in any major C++ runtime published in the last half decade or so, is zero. None. Zilch. Nada. Not a sausage. Well, maybe an extra instruction or two when entering an explicit try-block, but nothing worse than the overhead of a function call. If you do the smart thing and design your software to minimize try-blocks, you will not be able to measure this Cost.

I've been wondering about this. Isn't "an extra instruction or two when entering an explicit try-block" absolutely required? Otherwise, how does the stack-unwinding procedure know when to stop?

For that matter, just how does it work, anyway?

Share on other sites
Quote:
Original post by SunTzu
http://gamearchitect.net/Articles/ExceptionsAndErrorCodes.html would disagree.

It mainly disagrees when the error status is propagated up only one function call. I would be curious to see how propagating an error up five function calls (a typical situation) compares between one exception and five error return codes.

Share on other sites
Indeed; the main point was, the claim that there is zero cost and zilch overhead for exceptions is completely untrue.

I strongly suspect that regardless of whether it is one level, five, or more, the actual penalty for handling errors with error codes or exceptions will be similar. However, the point is that turning on exception handling automatically incurs a cost in just about every function in the program (not "maybe one or two extra instructions in an explicit try block"). So the situation is basically: either don't use exceptions at all and disable them completely, or enable them, in which case you might as well use them, as you're paying for them whether you use them or not.

Share on other sites
Quote:
Original post by Zahlman
I've been wondering about this. Isn't "an extra instruction or two when entering an explicit try-block" absolutely required? Otherwise, how does the stack-unwinding procedure know when to stop?

Yes. But like I said, it's about the same amount of overhead as when a function is entered. The overhead is necessary to register the catch clauses. It's sort of like pushing arguments on the stack.

Other than try-blocks (and of course throwing and catching), the use of exceptions adds absolutely no extra overhead to an application.
Quote:
 For that matter, just how does it work, anyway?

I can't speak for the Microsoft compiler, but I can describe how GCC unwinds its stack.

There are two phases. The first phase crawls back through the stack looking for a registered catch that will receive the thrown object (the RTTI system is used internally for this). If nothing is found, terminate() is called, which by default calls abort().

If a catch clause is found, the stack is unwound, one frame at a time, executing any automatic destructors as necessary, until the appropriate catch-block is reached.

It's the stack crawl and the RTTI that give exception handling most of its overhead, and that's all done during the throw.

--smw

Share on other sites
Quote:
Original post by Bregma
Other than try-blocks, (and of course throwing and catching), the use of exceptions add absolutely no extra overhead to an application.

Quote:
http://gamearchitect.net/Articles/ExceptionsAndErrorCodes.html
Neither Microsoft C++ nor GCC implement zero-overhead exception handling. Instead, both compilers add prologue and epilogue instructions to track information about the currently executing scope. This enables faster stack unwinding at the cost of greater runtime overhead when an exception isn't thrown.
...
In Microsoft C++, the overhead of exception handling in the absence of an exception is the cost of registering an exception handler as each function is entered and unregistering it as the function is exited. As far as I can tell, that's three extra push instructions and two movs in the prologue, and three more movs in the epilogue.

Sigh.

Share on other sites
Quote:
Original post by SunTzu
Sigh.

I won't try to speak for Bregma, but I believe the relevant metric here is extra overhead, and your article only mentions total overhead. To evaluate the extra overhead of exception handling, you take the total overhead of exception handling and subtract the total overhead of handling errors through return codes. Because, well, we're comparing two methods here, not comparing exception handling to the absence of any handling at all (where we already know that exception handling is slower).

So, let's consider a typical example:

// Exception handling method:
// We call the function and store the result. If an exception is
// triggered, we let it propagate up the call tree.
int number = computation();

// Return code method:
// We call the function and check the return code. If it indicates
// failure, we propagate the error up the call tree.
int number;
if (computation(number)) {
    return ERROR;
}

I have voluntarily removed the cleanup code from both examples, since it will be fairly similar. I have also considered "number" to be an integer. In the case of a class, it would have been trickier (a design problem if the class has no default constructor) and would have caused optimization problems in the second case because of the redundant initialization. Note that I've also considered the average case where an exception will propagate up the call stack through several functions before reaching the corresponding catch: in an average program, the number of try blocks is much smaller than the number of function calls, so the per-function overhead of try block breadcrumb laying is negligible.

So, the first call has three push and five mov in addition to the standard function call, which is the expected overhead for exceptions. On the second call, we have at least one compare, one jump and one additional argument. So, how do the two compare?

A possible counter-argument is the fact that error handling does not span the entire project. If error handling only occurs in a small part of your project, then it is clear that said portion is cleanly separated from the rest. You may then gain from delegating that portion to a library compiled with exceptions enabled, and the rest of the project to one or more libraries compiled with exceptions disabled, both on the performance front and on the design front.

Share on other sites
Something to note:
Considering the try setup cost is all fine, but we are talking about catching exceptions here.

If performance is going to be an issue, maybe you should look at your code more closely. I'd suggest that having so many try-catch blocks that they become a performance concern would indicate a bad design.

Similarly, there should be no need to worry about the cost of throwing an exception. Something should have gone horribly wrong; otherwise you would have been checking a return value or error flag, not hitting the panic button.

Just my 2 cents, try-throw-catch is just a glorified goto after all.

Share on other sites
Since I started what threatens to become a holy war, here's my take on it.

Exceptions need to introduce some overhead. In some cases, this may be critical.

But my original question referred to a "real-world" example, namely its use in a persistence library, where the exception will never be thrown.

In that case, building and linking the entire library (some 15 template classes supporting 30 to 80 types in the final version) either with exceptions completely disabled, or with proper exceptions, resulted at worst in 4% overhead, but depending on what inlining produces, usually in 0 overhead.

While there may be some overhead on paper, the simple cost of serializing STL collections of strings and other objects dwarfs any overhead caused by exceptions.

I feel that this is important because of the premature optimization fallacy. Exceptions in my case make the code elegant. There isn't a single command other than << or >> in any of the persistable classes, and the only exception handler is around the network or file code that reads raw data.

And this was my main concern. Is it worth throwing away a simple, reliable design that completely hides the implementation just to avoid some performance penalty? The answer is a big NO.

As always, knowing the details of the various platforms may help, but premature optimization in this case would probably cause quite a few headaches and perhaps even a few time bombs.

Share on other sites
Quote:
Original post by SunTzu
Quote:
http://gamearchitect.net/Articles/ExceptionsAndErrorCodes.html
Neither Microsoft C++ nor GCC implement zero-overhead exception handling. Instead, both compilers add prologue and epilogue instructions to track information about the currently executing scope. This enables faster stack unwinding at the cost of greater runtime overhead when an exception isn't thrown.

Consider this (the Gypsy lays her cards out on the table...)

If a C++ compiler were required to emit prologue and epilogue code for each and every stack frame it emits, how would you ever hope to link to the vast majority of third-party libraries and code bases written in C? You couldn't. The function prologue emitted for a C function, a C++ function with exceptions disabled, and a C++ function with exceptions enabled are identical.

The compiler designers at the major vendors realized years ago that reducing the runtime overhead of a throw at the expense of slower overall response was a bad tradeoff. They responded to the complaints, and there were many. Runtime overhead due to enabled exceptions is a thing from a previous century.

Throwing and catching exceptions is expensive. It should be done only under exceptional circumstances.

--smw

Share on other sites
Quote:
Original post by Bregma
If a C++ compiler were required to emit prologue and epilogue code for each and every stack frame it emits, how would you ever hope to link to the vast majority of third-party libraries and code bases written in C? You couldn't.

Actually, you could.

For instance, the prologue and epilogue could be made part of the function itself, without affecting that function's linkage — from the outside, it would only appear as if the function was 'playing' with the stack upon call and return, which is a pretty innocent and indifferent thing for a function to do.

Then, the exception is thrown, reaches the nearest compatible catch breadcrumb on the stack, and unwinds everything using the jump information that was placed on the stack by the prologue (and wasn't removed by the epilogue).

To be honest, I cannot seem to find a way to perform stack unwinding correctly without a prologue and epilogue adding some information to the stack in addition to the usual stack frame data.

Share on other sites
Quote:
Original post by Bregma
Quote:
Original post by SunTzu
Quote:
http://gamearchitect.net/Articles/ExceptionsAndErrorCodes.html
Neither Microsoft C++ nor GCC implement zero-overhead exception handling. Instead, both compilers add prologue and epilogue instructions to track information about the currently executing scope. This enables faster stack unwinding at the cost of greater runtime overhead when an exception isn't thrown.

Consider this (the Gypsy lays her cards out on the table...)

If a C++ compiler were required to emit prologue and epilogue code for each and every stack frame it emits, how would you ever hope to link to the vast majority of third-party libraries and code bases written in C? You couldn't. The function prologue emitted for a C function, a C++ function with exceptions disabled, and a C++ function with exceptions enabled are identical.

The compiler designers at the major vendors realized years ago that reducing the runtime overhead of a throw at the expense of slower overall response was a bad tradeoff. They responded to the complaints, and there were many. Runtime overhead due to enabled exceptions is a thing from a previous century.

Throwing and catching exceptions is expensive. It should be done only under exceptional circumstances.

--smw

While the essence of what you are trying to convey may be correct (that enabling exceptions today does not have as bad a performance cost as it used to, and is likely very, very small), the details you use to say it are flawed. You are wrong, plain and simple - or at least unless I have been smoking something lately.

1. How would it be possible, in any conceivable world, to know what to do during stack unwinding, given no information about the objects that have been allocated?

2. C does not normally have ANY run-time information about objects or memory allocated, and neither does C++ when RTTI is disabled.

Perhaps you are saying that much of the cost of exception support is paid already when RTTI support is included, and the ADDITIONAL cost is either 0 or negligible (which may be true, I'm not sure). But fundamentally there MUST be a runtime cost paid for C++ exception support over C functions. There is no imaginable way to create information out of thin air, and in C no run-time information exists.

As for your logic that the C++ function "prolog" must match C's for compatibility: there is the caller code, which must be the same for compatibility, and then the callee code, which is absolutely NOT the same between them. Write 2 dynamic link libraries, one of which calls a function in the other. In version 1, turn off exception and RTTI support in both, and view the disassembly. In version 2, turn on exception and RTTI support in both, and view the disassembly.

They cannot be the same, even if neither library ever has the word "throw" or "exception" or "typeof" anywhere in them.

I say "dynamic link" libraries on purpose, because in any other case the compiler can optimize based on the full reality it sees. But in a dynamic library which uses external code, no such assumptions can be made... so you will see the true cost of such general support.

Share on other sites
Quote:
Original post by Bregma
Yes. But like I said, it's about the same amount of overhead as when a function is entered. The overhead is necessary to register the catch clauses. It's sort of like pushing arguments on the stack.

Fine; logical enough.

Quote:
 Other than try-blocks, (and of course throwing and catching), the use of exceptions add absolutely no extra overhead to an application.

This seems semantically null. In what other ways could I possibly (erroneously) *expect* exceptions to add overhead?

Quote:
There are two phases. The first phase crawls back through the stack looking for a registered catch that will receive the thrown object (the RTTI system is used internally for this). If nothing is found, terminate() is called, which by default calls abort().
If a catch clause is found, the stack is unwound, one frame at a time, executing any automatic destructors as necessary, until the appropriate catch-block is reached.

I was hoping for a bit more detail. What does a "catch registration" look like, and how is it distinguished from random stuff on the stack (e.g. function parameters) that happen to have a compatible bit pattern? (I assume the registration data includes some kind of vtable-pointer-like thing which is processed by "the RTTI system"?) Does the try-block create an actual separate stack *frame*, then? (Which I assume is still pretty cheap :) The real overhead of functions is in pushing all the arguments around, no?) Also, I take it that cleanup for each scope is abstracted into a separate subroutine (in the 'jmp' sense - no need for call and ret - just making a call to each necessary dtor) so that the exception handler can jump to the cleanup for each unwound frame?

ToohrVyk: Thanks for the link - hopefully it will make me look like an idiot for continuing to ask this stuff ;)

Quote:
Original post by RdF
Just my 2 cents, try-throw-catch is just a glorified goto after all.

Ugh. try-throw-catch is only "a glorified (non-local) goto" in the sense that for or while is "a glorified (local) goto". Except that it does more (i.e. it calls destructors, which you can't do with setjmp and longjmp).