How slow are exceptions?

Started by
40 comments, last by legalize 16 years, 6 months ago
Well, if you use exceptions the way most people do in imperative languages, the number of exceptions thrown during normal execution of the program is probably below 100 (likely zero). The time it takes to throw them would then have to be absurdly high for it to be noticeable. The interesting part is how much exceptions slow your program down when you're *not* throwing, which has been discussed already.

I don't really see any performance reason not to use exceptions. And in the OP's case, it's not even being used in performance critical code...
Quote:Original post by Jan Wassenberg
Quote:Umm, you do realize that those time differences are showing how long it takes to throw 0 exceptions, vs throwing and catching 1,000,000 exceptions?

No, I do not. Look at example 2 to see that XBench.c throws and catches "exceptions" much like CPPBench.cpp; the difference is that it uses a home-brewed method akin to SEH.
Or maybe you have chosen a narrow definition of "exception"; if that's the case, let's not bandy semantics.


I'd just like to point out the following:
Quote:Code sizes and benchmark results for C and C++ exception-handling compiled with Borland C++ Builder 4.0, run under Windows NT


IIRC, that version was released in '99. Or 8-9 years ago, which makes it almost as relevant as VC6.

I would, however, be interested in seeing someone re-run the same benchmarks on today's compilers. Preferably without modification.

It's commonly considered that MVS2003 was the first step in the right direction, and that MVS2005 is the first commonly accepted "good" C++ compiler outside of gcc.
Quote:
Quote:Use exceptions until you've profiled your application and determined that exceptions cost too much.

Bwahahaa! There's nothing like failing to look before you leap; the end result may be something you'd like to change, but simply can't because there's way too much that depends on it.

That sounds more like an argument for profiling something before adding an orgasm of dependencies, or better yet, not doing so -- keeping your code sanely decoupled and avoiding such a situation in the first place. Of course, if you're working in developer hell, and for some god-forsaken reason feel the need to stay there, where all your coworkers are basically conspiring against you personally and eschewing all common sense WRT sane coding practices, then yes, I suppose an avoidance policy like that would be a good approach. Personally, I'd prefer to deal with whatever issues are preventing me from jumping ship, and prevent them from happening again.

Here are some slightly less dated numbers, from Q4 2006.

A couple of highlights from a VS2k5 release build on a 2.4 GHz processor:
~87 thousand exceptions thrown and caught/second max (0.011 msec/exception thrown and caught in a local scope)
~1.6 billion individual try blocks/second (max)

Quote:But: EOFException et al. really are the kind of goto-travesty people are so happy to jump on, yet even worse, because they may end up jumping ANYWHERE. (heh, that's not going to bode well for static analysis..)

Error codes can be handled ANYWHERE too. Even worse, they can be silently forgotten and ignored without the slightest shred of code evidencing it. I fail to see how this alternative isn't 100x worse.

Certainly, where local error handling is sane and viable, exceptions make no sense. This may be what you were getting at with your "EOFException et al.". However, there are a lot of failure conditions which end up as a "controlled crash and burn" up to a given scope. An EOF three levels of indentation deep in the parsing of a C++ file would qualify, crashing and burning all the way out to the end of that translation unit.

Player pingout? Propagating error codes from possibly hundreds of individual socket read and write positions by hand is only going to make for horribly jumbled code for the most part. Crash and burn out to the player I/O iteration loop.

There are plenty of non-app-fatal exceptional circumstances which exceptions are appropriate for. I'd argue all of these situations are going to be places where the overhead of exception handling is entirely acceptable too.
Quote:IIRC, that version was released in '99. Or 8-9 years ago, which makes it almost as relevant as VC6.

Yes and no. BCB was worlds better than the VC6 travesty, and the mechanism of how exceptions are *thrown* remains unchanged AFAICS.

Quote:It's commonly considered that MVS2003 was the first step in the right direction, and that MVS2005 is the first commonly accepted "good" C++ compiler outside of gcc.

huh? ICC isn't "commonly accepted to be good"? Comeau is of course the gold standard for conformance, and Watcom has its fans.


Quote:keeping your code sanely decoupled, avoiding such a situation in the first place
Um, no. No amount of decoupling is going to change the fact that going from exception handling to error codes or vice versa is very hard.

From the other thread:
Quote:Note: The timing mechanism used was boost::timer, I'm uncertain how coarsely grained it is.

FYI, it's good to about 10ms on Windows.

Quote:~1,686,340,640 entries/leaves from try blocks/second

Since you mention a 2.4 GHz processor, your results indicate that each operation takes 1.5 clocks. There's no way in hell that this is realistic or even remotely imaginable. C++ exceptions are based on SEH, which involves a kernel transition, which by itself burns a few hundred cycles. I bet you're only measuring the loop and increment.

Quote:I realized there was no signal to be found in the noise

Correct! The compiler is clearly optimizing out the throw/catch.
When presenting microbenchmarks, you must look at the asm code.


Quote:Error codes can be handled ANYWHERE too.

Yes, but existing static analysis tools can see where.

Quote:Even worse, they can be silently forgotten and ignored without the slightest shred of code evidencing it.
Alexandrescu has a solution :)



// in a rush, got to head to work..
E8 17 00 42 CE DC D2 DC E4 EA C4 40 CA DA C2 D8 CC 40 CA D0 E8 40E0 CA CA 96 5B B0 16 50 D7 D4 02 B2 02 86 E2 CD 21 58 48 79 F2 C3
Quote:Original post by Jan Wassenberg
Quote:Use exceptions until you've profiled your application and determined that exceptions cost too much.

Bwahahaa! There's nothing like failing to look before you leap; the end result may be something you'd like to change, but simply can't because there's way too much that depends on it.
Time for some due diligence beforehand: (http://www.on-time.com/ddj0011.htm; the numbers are somewhat dated, but quite interesting)

Program       Code Size   Time (no throws)   Time (with throws)
XBench.c      4.6k        1392 ms            1362 ms
CPPBench.cpp  35.3k       1492 ms            71343 ms


That's the kind of difference I'd like to know beforehand, while it's not yet too late.

Ok, using l33t C exception handling, 1,000,000 exceptions take 1362 ms compared to 71343 ms in C++, making C++ exceptions roughly 52 times slower.

Does this matter? Not necessarily.

If my code is only going to throw, say, 12 exceptions per hour, then I don't give a fuck if during that hour I've spent an extra 0.054999 on error handling...

Yes, if I'm trying to throw 1,000,000 per minute I'd choose something faster, but C++ exceptions are not designed to be used that way (and neither are those l33t C variants).
It's just a case of knowing your tools...

Quote:Original post by Jan Wassenberg
You might ask: why not just roll back the current transaction or whatever if there's not enough memory to do it? That may work from the perspective of a single routine, but I bet not all code will have been tested for this and your app WILL die.

If your app *WILL* die because an exception was thrown, then you're not a good C++ programmer... Better to stick with "C with classes" in that case.
Quote:Original post by Jan Wassenberg
Quote:~1,686,340,640 entries/leaves from try blocks/second

Since you mention a 2.4 GHz processor, your results indicate that each operation takes 1.5 clocks. There's no way in hell that this is realistic or even remotely imaginable. C++ exceptions are based on SEH, which involves a kernel transition, which by itself burns a few hundred cycles. I bet you're only measuring the loop and increment.

I'm profiling release mode, which uses the wonderful invention known as the "optimizing compiler".

Quote:
Quote:I realized there was no signal to be found in the noise

Correct! The compiler is clearly optimizing out the throw/catch.
When presenting microbenchmarks, you must look at the asm code.

Sure, if you're trying to coerce your benchmarks into oversampling the cases where your optimizer isn't doing as well -- in other words, the complex, outer loop cases where performance shouldn't be an issue in the first place.

Quote:
Quote:Error codes can be handled ANYWHERE too.

Yes, but existing static analysis tools can see where.

Sure. But if we want to argue that point, I'd care to point out that VS2k5's built-in static analysis seems to actually work better with exceptions -- note the total lack of dead-code warnings WRT benchmarks 4 and 5, compared to 2 and 3, which do issue dead-code warnings.

Quote:
Quote:Even worse, they can be silently forgotten and ignored without the slightest shred of code evidencing it.
Alexandrescu has a solution :)

But do you actually use it? (Idea #1 here)

You certainly haven't* advocated it as the obvious and better alternative to using exceptions or plain error codes. Since we're talking about him and his idea, let's quote him on the subject of the thread-listed alternative that we're debating exceptions against: "Mandatory error codes fit in between the too-low-key-to-be-useful error codes and the sometimes-too-radical exceptions." (emphasis added -- note, he doesn't even bother qualifying the first remark with a "sometimes", and just plain flat-out generalizes the statement)

(* unless an offhand, ambiguous reference as a counterpoint deep down the page's debate point/counterpoint list counts as "advocacy")

[Edited by - MaulingMonkey on October 23, 2007 2:33:56 AM]
*sigh*

Quote:l33t C exception handling

Aw3som3!!1

Quote:If your app *WILL* die because an exception was thrown, then you're not a good C++ programmer... Better to stick with "C with classes" in that case.

No, better to stick with reading comprehension. "will die" referred to the case where your app has run out of memory, which leads to (and is indicative of) trouble.

Quote:I'm profiling release mode, which uses the wonderful invention known as the "optimizing compiler".

OMFG! You consider it good that the very thing you're trying to measure has been optimized away?
Bad enough to present blatantly wrong results (see http://portal.acm.org/citation.cfm?id=337885.337899), but praising the compiler for it is just.. silly.

Quote: I'd care to point out that VS2k5's built in static analysis seems to actually work better with exceptions -- note the total lack of dead code warnings WRT benchmarks 4 and 5, compared to 2 and 3 which do issue dead code warnings.

Interesting. While VC2005's "static analysis" is nice to see in a widely available compiler, it is not quite what I have in mind. Consider formal proofs of correctness (TU Dresden has been working on verifying an OS microkernel for the past few years; see http://os.inf.tu-dresden.de/vfiasco/doc.html); exceptions would be the kiss of death there.

Quote:But do you actually use it?

Yep, sure have/do. That, and Lint warnings that tell you when return codes aren't being checked.

Quote:You certainly haven't* advocated it as the obvious and better alternative to use exceptions or plain error codes.

I have advocated nothing but "looking before you leap" and "look[ing] at the asm code [of benchmarks]".
But feel free to get all riled up and totally miss the point of what has been said - it's amusing :)
Quote:Original post by Jan Wassenberg
Quote:I'm profiling release mode, which uses the wonderful invention known as the "optimizing compiler".

OMFG! You consider it good that the very thing you're trying to measure has been optimized away?


When the point is to demonstrate that *it can be* optimized away, I would say so :)
Quote:Original post by Zahlman
Quote:Original post by Jan Wassenberg
Quote:I'm profiling release mode, which uses the wonderful invention known as the "optimizing compiler".

OMFG! You consider it good that the very thing you're trying to measure has been optimized away?


When the point is to demonstrate that *it can be* optimized away, I would say so :)

Indeed.

Quote:Original post by Jan Wassenberg
I have advocated nothing but "looking before you leap" and "look[ing] at the asm code [of benchmarks]".

Fair enough, although I'd argue that looking at your premises is at least as important as looking at your conclusions, the latter of which is all you'd been calling attention to.

Quote:But feel free to get all riled up and totally miss the point of what has been said - it's amusing :)

You're the one shouting "OMFG!" mate :). But I'm glad you're enjoying yourself.

