
C++ exceptions


Hi,

 

In various places around the internet I've seen strong recommendations against using exceptions in C++, for a variety of reasons (performance, semantics, code size, etc.). Can anyone here comment on whether these arguments still hold today? I agree that exceptions aren't necessarily the best means of error handling, but in some cases they meet requirements that return-code-style handling simply does not.

 

As game developers I'm sure many of you are biased towards disabling them completely (for the reasons above), but if that's your suggestion I'd appreciate it if you could supply hard evidence from modern compilers to support it.

 

Thanks

I do agree with what Sean said.

Exception safety is a complex beast, and these arguments remain.

It used to be that memory and performance were the main concerns with enabling exceptions, as was the case with many C++ features we now use blindly without concern. In days of old, inheritance was considered a memory- and CPU-intensive feature.

Do certainly be aware of the issues Sean raised, they are still valid and exceptions can be a minefield waiting to happen.

As I said, use them sparingly and for exceptional circumstances. Don't fall into the Java trap of throwing silly exceptions for every minor program condition! -- C++ is not Java, and it is not C#! :)


4) the no-throw guarantee. No matter what, your function can never be involved in exceptional stack unwinding.


I would instead put it as "this function will never throw an exception". A little clearer wording.
 

If exceptions are enabled, you should aim for level 3/4 guarantees.


Why? There is nothing wrong with the "basic guarantee" that says "nothing is leaked and I am in a valid, if unspecified, state".

As yourself and others have already stated, trying to provide the "strong" or "no-throw" guarantee is very hard, sometimes impossible, and usually has large penalties.

But there is little to no reason to go for it in a game. Heck, the standard library itself only provides "strong" or "no-throw" when it can do so for little to no cost, and guarantees to provide the "basic" level of exception safety across the board. Edited by SmkViper
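To make the "basic guarantee" concrete, here's a minimal sketch (ScoreBoard and its methods are invented for illustration): an operation may fail part-way, but the object stays valid and usable, with nothing leaked and no rollback required.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Basic-guarantee sketch (ScoreBoard is an invented type): an operation may
// fail part-way, but the object stays valid and usable, with nothing leaked.
class ScoreBoard {
public:
    void add(int score) {
        if (score < 0)
            throw std::invalid_argument("negative score");
        scores_.push_back(score);  // std::vector keeps itself valid on throw
    }
    std::size_t size() const { return scores_.size(); }

private:
    std::vector<int> scores_;
};

// Illustrative check: the board survives a failed add.
inline bool survives_failed_add() {
    ScoreBoard b;
    b.add(1);
    try {
        b.add(-1);                 // fails, but b's invariants still hold
    } catch (const std::invalid_argument&) {
    }
    b.add(2);                      // still perfectly usable
    return b.size() == 2;
}
```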


1) C++ exceptions with modern compilers are zero cost (or very close to it) when not thrown.
2) Exceptions can even improve performance by avoiding lots of checks and cache invalidations that checked error codes can cause.
3) Exceptions are the only way to return errors from constructors.
4) Exceptions cannot be silently ignored.
5) Exceptions make error handling obvious and easier to separate from function logic.

1) No, this depends more on the target architecture than the compiler. A modern compiler for x86-64 can do cheap exceptions. For other architectures, not so much.
2) I'd like to see that benchmark. Exceptions are just syntactic sugar to automatically insert those checks for you (removing the human error of forgetting to propagate an error). They also insert these checks in places where they aren't needed (the compiler doesn't know if an external function call might throw or not), harming performance. In either case, the performance solution is to write APIs that don't rely on errors so much in the first place / can't encounter errors, not to micro-optimize your error handling.
3) No, there's lots of ways to return errors from a constructor. Exceptions are the only way to automatically roll back the memory allocation associated with the constructor. This is very error prone though, and nested constructor calls plus exceptions is a common source of leaks.
4) There's two ways to ignore exceptions which are both just as disastrous - catching with too wide of a base class, e.g. when someone adds catch(std::exception&){} to solve an out-of-range bug, but now causes out-of-memory to get ignored too...
The other is simply failing to catch some kind of exception. Exceptions are invisible, such that any function can throw any class as an error, and there's no way at all to know this from the API. So a bit of middleware forgets to catch an internal::widget in some rare case, so they fail to tell you to catch it, so you don't, and now it bubbles up to main and kills your game. That's not a silent error, sure, but it's a silent API, which is just as bad.
5) Yes, the verbose syntax makes error handling code obvious, but as above, it completely hides all details of which errors occur when/where, moving that knowledge into documentation instead of self-documenting code.
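As a sketch of the non-exception constructor alternatives mentioned in point 3 (the Texture class and its load function are invented names), a named factory returning std::optional (C++17) can report failure without a throwing constructor:

```cpp
#include <optional>
#include <string>
#include <utility>

// A named factory returning std::optional (C++17) reports construction
// failure without a throwing constructor. (Texture and load are invented.)
class Texture {
public:
    static std::optional<Texture> load(const std::string& path) {
        if (path.empty())            // stand-in for a real I/O failure
            return std::nullopt;
        return Texture(path);
    }
    const std::string& path() const { return path_; }

private:
    explicit Texture(std::string path) : path_(std::move(path)) {}
    std::string path_;
};
```

The constructor stays private, so the only way to get a Texture is through the factory, which forces callers to acknowledge the failure case.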

The right way to handle errors is one of those never ending debates older than Usenet itself...

[edit]

Why [go for the strong guarantee]? There is nothing wrong with the "basic guarantee" that says "nothing is leaked and I am in a valid, if unspecified, state".

As yourself and others have already stated, trying to provide the "strong" or "no-throw" guarantee is very hard, sometimes impossible, and usually has large penalties.

But there is little to no reason to go for it in a game.

The point of using exceptions over regular flow control is that some higher level algorithm further up the stack can resolve the error, but you can't. That high level code can then run the algorithm again (or abort) after the error is resolved.

If a resolvable error occurs halfway through gameplay logic, is resolved, and then the algorithm (the game rules) is applied again, it seems pretty important that the rules of the game stay consistent.
Doing double damage because an auto-save ran out of disk space should not be a feature of your error handling ideal.
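The retry-at-a-higher-level idea can be sketched like this (DiskFullError, auto_save, and save_with_retry are all invented names; the "disk fills up, space is freed, the retry succeeds" scenario is simulated with a flag):

```cpp
#include <stdexcept>

// Invented names throughout; the "disk fills up, space is freed, the retry
// succeeds" scenario is simulated with a flag.
struct DiskFullError : std::runtime_error {
    DiskFullError() : std::runtime_error("disk full") {}
};

// Low-level step: throws when it cannot complete.
void auto_save(bool disk_full) {
    if (disk_full)
        throw DiskFullError();
}

// Higher-level code resolves the error, then re-runs the whole operation,
// so the lower-level logic never continues from a half-done state.
int save_with_retry(int max_attempts) {
    bool disk_full = true;  // first attempt fails
    for (int attempt = 1; attempt <= max_attempts; ++attempt) {
        try {
            auto_save(disk_full);
            return attempt;          // number of attempts it took
        } catch (const DiskFullError&) {
            disk_full = false;       // "free some space", then retry from the top
        }
    }
    return -1;  // gave up
}
```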


The best argument I've heard against using exceptions is that they hurt your worst-case performance. If you have a rendering loop, exceptions thrown there will hurt your frame rate. In this case return-value error checking is better because the cost is comparable every frame.

 

I disagree that exception safety is hard. It is something to be aware of, but with good RAII single-responsibility containers it's not a huge problem. It is, however, something a programmer needs to learn when using exceptions in C++. I would be wary of people recommending against exceptions who don't know how to use them: they don't know what they're missing.

 

In the non-error case, exceptions are faster than return-type error handling. They also make your code shorter. You handle errors where you can do something about them; you never have to manually return an error code from a function just because a function inside it has an error condition; that's boilerplate you don't need to write with exceptions. Your code is arguably clearer as a result, because it plainly states the main path of execution.
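A minimal side-by-side sketch of that boilerplate difference (all names here are invented): the error-code version must manually check and propagate at every level, while the exception version states only the main path:

```cpp
#include <stdexcept>

// Error-code style: every intermediate caller must check and propagate.
enum class Err { Ok, Fail };

Err step_ec(bool ok) { return ok ? Err::Ok : Err::Fail; }

Err run_ec(bool ok) {
    Err e = step_ec(ok);
    if (e != Err::Ok)
        return e;        // manual propagation boilerplate at every level
    return Err::Ok;
}

// Exception style: intermediate code states only the main path.
void step_ex(bool ok) {
    if (!ok)
        throw std::runtime_error("step failed");
}

void run_ex(bool ok) {
    step_ex(ok);         // propagation is automatic
}

// Illustrative helper: did run_ex report failure?
bool run_ex_failed(bool ok) {
    try {
        run_ex(ok);
        return false;
    } catch (const std::exception&) {
        return true;
    }
}
```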

 

In summary, I would recommend enabling and using exceptions, except in cases where you care about worst-case performance.

Edited by King Mir



Items need to remain invariant in the case of exceptions: if the exception happened in the middle of a sort, repairing the object's condition would mean unsorting to the original state;

 

Does std::sort actually observe this? That's an enormous amount of overhead to be able to roll back an in-place sort (which I'm guessing most std::sort implementations are).


 


Items need to remain invariant in the case of exceptions: if the exception happened in the middle of a sort, repairing the object's condition would mean unsorting to the original state;

 

Does std::sort actually observe this? That's an enormous amount of overhead to be able to roll back an in-place sort (which I'm guessing most std::sort implementations are).

 

std::sort just guarantees that the container is in a valid state when an exception is thrown, but not that the partially sorted elements are in any particular order.
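A small sketch of that behaviour (the comparator and helper names are invented): if the comparator throws mid-sort, the vector is still a valid container of the same size, but no particular ordering, and strictly speaking not even the exact contents, is guaranteed:

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Comparator that throws after a fixed number of comparisons.
struct FlakyLess {
    int* budget;
    bool operator()(int a, int b) const {
        if (--*budget < 0) throw std::runtime_error("comparator failed");
        return a < b;
    }
};

// Sort that may be interrupted: the vector remains a valid container
// (same size, usable afterwards), but element order -- and, strictly,
// even the exact values -- are unspecified after a throw.
std::size_t interrupted_sort_size(std::vector<int> v, int budget) {
    try {
        std::sort(v.begin(), v.end(), FlakyLess{&budget});
    } catch (const std::runtime_error&) {
        // basic guarantee only: v is valid but partially sorted
    }
    return v.size();
}

// With enough budget the sort completes normally.
bool sorts_when_not_interrupted(std::vector<int> v) {
    int budget = 1000000;
    std::sort(v.begin(), v.end(), FlakyLess{&budget});
    return std::is_sorted(v.begin(), v.end());
}
```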


4) the no-throw guarantee. No matter what, your function can never be involved in exceptional stack unwinding. This means that all the functions that you call must also provide this same guarantee (or they'll infect your function). This one is hard to achieve in practice because there's no way to know if a 3rd party function may throw or not. Simply disabling exceptions at the compiler level magically bestows this guarantee on your entire code-base though :)

 

Including dynamically linked DLLs or system calls that were compiled with exceptions enabled?

 

What I mean is, even if you have exceptions disabled in your code, third party function calls can still throw exceptions and crash your program, right?

 

Take operator new, for example. People sometimes mistakenly think that disabling exceptions means that 'new' returns null on failure. In reality, even if you disable exceptions, operator new still throws exceptions on failure. Only if you explicitly use new (std::nothrow) does it return null.

 

And if you call third party code like an OS function or a function from a DLL you're linking to, and that function throws, the exception won't be caught by your application; it'll be caught by the OS that launched your application, which will pop up a generic dialog box saying something like "YourGame has stopped working".

 

So basically, if you disable exceptions, unless you only call third party (and OS) functions that are also Level 4, isn't your 'entire code base' only providing Level 1 guarantee?

 

</genuine questions>


Including dynamically linked DLLs or system calls that were compiled with exceptions enabled?
What I mean is, even if you have exceptions disabled in your code, third party function calls can still throw exceptions and crash your program, right?

DLLs are a broken feature in C++, because C++ doesn't define a standard ABI, so there's no standard way to create a DLL that other people can use... which leads to the stupid situation where you need people to give you DLLs that have been compiled with your specific compiler, using your specific compiler settings.
The real solution there is to get the source code from your 3rd parties instead of just the compiled DLLs, in which case it becomes "your code" too ;)
OR
You use C DLLs, which means you shouldn't be throwing C++ exceptions across that boundary.

 

Same for system calls - they're usually a C API, which means it's nonsense to try to make a C++ exception cross that language boundary. I don't know of any OS APIs that are C++.

Depending on your runtime, you may be making calls into .NET/etc. under the hood, which would throw its own exceptions -- but these are not C++ exceptions; you would catch them with C++/CLI or C++/CX or whatever language your runtime is based on.
 
But yes, in the situation where you've disabled exceptions, it's more of an assertion that all of your code follows the no-throw guarantee. If that assertion fails, bad things will happen. 

Take operator new, for example. People sometimes mistakenly think that disabling exceptions means that 'new' returns null on failure. In reality, even if you disable exceptions, operator new still throws exceptions on failure. Only if you explicitly use new (std::nothrow) does it return null.

In the kind of environment where you're passing the compiler flags to disable exceptions, you're also usually replacing the standard memory allocator with your own implementation :)
The new/delete keywords and malloc/free functions have been banned on almost every project I've worked on professionally; instead we're told to use a project/game/engine-specific macro, which can easily redirect all allocations for different builds.
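A rough sketch of that kind of redirection macro (all names here are invented; a real engine would add tracking, pools, and per-system tags behind the same entry point):

```cpp
#include <cstdlib>

// Hypothetical engine-style allocation macro: one redirection point for all
// allocations, so a build can swap in tracking, pools, or a non-throwing
// failure policy. All names here are invented.
inline void* engine_alloc(std::size_t size, const char* tag) {
    (void)tag;                 // a real engine would record this for tracking
    return std::malloc(size);  // or a custom allocator; returns null on failure
}

inline void engine_free(void* p) { std::free(p); }

#define ENGINE_ALLOC(size, tag) engine_alloc((size), (tag))
#define ENGINE_FREE(p)          engine_free((p))

// Illustrative round trip through the macros.
inline bool engine_alloc_roundtrip() {
    void* p = ENGINE_ALLOC(16, "roundtrip");
    bool ok = (p != nullptr);
    ENGINE_FREE(p);
    return ok;
}
```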

Edited by Hodgman


If exceptions are enabled, you should aim for level 3/4 guarantees.

Obviously true if it's cheap to do so, but if an exception is thrown, something is wrong, so rolling back the state may not be worthwhile. This seems too broad a generalisation.

Also, the same rollback is needed regardless of your error handling mechanism. Exceptions don't change where you handle the error; they just take away the boilerplate of propagating it up.

C++ is much easier to read/write/maintain if we all just pretend that exceptions don't exist.

I disagree. Explicit value checking is harder to read because the main path of execution is more opaque, and because code is just plain longer.

 

You have to handle the error case whether you have exceptions or not; the only thing exceptions make difficult is that it's not explicit which code can fail. But the same code can fail with any approach.


 

4) the no-throw guarantee. No matter what, your function can never be involved in exceptional stack unwinding. This means that all the functions that you call must also provide this same guarantee (or they'll infect your function). This one is hard to achieve in practice because there's no way to know if a 3rd party function may throw or not. Simply disabling exceptions at the compiler level magically bestows this guarantee on your entire code-base though :)

 

Including dynamically linked DLLs or system calls that were compiled with exceptions enabled?

 

What I mean is, even if you have exceptions disabled in your code, third party function calls can still throw exceptions and crash your program, right?

 

Take operator new, for example. People sometimes mistakenly think that disabling exceptions means that 'new' returns null on failure. In reality, even if you disable exceptions, operator new still throws exceptions on failure. Only if you explicitly use new (std::nothrow) does it return null.

 

And if you call third party code like an OS function or a function from a DLL you're linking to, and that function throws, the exception won't be caught by your application; it'll be caught by the OS that launched your application, which will pop up a generic dialog box saying something like "YourGame has stopped working".

 

So basically, if you disable exceptions, unless you only call third party (and OS) functions that are also Level 4, isn't your 'entire code base' only providing Level 1 guarantee?

 

</genuine questions>

 

Hopefully the libraries you consider will document if they use exceptions. But turning off exceptions does considerably restrict your library choices.


Items need to remain invariant in the case of exceptions: if the exception happened in the middle of a sort, repairing the object's condition would mean unsorting to the original state;

Does std::sort actually observe this? That's an enormous amount of overhead to be able to rollback an in-place sort (which I'm guessing most std::sort implementations are).
std::sort just guarantees that the container is in a valid state when an exception is thrown, but not that the partially sorted elements are in any particular order.
The official standards groups have had much argument over most of the standard's guarantees when it comes to exceptions. As a whole it does not even provide the basic guarantee. Some operations meet some guarantees, and a few conditionally meet some guarantees, and a bunch of it is implementation specific. So generally no, it does not meet any of the exception guarantees as a whole.


Some items have a nothrow specification by definition, so they have a guarantee at that level. They won't throw exceptions, and if something would happen then it will terminate the program.

However, nothrow has a cost in that those functions will not be fully inlined by many compilers. Often the nothrow specification means an invisible wrapper that says: try { theFunction(); } catch (...) { terminate(); } Or said more simply, they implicitly have a try block around them, and if anything throws your program suffers an insta-death. It is more difficult for the compiler to inline such a function and make the wrapper disappear completely... unless internally it boils down to intrinsic operations the compiler knows cannot possibly throw, like pure math operations. When an exception does happen inside a nothrow-specified function, it is not friendly, since terminate() typically just makes your program vanish. At least with Java's similar guarantee the VM is more friendly about reporting to the user.

But apart from nothrow specifications, it is very rare to see any exception safety guarantees in the C++ language. In a few places there are notes that something should not throw or will pass along exceptions, but the strong and weak guarantees are not specified, and often don't exist.


In fact, as discussed frequently in C++ language groups, if you've got two members and both have the potential to throw on any operation, then even tasks like simple assignment become a nightmare with the strong guarantee; if you assign to the first and then the assignment to the second throws, you need to unwind the initial assignment. Since composition is so commonplace and the standard libraries rely on templates for so much with composed objects, it would be insanely difficult to attempt to make the strong guarantee.
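For what it's worth, the classic workaround for the two-member assignment problem is copy-and-swap: do all the throwing work on a temporary, then commit with non-throwing swaps. A sketch (Profile is an invented example type):

```cpp
#include <string>
#include <utility>
#include <vector>

// Two members whose assignments can each throw. Copy-and-swap provides the
// strong guarantee: all throwing work happens on a temporary, and the commit
// is a pair of non-throwing swaps. (Profile is an invented example type.)
class Profile {
public:
    Profile(std::string name, std::vector<int> scores)
        : name_(std::move(name)), scores_(std::move(scores)) {}
    Profile(const Profile&) = default;

    Profile& operator=(const Profile& other) {
        Profile tmp(other);   // copies may throw; *this is untouched
        swap(tmp);            // commit: cannot throw
        return *this;
    }

    void swap(Profile& other) noexcept {
        name_.swap(other.name_);
        scores_.swap(other.scores_);
    }

    const std::string& name() const { return name_; }
    const std::vector<int>& scores() const { return scores_; }

private:
    std::string name_;
    std::vector<int> scores_;
};

// Illustrative check: assignment copies both members.
inline bool assignment_copies() {
    Profile a("a", {1});
    Profile b("b", {2, 3});
    a = b;
    return a.name() == "b" && a.scores().size() == 2;
}
```

The price is an extra full copy on every assignment, which is exactly the kind of overhead the strong guarantee tends to impose.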

For the basic guarantee (the guarantee that invariants are unchanged, items will be valid but potentially modified, and nothing leaked), again the template nature of much of the standard libraries makes that difficult. Many containers and templated items are able to make the basic guarantee on a conditional basis: as long as the thing inside the template does not throw in particular ways, and as long as those ways also implement the basic guarantee and will themselves remain valid, then the operation will be neutral to exceptions and therefore follow the basic guarantee.

Ugh - this quickly devolved into the same religious war. Probably my last post because these never go anywhere.

1) No, this depends more on the target architecture than the compiler. A modern compiler for x86-64 can do cheap exceptions. For other architectures, not so much.


Depends on both the architecture and compiler. Modern compilers are far better at this than older ones. And 90% of the code the people on this site are writing is for x86. As such, I am assuming x86 target. (Stated because I don't know performance characteristics of, say, mobile platforms. But then you're potentially writing Swift/Obj-C, Java, or .NET code)

You should always know the performance characteristics of your target platform (e.g. that branch mispredictions destroy performance on 360/PS3). From a brief bit of research to make this post, it seems ARM can't quite do zero-cost exception handling, as additional information needs to be logged on function call and return.

2) I'd like to see that benchmark. Exceptions are just syntactic sugar to automatically insert those checks for you (removing the human error of forgetting to propagate an error). They also insert these checks in places where they aren't needed (the compiler doesn't know if an external function call might throw or not), harming performance. In either case, the performance solution is to write APIs that don't rely on errors so much in the first place / can't encounter errors, not to micro-optimize your error handling.


Incorrect. Exception handling does not require any checks in the normal (non-exception) case, because the very act of throwing an exception takes a different path than normal, through a stack unwinder that uses additional information provided by the C++ compiler about what needs to be destructed in what order and which catch block will match the throw, and is able to jump to it directly (or usually through fewer hops than manual returns would take).

This is why Stroustrup includes his "3% size increase over zero error handling" number - for the additional information needed to perform stack unwinding.

Long article on GCC exception handling implementation

The way gcc (and many other compilers) implement this ABI on x86 is by using metadata (the .gcc_except_table and the CFI). Although it's rather difficult to parse, and it might take a long time to parse this at runtime when an exception is thrown, it has a great upside: if no exceptions are thrown then there's no setup cost to be paid. This is called "zero-cost exception handling" because in a normal execution, where no exceptions are thrown, no penalty is paid. The performance is exactly the same we would have as if we had specified nothrow. That's right, leaving code locality & caching issues aside, using exceptions or not has no performance penalty unless an exception is actually thrown. This is a great advantage, and it goes in line with the C++ philosophy of having no cost for unused features.


3) No, there's lots of ways to return errors from a constructor. Exceptions are the only way to automatically roll back the memory allocation associated with the constructor. This is very error prone though, and nested constructor calls plus exceptions is a common source of leaks.


Proper RAII use will never leak memory, no matter how deep your constructor calls and how many exceptions you throw. See the before-linked Stroustrup article on why.

And I wouldn't call a memory leak an acceptable method of error handling in a constructor.

4) There's two ways to ignore exceptions which are both just as disastrous - catching with too wide of a base class, e.g. when someone adds catch(std::exception&){} to solve an out-of-range bug, but now causes out-of-memory to get ignored too...
The other is simply failing to catch some kind of exception. Exceptions are invisible, such that any function can throw any class as an error, and there's no way at all to know this from the API. So a bit of middleware forgets to catch an internal::widget in some rare case, so they fail to tell you to catch it, so you don't, and now it bubbles up to main and kills your game. That's not a silent error, sure, but it's a silent API, which is just as bad.


catch(...) is visible in the code. I can see that you've ignored exceptions. Return values can be completely ignored invisibly (and by default) because C++ doesn't care if you've ignored a return value. Passing a return value by reference in the parameter list is more visible, however.

If I forget to catch an exception then I immediately crash at the source of the throw. I'd call that pretty visible and easy to debug. If I ignore a return value or error parameter nothing happens (immediately) and I may crash later far away.

5) Yes, the verbose syntax makes error handling code obvious, but as above, it completely hides all details of which errors occur when/where, moving that knowledge into documentation instead of self-documenting code.


I don't care where exceptions are thrown. If I can handle a particular exception, then I handle it. If I need to know where an exception is thrown I can easily see that in a debugger without tracing some kind of convoluted return value propagation code which leaves no trace.

The point of using exceptions over regular flow control is that some higher level algorithm further up the stack can resolve the error, but you can't. That high level code can then run the algorithm again (or abort) after the error is resolved.

If a resolvable error occurs halfway through gameplay logic, is resolved, and then the algorithm (the game rules) is applied again, it seems pretty important that the rules of the game stay consistent.
Doing double damage because an auto-save ran out of disk space should not be a feature of your error handling ideal.


That's entirely on you in how you've set up your code. This is no different from any other form of error handling.
 

Exceptions are the only way to return errors from constructors.


Which should never actually be necessary. Also, constructors can take out-parameters; the constructed object must simply be left in a valid-for-destruction state upon error and you're fine, which is generally trivial to accomplish.


For better or worse (I assume you think worse considering past arguments with you) exceptions are how the C++ runtime knows to abort allocations and clean up from failed construction.

Sure, you can set a fail bit or a parameter return value or something, but then it's on you to remember to clean up.

There's also the use of monadic error states in objects. You see an example of this in C++'s standard I/O library. It's also the preferred mechanism for error states in most graphics or game libraries.


Yup, that's certainly an option. And one that is easy to ignore and forget.
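For reference, the standard streams' error-state style looks like this in practice (parse_int is an invented helper): a failed extraction sets the fail bit, which the caller must check explicitly:

```cpp
#include <sstream>
#include <string>

// The standard streams keep a sticky error state: a failed extraction sets
// failbit, and the caller checks it explicitly. (parse_int is an invented
// helper.)
int parse_int(const std::string& text, int fallback) {
    std::istringstream in(text);
    int value = 0;
    in >> value;
    if (in.fail())      // extraction failed; the stream remembers the error
        return fallback;
    return value;
}
```

As noted above, nothing forces the caller to check fail(), which is exactly the "easy to ignore and forget" property.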

Exceptions make error handling obvious and easier to separate from function logic.


Very arguable. They only make it clear in the place that you handle the error. They have little improvement at the place errors are raised, and unlike _any_ other sane form of error handling, make it impossible to tell that an error is happening in intermediary functions in the call chain: exceptions are automatically propagated upward with no clear indication of where or when an exception might happen in a function's body. That's their biggest evil. Modern replacements require explicit per-statement or per-expression TRY markers to indicate that an expression which can fail is allowed to be invoked while still propagating the exception upward.


I don't care where an exception is thrown. If I've coded things correctly (basic guarantee) then I know whatever function I tried to call (or one it called) failed to do what I asked it to do. Which is the exact same amount of information I get from error codes or monadic error objects, just automatically.

No, we understand this perfectly. This is something I've done a non-trivial amount of research on lately specifically as I'm working on getting better containers pushed up into the C++ ISO standard, as the current ones (which all provide the strong guarantee) are very sub-optimal simply because they must be to provide the strong guarantee.

The basic exception guarantee is not good enough for most interesting container uses. If the result of the exception turns into "the whole data structure must be wiped clean before propagation" then the exception often equates to data loss. It is no longer safe to overwrite user files or continue a process with meaningful side effects in the face of such lost data. The only safe choice at that point is to abort the application.

As far as error safety goes, yes, true, but most errors simply shouldn't propagate. If you're out of memory, just abort. Only a very, very tiny handful of applications have any reason to be tolerant of OOM errors, and those applications are typically written in C or an exception-free subset of C++ anyway.

The problem with exceptions is that they can be thrown without your knowledge. In C++, you can use noexcept to test for things, but you can't require noexcept in as many cases as you might like. C++'s std::string and most other containers are not guaranteed to have noexcept move semantics, for instance, because the standards gives implementors the freedom to use brain-damaged algorithms to implement the containers that always allocate on move or even default construction. (Which incidentally is one of the reasons to never use the STL and to reimplement most of it from scratch in a game; the standard intentionally has very weak quality requirements.)


Either you are handling the error (in which case you're writing a ton of if/else statements and may forget something) or you aren't handling the error. In both cases with exceptions the compiler gives it to you automatically.

I'm not a library writer, so I won't answer the "must provide strong guarantee" and "hampers performance" part of the STL, I'll simply "appeal to authority" and say the STL guys aren't stupid, and if you think you know better (and you probably do for your exact application, because the STL has to be generic) then C++ gives you all the facilities to make your own libraries with your own performance characteristics and tradeoffs.

Including dynamically linked DLLs or system calls that were compiled with exceptions enabled?
 
What I mean is, even if you have exceptions disabled in your code, third party function calls can still throw exceptions and crash your program, right?


C++ has no ABI, so DLLs and system calls can only expose C++ features if compiled with the exact same compiler. Otherwise you expose an interface that has a defined ABI (for example, C and COM) and catch C++ exceptions at the boundary layer and provide some other defined mechanism for exposing errors (commonly, return error codes).

There have been efforts to provide a C++ ABI, but they've been trying for years.
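The boundary pattern described above can be sketched minimally (widget_process and its error codes are invented): a C-compatible entry point that never lets a C++ exception escape, translating failures to error codes instead:

```cpp
#include <exception>
#include <stdexcept>

// A C-compatible entry point that never lets a C++ exception cross the
// boundary, translating failures to error codes instead.
// (widget_process and its codes are invented.)
extern "C" int widget_process(int input) {
    try {
        if (input < 0)
            throw std::invalid_argument("negative input");
        return 0;   // success
    } catch (const std::exception&) {
        return 1;   // known failure, reported as a C error code
    } catch (...) {
        return 2;   // unknown failure; nothing escapes
    }
}
```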

This also isn't an "either, or" situation. Exceptions or return-codes is a false dichotomy. There's so many more ways to think about program flow than those two options.
e.g. often in game middleware code I see a kind of pre-validate-and-assume-the-preconditions error handling mechanism.


And yet they all boil down to "something wrong happened, I can't handle it, pass it to my caller". Exceptions provide one way to do this in a way that the compiler can automate a lot of the gruntwork.

The "false dichotomy" I believe comes from the hardware - which generally only has interrupts (i.e. a form of exceptions) and memory/registers (i.e. return values).

Compilers and languages can do whatever they want to hide and wrap them, but that is what it comes down to.

IMHO, designing your code so that the least amount of error handling is required is the best option, as then this debate gets less and less relevant -- the best way to handle errors doesn't matter if you don't have any errors :)


Very true. Unfortunately, we have to go outside our code base at some point, which is where errors will creep in. If you can compartmentalize those very close to the source, then it makes everyone's lives easier.

However, nothrow has a cost in that those functions will not be fully inlined by many compilers. Often the nothrow specification means an invisible wrapper that says: try { theFunction(); } catch (...) { terminate(); } Or said more simply, they implicitly have a try block around them, and if anything throws your program suffers an insta-death.


FYI - it's noexcept, not nothrow. And the compiler can typically generate something more optimized than a literal try/catch, because it knows about the guarantee.
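For reference, noexcept is also queryable, so code can assert or branch on the guarantee at compile time (may_throw and no_throw are invented declarations):

```cpp
#include <type_traits>

// noexcept is both a promise and a queryable property: the noexcept operator
// lets code (and the compiler) test the guarantee at compile time.
// (may_throw and no_throw are invented declarations.)
void may_throw();
void no_throw() noexcept;

static_assert(!noexcept(may_throw()), "no guarantee without noexcept");
static_assert(noexcept(no_throw()), "noexcept promises not to throw");

// Containers query the same machinery, e.g. for move-vs-copy decisions:
static_assert(std::is_nothrow_move_constructible<int>::value,
              "int moves cannot throw");
```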


I don't have much to add other than I love exceptions. I don't find it difficult to use RAII (it's pretty much the C++ way, exceptions or not). I have a 'catch all' exception handler (along with exception-specific handlers where needed) that wraps pretty much any project I'm working on and gives me the file name, line number, stack info, etc., so I know of anything that might 'slip through the cracks', making it very easy to track. I personally find it much harder for errors to propagate silently through my code, because you can't forget to check an exception; failing to catch one produces a beautiful error message with a stack dump and everything. Flow of control I personally find much easier to follow with exceptions; having to repeatedly check each function call's return value for an error sentinel is tedious and error prone IMHO. I find exceptions to be much like 'const': you have to get used to them, and tacking them on at the end is nigh impossible; but when you start a project from scratch with exceptions in mind I find them relatively effortless to use, safer, and faster.

 

In fact, as discussed frequently in c++ language groups, if you've got two members and both have the potential to throw on any operation, then even tasks like simple assignment become a nightmare with the strong guarantee; if you assign to the first then the assignment to the second potentially throws, you need to unwind the initial assignment. Since composition is so commonplace and the standard libraries rely on templates for so much with composed objects, it would be insanely difficult to attempt to make the strong guarantee.


For the basic guarantee, (the guarantee that invariants are unchanged, items will be valid but potentially modified, and nothing leaked) again the template nature of much of the standard libraries makes that difficult. Many containers and templated items are able to make a the basic guarantee on a conditional basis: as long the thing inside the template does not throw in particular ways, and as long as those ways also implement the basic guarantee and will themselves remain valid, then the operation will be neutral to exceptions and therefore follow the basic guarantee.

I don't see how exceptions have any real effect on that. If you have two members with the potential to produce errors on assignment, whether the errors are indicated through an exception or through some other mechanism, the problem of a strong guarantee doesn't change. It's the same problem whether it's through a throw/try/catch, a sentinel return value, a global/thread_local flag, or whatever other mechanism you plan on using. At the very least, with an exception I don't need to know the exception type; all I need to know is that an exception can be thrown. With every other error-checking method, not only do I have to know that an error could occur, but I also have to delve into the documentation to learn how to check for it. So while providing a strong exception guarantee is hard enough with exceptions, it's far harder without them, and in many places near impossible when you bring templates into the picture.
