When to use SEH vs checking return value?

2 comments, last by frob 12 years, 3 months ago
The book I'm reading by Mr. Richter from Microsoft states the following about SEH:

if a situation is the norm, checking for the situation explicitly is much more efficient than relying on SEH capabilities.


This might be kinda silly, but could anyone give me an example of a situation that is the norm? Thanks ;D
The more frequently something might happen, the less likely you are to want to use an exception to handle it.

For example, if you are writing a video game and want to keep the player from leaving the boundaries of the screen, using a simple set of if-checks to clip them is much more efficient than throwing an exception if the character exits the "legal" area.
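For instance, a minimal sketch (the screen bounds and Player type here are hypothetical):

[code]
// Hypothetical screen bounds and player type, for illustration only.
struct Player { float x, y; };

const float kMinX = 0.0f, kMaxX = 1280.0f;
const float kMinY = 0.0f, kMaxY = 720.0f;

// Hitting the edge of the screen is the norm: the player presses
// against it constantly, so a handful of if-checks per frame is the
// right tool. Throwing an exception here instead would mean paying
// exception-machinery costs many times per second.
void ClipToScreen(Player& p)
{
    if (p.x < kMinX) p.x = kMinX;
    if (p.x > kMaxX) p.x = kMaxX;
    if (p.y < kMinY) p.y = kMinY;
    if (p.y > kMaxY) p.y = kMaxY;
}
[/code]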


Also, note that SEH is a Microsoft-specific invention that interacts in weird and sometimes undefined ways with other language exception mechanisms. Assuming you are programming in C++, if you want to use exceptions, use the language exception mechanism instead of SEH unless you absolutely know what you are doing.
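To make the distinction concrete, here is an MSVC-only sketch contrasting the two mechanisms. The __try/__except keywords, EXCEPTION_EXECUTE_HANDLER, and GetExceptionCode are Microsoft extensions from the Windows toolchain; note that MSVC won't even let you mix both forms in the same function:

[code]
#include <windows.h>
#include <stdexcept>
#include <cstdio>

// SEH catches hardware/OS-level faults such as access violations.
void SehStyle()
{
    __try {
        int* volatile p = 0;
        *p = 42;    // access violation: an OS-level fault
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        std::printf("caught SEH exception 0x%08lX\n", GetExceptionCode());
    }
}

// C++ exceptions catch objects thrown with `throw`.
void CppStyle()
{
    try {
        throw std::runtime_error("a C++ exception object");
    }
    catch (const std::exception& e) {
        std::printf("caught C++ exception: %s\n", e.what());
    }
}
[/code]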

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

SEH overhead is about the same order of magnitude as C++ exception overhead on MSVC, which makes sense because MSVC's C++ exceptions are implemented in terms of SEH. Error conditions need to occur less often than roughly one in ten thousand times for SEH to beat simply checking a return value. The precise crossover depends on how you measure, how deeply errors propagate, etc., but one in ten thousand feels about right for most purposes.
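As a rough illustration of that crossover, a micro-benchmark sketch (the work functions are hypothetical stand-ins, and an optimizer may fold the trivial return-value loop away, so real measurements need considerably more care):

[code]
#include <chrono>
#include <cstdio>
#include <stdexcept>

// Both functions "fail" once every failEvery calls, reporting it
// in two different ways.
static bool TryWork(int i, int failEvery)
{
    return (i % failEvery) != 0;              // false = failure
}

static void WorkOrThrow(int i, int failEvery)
{
    if ((i % failEvery) == 0)
        throw std::runtime_error("failure");  // failure via exception
}

int main()
{
    const int kIterations = 1000000;
    const int kFailEvery  = 10000;   // vary this to find the crossover
    using Clock = std::chrono::steady_clock;

    auto t0 = Clock::now();
    int failures = 0;
    for (int i = 1; i <= kIterations; ++i)
        if (!TryWork(i, kFailEvery))
            ++failures;
    auto t1 = Clock::now();

    int caught = 0;
    for (int i = 1; i <= kIterations; ++i) {
        try { WorkOrThrow(i, kFailEvery); }
        catch (const std::exception&) { ++caught; }
    }
    auto t2 = Clock::now();

    auto ms = [](Clock::duration d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::printf("return value: %lld ms (%d failures)\n", (long long)ms(t1 - t0), failures);
    std::printf("exceptions:   %lld ms (%d caught)\n", (long long)ms(t2 - t1), caught);
    return 0;
}
[/code]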
C++ is very different from Java or C# when it comes to exceptions.

In C++, the general rule is: if you think it can happen, use a return value.
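For example, a minimal sketch (the function and file name are hypothetical):

[code]
#include <cstdio>

// A missing file is routine, so report it through the return value
// and let the caller decide how to recover.
bool LoadConfig(const char* path)
{
    FILE* f = std::fopen(path, "rb");
    if (!f)
        return false;
    // ... parse the file ...
    std::fclose(f);
    return true;
}

int main()
{
    if (!LoadConfig("settings.cfg"))   // hypothetical file name
        std::puts("no config found, using defaults");
    return 0;
}
[/code]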



Consider that many game companies routinely disable C++ exception handling in all released games. They don't do it just because they're afraid exception handling is slow.

[quote]
This might be kinda silly, but could anyone give me an example of a situation that is the norm? Thanks ;D
[/quote]
That depends on your application. Let us say you're writing a console application that reads numbers, e.g. one that outputs the sum of those numbers. If this application is designed to be used interactively by a user, the chances of it being passed a non-numeric value are high. However, if this program is designed to be invoked by another program (one designed to output numeric values), then the chances of it being passed a non-numeric value are much lower.

In either case the situation can happen, and must be accounted for. However, the "normal" usage of the application differs. In the former, a human "user" is present, ready to make (but also to correct) mistakes. In the latter, the "user" is typically another program, and normal usage will consist of a stream of correct input. A non-numeric input there most likely indicates that something has gone terribly wrong.

This is an overall design decision one must make. In the former case, I'd probably design the program to prompt the user to correct their mistake. For the latter, I might terminate the program with a given exit code, which can be detected by the calling program.
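A sketch of both designs in one toy program (the --batch flag and the exit code of 2 are hypothetical choices):

[code]
#include <cstdlib>
#include <iostream>
#include <string>

int main(int argc, char** argv)
{
    // Hypothetical flag: "--batch" means another program invoked us.
    const bool interactive = !(argc > 1 && std::string(argv[1]) == "--batch");

    long long sum = 0;
    std::string token;
    while (std::cin >> token) {
        char* end = 0;
        long value = std::strtol(token.c_str(), &end, 10);
        if (end == token.c_str() || *end != '\0') {
            if (interactive) {
                // A human is present: bad input is the norm, so re-prompt.
                std::cerr << "'" << token << "' is not a number, try again\n";
                continue;
            }
            // Another program feeds us: bad input means something upstream
            // has gone terribly wrong, so exit with a detectable code.
            return 2;
        }
        sum += value;
    }
    std::cout << sum << '\n';
    return 0;
}
[/code]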

There is no one answer to this problem. However, your gut instinct should be to assume that errors do happen, and happen frequently enough to be worth handling gracefully, unless you have special knowledge that overrides this (like the second case in the above example - special knowledge of the input creator can result in a different design).

This topic is closed to new replies.
