Questions about Error-Handling

Started by
41 comments, last by rip-off 7 years ago

He has not argued in good faith which disappoints me because I generally hold his opinion in high regard.

In your oh so very humble opinion. In mine, and I'm sure in quite a few other people's minds, he has.

I'm quite done with this thread now. All the mods, most of whom work in the game industry, have weighed in against you. You're mostly playing word games now it seems, personally deciding for us all what definition of terms we should use. If you've been to university you know referencing Wikipedia is an automatic failure.

Error- and bug-free software can and does exist without C++ exceptions. Most airplane software is written in C, and now C++, under very strict guidelines; from my understanding, Airbus and Boeing now use C++ in their software development. When I read the F-22 and F-35 coding guidelines, they banned the use of exceptions and have zero tolerance for bugs or error states, for what I hope are obvious reasons.

@Oberon_Command - Thanks for that last post :)

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin


But I guess it's a bit of a question of philosophy, too. If you're 100% positive that you test all conditions that may possibly fire an assertion, it's of course fine... you get the same net effect without runtime cost. I'm not that confident in myself; I prefer to still have a minimum amount of runtime checks.
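A minimal C++ sketch of that trade-off (the function names here are invented for illustration): `assert()` is compiled out entirely when `NDEBUG` is defined, while an explicit check survives into release builds.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Debug-only check: compiled out when NDEBUG is defined, so it costs
// nothing at runtime in release builds -- but only protects you if your
// tests actually hit every bad case.
int at_debug_checked(const std::vector<int>& v, std::size_t i) {
    assert(i < v.size() && "index out of range");
    return v[i];
}

// Retained runtime check: a small, permanent cost that still fires in
// release builds, for those of us who are not 100% confident.
int at_always_checked(const std::vector<int>& v, std::size_t i) {
    if (i >= v.size()) {
        throw std::out_of_range("index out of range");
    }
    return v[i];
}
```
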

It's a lot of philosophy, actually.

Much of that originates from the kind and scale of programs that you write. For small to medium programs, written by a small number of people, exceptions work. Everybody knows what exceptions to expect at every point in the program, and can decide whether or not to catch each of them in the function they are writing.

One thing you cannot see in the code, however, is whether a developer actually considered handling exception X and concluded he didn't have to, or whether he never even considered that exception X could happen. The absence of a catch block doesn't distinguish between those two cases.
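A small sketch of that ambiguity (the helper and both call sites are invented for illustration): the two call sites read identically, so the code alone cannot tell you whether the author weighed the possible exceptions or overlooked them.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical helper that may throw on bad input.
int parse_port(const std::string& s) {
    int p = std::stoi(s);  // may throw std::invalid_argument / std::out_of_range
    if (p < 0 || p > 65535) {
        throw std::out_of_range("port out of range");
    }
    return p;
}

// Call site A: the author considered the exceptions and deliberately
// lets them propagate to a top-level handler.
int configure_a(const std::string& s) {
    return parse_port(s);
}

// Call site B: the author never thought about exceptions at all.
// It reads exactly the same -- the omission is invisible in the code.
int configure_b(const std::string& s) {
    return parse_port(s);
}
```
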

This is where exceptions break down in bigger projects, with more people, and people joining and leaving the project (with 100 employees, people do get hired and fired while the project is developed). Nobody has a full overview of all the code; nobody knows precisely what exceptions to expect from the called sub-routines, since that is typically not documented. If you want to know for some routine P, you have to follow all calls from P recursively down to the OS calls, and then see how every function that P directly or indirectly calls handles each kind of exception it might get from its own child calls. In the worst case, this means examining all code in the project. With a large program, that is not feasible due to time and resource constraints. Even if you had enough time, it's all manual work, which means errors will be made, so your final conclusion about the set of exceptions that may happen at some point in the program is likely not even correct.

So, for big programs and large projects, it's pretty much not feasible to know what exceptions to expect, so essentially, you have no idea what exceptions you should actually handle in the code. How to get out of this? This is where the "exceptions are bad" idea comes from. Instead of having exceptions that may or may not happen in some routine P, we return the error result in the return-value, and at the next higher level decode the cases, and handle the normal and all error cases.

It costs a little CPU time, but the code explicitly tells us, at every function, what error cases exist. In the code, I can see what the programmer did for each case. Code where a developer forgot about an error case looks different from code that deliberately forwards it to the higher level. That makes it simpler to find in code reviews that some error was never considered.
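A sketch of that style, with invented names: the error travels in the return value, so every caller must visibly decide what to do with each case, and a reviewer can check the handling locally.

```cpp
#include <string>

enum class ParseError { None, Empty, NotANumber };

struct ParseResult {
    int value;
    ParseError error;
};

// Every possible outcome is visible in the return type;
// nothing propagates invisibly past this function.
ParseResult parse_count(const std::string& s) {
    if (s.empty()) return {0, ParseError::Empty};
    int v = 0;
    for (char c : s) {
        if (c < '0' || c > '9') return {0, ParseError::NotANumber};
        v = v * 10 + (c - '0');
    }
    return {v, ParseError::None};
}

// The caller's decision for each case is explicit and locally reviewable:
// a forgotten case would show up as a missing switch label.
int count_or_default(const std::string& s, int fallback) {
    ParseResult r = parse_count(s);
    switch (r.error) {
        case ParseError::None:       return r.value;
        case ParseError::Empty:      return fallback;  // considered: use default
        case ParseError::NotANumber: return fallback;  // considered: use default
    }
    return fallback;
}
```
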

The bigger advantage, however, is that all over the program, for every function P, I can see what errors it currently returns. I can look at the child function calls and see what they can return. I don't need to recursively dig through half the code base to collect that information. The normal case and all error cases can be found locally. Reviewing becomes much simpler, and thus gives better results, leading to higher quality code.

It may cost a millisecond of CPU time, but for a large program it saves you the cost of several developers you no longer need, and gives higher quality code.

I think both sides are arguing from a position of good faith. It is a matter of worldview and which definition of the words you subscribe to. Indeed, I feel I've sat on different sides of that fence at different times in my career.

IMO, assertions are a special case of error handling. I like the distinction in Joe Duffy's blog between unexpected program states and legitimate external conditions in a hostile world. Personally, I think this is a useful distinction to make, and it is worth considering having different strategies for these situations. One factor is the economic impracticality of eradicating the former case before software is released, balanced against the difficulty of specifying what should happen in such cases.
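One way that distinction might look in C++ (both functions are invented examples): assert for states the program itself guarantees, explicit handling for conditions the hostile world controls.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// External condition: the path comes from the hostile world, so failure
// is a legitimate, handled outcome -- not a bug in our program.
bool file_exists(const std::string& path) {
    if (std::FILE* f = std::fopen(path.c_str(), "rb")) {
        std::fclose(f);
        return true;
    }
    return false;  // expected outcome; the caller decides what to do
}

// Internal invariant: by contract, the caller passes a gain already
// normalized to [0, 1]. A violation is an unexpected program state,
// i.e. a bug on our side, so an assertion is appropriate.
double apply_volume(double sample, double gain01) {
    assert(gain01 >= 0.0 && gain01 <= 1.0 && "caller broke the invariant");
    return sample * gain01;
}
```
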

The essence of these worldviews is a fault line running through the industry: contrast C, with its undefined behaviour, against Java, which tries to give consistent and safe behaviour, e.g. for out-of-bounds array accesses.

Interesting as some of the discussion has been, I'd like to bring it back to Angelic Ice's question now, which is "what would you do?" In particular, for these "hostile world" cases - modulo cosmic rays flipping bits, I suppose.

This topic is closed to new replies.
