First: the more you can do to make errors impossible to introduce in the first place, the better. In C++, this means things like passing by reference instead of by pointer when null is not valid, using the new strongly-typed enums, or making lightweight "stub" types that don't support implicit conversion, so you don't mix up parameters that look similar, like position and size. The list generally goes on and on.
Agreed. I try to do this as much as I can in my personal projects. If I can design code in a manner such that it does not require obscene amounts of error checking, then that's a good thing.
However, in general, you should never ignore an error. Errors are there for a reason: to tell you you've done something wrong.
Also agreed. Unless by "ignoring an error" you mean only using an assert and not an if check, in which case, that's the part I'm not sure about yet, heh.
If it's a "public-facing" function, then I always use pre-condition checks, or code so that pre-condition checks are unnecessary. If it is an internal function that I control all calls in, then I'm more comfortable with using asserts to catch contract violations.
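That split might look something like this hypothetical sketch (the `Buffer` class and its methods are invented for illustration): the public-facing method does a real pre-condition check, while the internal helper, whose call sites are all under the author's control, documents its contract with an assert.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

class Buffer {
public:
    explicit Buffer(std::size_t n) : data_(n) {}

    // Public-facing: callers are outside our control, so perform a
    // real pre-condition check and report the violation.
    int at(std::size_t i) const {
        if (i >= data_.size())
            throw std::out_of_range("Buffer::at: index out of range");
        return fetch(i);
    }

    std::size_t size() const { return data_.size(); }

private:
    // Internal: we control every call site, so an assert both
    // documents and checks the contract in debug builds.
    int fetch(std::size_t i) const {
        assert(i < data_.size() && "fetch: caller must validate index");
        return data_[i];
    }

    std::vector<int> data_;
};
```

The design choice here is that the cost of validation is paid once, at the trust boundary, rather than in every internal helper.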
This seems more reasonable than using if checks everywhere in some functions (in this case, internal functions).
It's fine saying "but the error checking costs me performance in my game loop," but you have to balance game performance against your ability to debug and finish the project, and against player perception if you release a buggy game.
Yeah. I guess my next questions would be: how do you personally decide what to do when an error has occurred? Where do you draw the line between performance and error handling? As an extreme example: are you really going to handle NaNs with if checks in a math/physics library?
Debug-only asserts may seem like a good idea in theory, but they are an absolute nightmare in production systems. Every piece of software I've worked on has ended up with a hard rule of no debug-only asserts.
The issue is that as soon as you have asserts that don't trigger in release builds, you are testing fundamentally different code paths from those executing in production. That assert(x != nullptr) seems like a great idea at the time, but did you realise that you've just guaranteed that every piece of code after that line has never been tested with null values?
And this is why I'm on the fence about how I should approach error handling. If you use even a single assert (without a matching if check to exit the function or otherwise handle the problem) to eliminate one possible code path in a Debug build, your code is technically broken (or at least has a bug) in Release builds.
Maybe what I should also ask is: in what cases should you only use asserts, use asserts plus some appropriate if checks*, or use asserts and if checks* for absolutely everything? Or are the slight performance hit and the development-time cost of gracefully handling every single error worth it?
*By "if check", I mean either setting the value to something appropriate so the function can continue running, exiting the function early, and/or deciding to crash the program (if the error is severe enough).
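The three flavours of "if check" described above might be sketched like this (all function names are hypothetical, invented purely for illustration):

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>

// 1. Fix up the value so the function can continue running.
float safeSpeed(float speed) {
    if (std::isnan(speed)) speed = 0.0f;  // substitute something sane
    return speed;
}

// 2. Bail out of the function early and report the failure.
bool applyDamage(int* health, int amount) {
    if (health == nullptr || amount < 0)
        return false;                     // refuse to proceed
    *health -= amount;
    return true;
}

// 3. The error is severe enough: log it and crash deliberately.
void loadCriticalAsset(const char* path) {
    if (path == nullptr) {
        std::fprintf(stderr, "fatal: null asset path\n");
        std::abort();                     // controlled crash
    }
    // ... load the asset ...
}
```

Which flavour is appropriate depends on whether the caller can do anything useful with the failure, which seems to be exactly the judgment call the thread is wrestling with.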