Crap
You're completely missing the point, or not seeing the forest for the trees.
For the disk-space allocation example, let's say the possibilities are that the OS either doesn't let us create the file, creates it with fewer bytes than we requested, or creates it with the correct number of bytes.
A typical "error handling" approach using exceptions would look something like:
struct out_of_space_exception : std::exception { ... };
struct cannot_create_file_exception : std::exception { ... };
void allocate_file( const char* path, int size ); // throws out_of_space_exception, cannot_create_file_exception
If you use return codes instead of exceptions, you can rearrange that to something like:
#define ERROR_SUCCESS 0
#define ERROR_OUT_OF_SPACE 0x87654321
#define ERROR_CANNOT_CREATE_FILE 0x87654320
typedef int ERROR;
ERROR allocate_file( const char* path, int size ); // returns ERROR_SUCCESS, ERROR_OUT_OF_SPACE, ERROR_CANNOT_CREATE_FILE
But the "there are no errors" philosophy says to try to minimise the different types of results that can occur and streamline the caller towards easily dealing with them, so you'd end up with something like:
int try_allocate_file( const char* path, int size ); // returns the number of bytes allocated, or -1 if the file couldn't be created
In contrast to the "error handling" philosophies, the "error" conditions aren't defined as exceptional or error conditions. Instead, the "error" conditions are normal (expected) values within the defined output domain. IMHO, making them expected values increases the likelihood of them being handled correctly. Furthermore, the fact that this is a request to the OS is made perfectly clear to readers at the call-site by clear naming, in this case the "try_*" prefix. If the call-site doesn't handle the return code, this prefix is a code-smell to readers, making the mistake stand out ("if this is a request, how is the result checked here?").
Of course the same "errors" can occur. The difference is in the label that you put on them.
One camp says that "allocate_bytes" can generate error conditions, including total failure or incorrect number of bytes.
The other camp says that "try_allocate_bytes" can allocate an unpredictable amount of bytes, including less than none.
Another way to word the philosophy might be "don't treat errors as being exceptional", or "make failures expected".
And yes, if you adopt this philosophy you can rearrange the vast majority of your code so that there are no "errors" at all, by moving the parts that can fail (e.g. OS requests) off into their own corner, and then making the rest of your code error-less. Exceptions have the opposite effect, where you have to treat almost every single function call in the entire program (including operators that can be overloaded) as if it could potentially trigger an error condition that must be handled. To pretend that there is no difference there, and that the real-world effects on your architecture are imaginary and not concrete is to indulge in significant ignorance or farce.
No one's arguing that if we can write code without errors/exceptions/whatever we shouldn't do so. But just calling errors something else doesn't change the method of handling them. For example, how are these any different?
if (try_allocate_file(file_name, file_size) != ERROR_SUCCESS) {}
if (try_allocate_file(file_name, file_size) != file_size) {}
It's the same thing; it's still an error return code, just called something different.
In the original discussion we had on this, you posted a link to this article: http://joeduffyblog.com/2016/02/07/the-error-model/. I read it and found it interesting. It even gave this quote:
Anders Hejlsberg : No, because in a lot of cases, people don’t care. They’re not going to handle any of these exceptions. There’s a bottom level exception handler around their message loop. That handler is just going to bring up a dialog that says what went wrong and continue. The programmers protect their code by writing try finally’s everywhere, so they’ll back out correctly if an exception occurs, but they’re not actually interested in handling the exceptions.
He calls it a 'head scratcher'. Now, if you're writing an OS or a driver, yeah, that's a bad idea, but for a user application, 99% of the time that makes perfect sense. After going through all the rigmarole he decides on two things. The first is to call bugs 'not errors':
Given that bugs are inherently not recoverable, we made no attempt to try. All bugs detected at runtime caused something called abandonment, which was Midori's term for something otherwise known as "fail-fast".
Given that he calls the original quote a 'head-scratcher' and then basically does the exact same thing... I'm sorry, this is just a semantic game he's playing. It's not really an 'error', it's an 'unrecoverable bug'... And if ignoring an exception is bad, then how is automatic program termination any less bad? At least in the case of the exception you can attempt to unwind the stack and maybe save some of the data. And as for the assertion that he, you, and many others make that 'anything at any time can throw an exception', here's a list he came up with of things that are bugs that warrant 'fail fast':
- An incorrect cast.
- An attempt to dereference a null pointer.
- An attempt to access an array outside of its bounds.
- Divide-by-zero.
- An unintended mathematical over/underflow.
- Out-of-memory.
- Stack overflow.
- Explicit abandonment.
- Contract failures.
- Assertion failures.
And that's just the tip of the iceberg. Talk about 'can fail anywhere'. I don't see how you can criticize exceptions and then turn around and tout the same problem as a 'feature'. This is nonsense.
Now I agree that errors (and hence exceptions) can occur nearly anywhere. But exceptions have an advantage over the 'std::terminate' method he and you advocate: some can be caught if you want to catch them, because not all of these conditions are unrecoverable under all circumstances. So worst case, exceptions are just as bad as 'unrecoverable bugs'; best case, they are better. The reality lies in between, but at least with exceptions you have the option.
I also find it ironic that in the end after calling many errors 'unrecoverable bugs', the rest of the time he advocates exceptions. I also found this quote interesting:
The model I just described doesn’t have to be implemented with exceptions. It’s abstract enough to be reasonably implemented using either exceptions or return codes. This isn’t theoretical. We actually tried it. And this is what led us to choose exceptions instead of return codes for performance reasons.
Time and time again I hear people state that exceptions are slow. This is not true. Exceptions are faster than checked return codes on any modern compiler, and if you're not checking your return codes, then exceptions are safer. With RAII they are almost always faster and safer.
So in short:
- Errors occur. Whether you call them errors, assertions, exceptions, 'unrecoverable bugs', or otherwise, it's still a condition that can and does occur and must be handled. Playing semantic games helps no one and leads to these ridiculously confusing debates.
- Errors can occur nearly everywhere. Whether you choose to throw an exception, call std::terminate, or ignore them makes no difference to how often these conditions occur. Using exceptions doesn't magically cause them to occur more often; calling them 'unrecoverable bugs' doesn't cause them to occur any less.