The answer is --- "during development and to deal with changes in behavior of OS, API and library functions".
It seems we both agree that once we have our applications working (or even just functions or subsystems working), we see almost no errors at all. However, when we write a couple thousand lines of new code, we may have made a mistake, inserted a typo, or misunderstood how some OS/API/library function or service is supposed to work [in some situations]. So that's mostly what error checking is for.
This might imply we can just remove the error checking from each function or subsystem after we get it working. There was a short time in my life when I did that, but I discovered fairly soon why that's not a wise idea. The answer is... our programs are not stand-alone. We call OS functions, API functions, support-library functions, and functions in libraries we create for one purpose and later find helpful in our other applications. And sometimes other folks add bugs to those functions, or add new features that we did not anticipate, or handle certain situations differently (often in subtle ways). If we remove all our error checking (rather than compile it out with #ifdef DEBUG or equivalent), we tend to run into extremely annoying and difficult-to-identify bugs at random times in the future, as those functions in the OS, APIs and support libraries change.
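A minimal sketch of what "compile it out with #ifdef DEBUG" might look like in C. The CHECK macro and the divide function are my own illustration, not from any particular codebase; the point is that the diagnostic reporting vanishes in release builds while the real error path stays in:

```c
#include <stdio.h>

/* In debug builds, verify the condition and report failures to
   stderr; in release builds the check compiles away to nothing. */
#ifdef DEBUG
#define CHECK(cond, msg)                                      \
    do {                                                      \
        if (!(cond)) {                                        \
            fprintf(stderr, "CHECK failed: %s (%s:%d)\n",     \
                    (msg), __FILE__, __LINE__);               \
        }                                                     \
    } while (0)
#else
#define CHECK(cond, msg) ((void)0)
#endif

/* Hypothetical function: the debug-only CHECK disappears in
   release builds, but the error return itself is never removed. */
int divide(int a, int b, int *result)
{
    CHECK(b != 0, "divide by zero");
    if (b == 0)
        return -1;          /* the real error path stays in */
    *result = a / b;
    return 0;
}
```

Building with -DDEBUG re-enables the diagnostics without touching the code, which is exactly why compiling checks out beats deleting them.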
There is another related problem with the "no error checking" approach. If our application calls functions in lots of different APIs and support libraries, it doesn't help us much if the functions in those libraries blow themselves up when something goes wrong. That leaves us with few clues as to what failed. So in an application that contains many subsystems and many support libraries, we WANT those functions to return error values to our main application so we can figure out what went wrong with as little hassle as possible.
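To make that concrete, here is a hedged sketch of the style being described: a support-library function that reports failure to its caller instead of printing a message and exiting on its own. All the names (sub_scale, sub_status, sub_strerror) are hypothetical illustrations:

```c
#include <limits.h>
#include <stddef.h>

/* Hypothetical subsystem error codes -- names are illustrative only. */
typedef enum {
    SUB_OK           =  0,
    SUB_ERR_BADARG   = -1,
    SUB_ERR_OVERFLOW = -2
} sub_status;

/* A support-library function that returns an error value to its
   caller rather than terminating the process on failure.         */
sub_status sub_scale(int value, int factor, int *out)
{
    if (out == NULL)
        return SUB_ERR_BADARG;
    long long wide = (long long)value * (long long)factor;
    if (wide > INT_MAX || wide < INT_MIN)
        return SUB_ERR_OVERFLOW;
    *out = (int)wide;
    return SUB_OK;
}

/* The main application decides what a failure means and how to
   describe it -- the library just hands back the facts.          */
const char *sub_strerror(sub_status s)
{
    switch (s) {
    case SUB_OK:           return "ok";
    case SUB_ERR_BADARG:   return "bad argument";
    case SUB_ERR_OVERFLOW: return "result out of range";
    default:               return "unknown error";
    }
}
```

The application keeps full control: it can retry, log, substitute a default, or give up, with a clear record of which subsystem reported what.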
You seem like a thoughtful programmer, so I suspect you do what I do --- you try to write much, most or almost all of your code in an efficient but general way, so it can be adopted as a subsystem in other applications. While the techniques you prefer work pretty well in "the main application", they aren't so helpful when portions of your application become a support library. At this point in my career, almost every application I write (even something huge like my entire 3D simulation/graphics/game engine) is designed to become a subsystem in something larger and more inclusive. So I sorta think of everything I write now as a subsystem, and worry about how convenient and helpful it will be for an application that adopts it.
Anyway, those are my thoughts. No reason you need to agree or follow my suggestions. If you only write "final apps" that will never be subsystems in other apps, your approaches are probably fine. I admit to never having programmed with RAII, and to generally avoiding nearly everything that isn't "lowest-level" and "eternal". The "fads" never end, and 99% of everything called "a standard" turns out to be gone in 5 or 10 years... which obsoletes the applications that adopted those fads/standards along with them. I never run into these problems, because I never adopt any standard that doesn't look reliably "eternal" to me. Conventional error codes are eternal. OS exception mechanisms are eternal. Also, all the functions in my libraries are C functions and can be called by C applications compiled with C compilers (in other words, the C function protocol is eternal). This makes my code as generally applicable as possible... not just to my own applications, but to the widest possible variety of others too.
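For readers unfamiliar with the "C function protocol" idea, here is a hedged sketch of what such an interface conventionally looks like: a header whose functions return plain int error codes, wrapped in the standard __cplusplus guard so C++ callers link against the same unmangled C symbols. The library name and functions (mylib_init, mylib_process) are purely illustrative:

```c
/* mylib.h -- a hypothetical interface kept callable from plain C.
   The extern "C" guard tells C++ compilers to use C linkage for
   these symbols; C compilers never see it at all.                 */
#ifdef __cplusplus
extern "C" {
#endif

int mylib_init(void);               /* returns 0 on success     */
int mylib_process(const char *in);  /* conventional error code  */

#ifdef __cplusplus
}
#endif

/* Stub implementations, just so the sketch is self-contained. */
int mylib_init(void) { return 0; }
int mylib_process(const char *in) { return in ? 0 : -1; }
```

Because nearly every language can call C functions, an interface like this stays usable from C, C++, and most FFI systems without change.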
There's no reason you or anyone else needs to make these same policy decisions. I am fully aware that most people chase fads their entire lives, and most of the code they write becomes lame, problematic or worthless after a few years --- not because their code was bad, but because the assumptions they and their support libraries adopted are replaced by other fads or become obsolete. All I can say is, my policies accomplish what I want extremely effectively. Most of the code I write is part of a very large, very long-term application that will end up taking 20 years to complete (and will then be enhanced and extended indefinitely). So I literally must not adopt any fads, or anything that might become a fad in the next 30 years. You would be completely correct to respond that not everyone needs to write in such an "eternal", "bomb-proof" and "future-proof" manner as I do. People can make their own decisions. That's fine with me. I hope that's fine with you too.
One final comment that is also somewhat specific to my long-term application (and therefore a requirement for every subsystem I develop). This application must be able to run for years, decades, centuries. True, I don't count on that: the application is inherently designed to recognize and create "stable points" (sorta like "restore points" in Windows), and therefore to be able to crash, restart and pick up where it left off without "losing its mind". But the intention isn't to crash, restart and restore very often... the attempt is to design in such a way that this never happens. Yet the application must be able to handle this situation reliably, efficiently and effectively. Perhaps the best example of this kind of system is an exploration spacecraft that travels and explores asteroids, moons, planets (from orbit) and the solar system in general. The system must keep working, no matter what. And if "no matter what" doesn't work out, it needs to restart-restore-continue without missing a beat. Now you'll probably say, "Right... so go ahead and let it crash". And I'd say that maybe that would work... maybe. But physical systems are too problematic for this approach, in my opinion. Not only do physical machines wear and break, they go out of alignment; they need to detect problems, realign themselves, reinitialize themselves, replace worn or broken components when necessary, and so forth. And those are only the problems with the mechanisms themselves. The number of unexpected environments and situations that might be encountered is limitless, and the nature of many of these is not predictable in advance (except in the most general senses).
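A toy sketch of the "stable point" idea, under my own simplifying assumptions (a single state struct, one file, names invented for illustration). A real system would write to a temporary file and rename it, checksum the contents, and keep several generations, but the save/restore shape is the same:

```c
#include <stdio.h>

/* A toy "stable point": the whole application state in one struct. */
typedef struct {
    long   steps_completed;
    double position;
} app_state;

/* Write the current state so a future restart can resume from it.
   Returns 0 on success, -1 on any failure.                         */
int save_stable_point(const char *path, const app_state *s)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t n = fwrite(s, sizeof *s, 1, f);
    int close_err = fclose(f);
    return (n == 1 && close_err == 0) ? 0 : -1;
}

/* On startup: returns 0 and fills *s if a stable point exists,
   -1 otherwise, so the process can pick up where it left off.      */
int load_stable_point(const char *path, app_state *s)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    size_t n = fread(s, sizeof *s, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}
```

The crash/restart loop then becomes: try to load a stable point at startup; if one exists, resume; otherwise start fresh; and save a new stable point after each known-good milestone.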
I suppose I have developed a somewhat different way of looking at applications as a result of needing to design something so reliable. It just isn't acceptable to let things crash and restart again. That would lead to getting stuck in endless loops... trying to do something, failing, resetting, restarting... and repeating endlessly. A seriously smart system needs to detect and record every problem it can, because that is all evidence that the system will need to figure out what it needs to fix, when it needs to change approach, how it needs to change its approach, and so forth. This leads to a "never throw away potentially useful information" premise. Not every application needs to be built this way. I understand that.
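The "never throw away potentially useful information" premise might be sketched, in its simplest form, as a fixed-size journal of every problem the system has observed. This is my own minimal illustration (the names and the in-memory ring buffer are assumptions); a real system would timestamp these records and persist them across restarts:

```c
#include <string.h>

/* A tiny fixed-size journal of problems the system has seen.
   Nothing is discarded until the buffer wraps, so the evidence
   is available later when the system decides what to fix.       */
#define JOURNAL_CAP 64

typedef struct {
    int  code;
    char message[80];
} problem_record;

static problem_record journal[JOURNAL_CAP];
static int journal_count = 0;   /* total problems ever recorded */

/* Record a problem instead of aborting: the error becomes data. */
void record_problem(int code, const char *msg)
{
    problem_record *slot = &journal[journal_count % JOURNAL_CAP];
    slot->code = code;
    strncpy(slot->message, msg, sizeof slot->message - 1);
    slot->message[sizeof slot->message - 1] = '\0';
    journal_count++;
}

int problems_recorded(void)
{
    return journal_count;
}
```

The decision-making layer can then scan the journal for patterns (the same subsystem failing repeatedly, failures clustered after a particular action) before choosing whether to retry, realign, or change approach.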
I'm not sure what "error driven code" is supposed to mean. In my programs, including my 3D simulation/graphics/game engine, errors are extremely rare, pretty much vanishingly rare. You could say this engine (and many programs like it) is "bomb proof" in the sense that it is rock solid and has no "holes". Unfortunately, things go wrong in rare situations with OS, API and library functions, including OpenGL, drivers, and so forth... so even "a perfect application" needs to recognize and deal with errors.
In short: why choose to have your code full of error checking (which breaks code flow and makes the code harder to read - that is really undeniable, IMO) to handle errors that are rare and unrecoverable anyway? Leave those to exceptions (or just let the process crash), and keep the error-checking code for cases where you can intelligently handle an error and take appropriate action. It's best not to conflate exceptional conditions with expected errors.
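In C terms, that distinction might look like the sketch below (both functions are hypothetical illustrations). An expected failure, such as a file that simply isn't there, is checked and handled as a normal outcome; a violated invariant, which means the program itself is already wrong, is left to assert so the process crashes loudly instead of threading an unrecoverable error through every caller:

```c
#include <assert.h>
#include <stdio.h>

/* Expected error: the file may simply not exist. Check it and
   handle it as a normal, anticipated outcome.                   */
long file_size_or_default(const char *path, long fallback)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return fallback;        /* anticipated; no drama needed */
    long size = -1;
    if (fseek(f, 0, SEEK_END) == 0)
        size = ftell(f);
    fclose(f);
    return size >= 0 ? size : fallback;
}

/* Exceptional condition: a violated invariant means the program
   is already in a state it was never designed to be in, so fail
   fast rather than limp onward with corrupt assumptions.        */
int checked_index(const int *arr, int len, int i)
{
    assert(arr != NULL && i >= 0 && i < len);  /* unrecoverable bug */
    return arr[i];
}
```

The first kind of check belongs in shipping code forever; the second documents an assumption and trades graceful degradation for an immediate, debuggable failure.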