Best Practice for Returning Values in C/C++

25 comments, last by Lightness1024 11 years, 1 month ago

I have to disagree with part of your message. In many of the functions in my engine, some resources further down in the function cannot be created unless the resources created earlier in the function succeed. So you can't just willy-nilly "free everything" in one place... at least not in every function or situation. Some of these problems can be eliminated by setting all resource variables to zero at the function entry, and then checking those resource identifiers for non-zero values in a single "free all resources if an error is encountered" spot near the end. I used to do that sometimes years ago, but found that "freeing the appropriate resources" in each place is no big deal. I suppose part of the reason for that is that these situations occur only a handful of times in even my large applications.
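To make that concrete, here is a minimal sketch of the "zero at entry, free in one spot" pattern described above. malloc/free stand in for whatever resources the engine actually creates, and all names are invented for illustration:

#include <cstdlib>

// Every resource pointer is zeroed at function entry; a single cleanup
// block at the end frees only what was actually created, since later
// resources are only attempted after earlier ones succeed.
int BuildScratchBuffers(char** outA, char** outB)
{
    int error = 0;
    char* a = nullptr;   // "set all resource variables to zero at entry"
    char* b = nullptr;

    a = (char*)std::malloc(1024);
    if (!a) { error = -1; goto cleanup; }

    b = (char*)std::malloc(4096);    // only attempted once 'a' succeeded
    if (!b) { error = -2; goto cleanup; }

    *outA = a;                       // success: the caller owns both
    *outB = b;
    return 0;

cleanup:
    std::free(b);   // free(nullptr) is a no-op, so only resources that
    std::free(a);   // were actually created get released here
    return error;
}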

I think you have gotten the wrong idea: RAII and scope guards deal with cleanup and rollback at any granularity; it's not "free everything". In fact, the more complicated things get, the better these techniques fare against C-style error handling.

These presentation slides by Andrei Alexandrescu from Dec 2012 demonstrate it very clearly. Slide 37: C-style error handling. Painful; it buries the normal execution path in one big ball of error handling and destroys readability. Slide 42: the same code with modern RAII. Easy to read, hard to get wrong. (Plus slide 43: a further syntax improvement.)
http://sdrv.ms/RXjNPR
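For anyone who can't open the link, here is a minimal scope-guard sketch in the spirit of those slides (this is not Alexandrescu's actual implementation; it assumes C++17 for class template argument deduction, and AcquireA/AcquireB/ReleaseA are made-up stand-ins for real resource calls):

#include <utility>

// A minimal scope guard: the stored callable runs when the guard is
// destroyed, unless dismiss() was called first.
template <typename F>
class ScopeGuard {
    F fn_;
    bool active_;
public:
    explicit ScopeGuard(F fn) : fn_(std::move(fn)), active_(true) {}
    ~ScopeGuard() { if (active_) fn_(); }
    void dismiss() { active_ = false; }   // success path: skip the rollback
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

static int  AcquireA()    { return 1; }   // stubs standing in for real
static void ReleaseA(int) {}              // resource acquisition/release
static int  AcquireB(int) { return 1; }

bool CreateTwoThings()
{
    int a = AcquireA();
    if (!a) return false;
    ScopeGuard releaseA([&] { ReleaseA(a); });  // rollback armed per resource

    int b = AcquireB(a);
    if (!b) return false;   // releaseA fires automatically on this path

    releaseA.dismiss();     // everything succeeded: keep both resources
    return true;
}

Note how each early return stays a one-liner: the rollback for everything acquired so far happens automatically, at exactly the granularity each guard was armed with.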

I'm not sure what "error driven code" is supposed to be. In my programs, including my 3D simulation/graphics/game engine, errors are extremely rare, pretty much vanishingly rare. You could say this (and many programs) are "bomb proof" in the sense that they are rock solid and have no "holes". Unfortunately, things go wrong in rare situations with OS API functions and library functions, including OpenGL, drivers, and so forth... so even "a perfect application" needs to recognize and deal with errors.

"errors are extremely rare, pretty much vanishingly rare" --> Exactly. And most of those vanishingly rare errors, I would argue, are essentially impossible to recover from in any reasonable way (essential game data files not loading, graphics driver not able to allocate a resource, etc...). The only reasonable thing to do is "crash", or stay alive long enough to generate a crash report and try to inform the user what happened, etc...

You may think you've handled everything with your error checking - but unless these errors have actually happened, all you really have is a lot of untested code paths. Untested code paths are scary! I used to work on a large codebase where exceptions were banned, and error handling was done as you are doing (albeit often with RAII and clearer scoping). We used fault injection techniques to catch bugs in error handling code. I hope you are doing the same!
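For what it's worth, fault injection can be as crude as a wrapper that sabotages every Nth call so that those scary untested error paths actually execute under test. This is only an illustrative sketch (TestMalloc and the globals are invented, not what that codebase used):

#include <cstdlib>
#include <cstddef>

// Deliberately fail every Nth allocation so out-of-memory error paths
// actually run. g_failEvery == 0 disables injection (normal operation).
static unsigned long g_allocCount = 0;
static unsigned long g_failEvery  = 0;

void* TestMalloc(std::size_t size)
{
    ++g_allocCount;
    if (g_failEvery != 0 && g_allocCount % g_failEvery == 0)
        return nullptr;             // injected failure
    return std::malloc(size);
}

// A test harness might sweep g_failEvery from 1 upward and verify the
// program either succeeds or fails cleanly at every injection point.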

In short: why choose to have your code full of error checking (which breaks code flow and makes the code harder to read - that is really undeniable, IMO) to handle errors that are rare and unrecoverable anyway? Leave those to exceptions (or just crash the process), and keep the error checking code for cases where you can intelligently handle them and take appropriate action. It's best not to conflate exceptional conditions with expected errors.

void DoSomething(int* const result); // multiple-value return via an out-parameter

If you've never seen const in that specific position (between the pointer and name), it means you can change the data but not the pointer (barring any goofy tricks).
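A tiny sketch of what that looks like inside a definition (using the hypothetical DoSomething above):

void DoSomething(int* const result)
{
    *result = 42;   // fine: the pointed-to int is not const
    // ++result;    // error: the pointer itself is const and can't be reseated
}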

I believe that "const" is a no-op in a forward declaration? Remove it, and nothing will change.

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

It could change the mangled function name in C++, so you could get a link error if you don't change the function definition too? Though it is nearly useless; it's just a reminder that you don't intend to increment the pointer inside the function, since you can still copy it or index from it.

I don't think the "const" is part of the mangled name when it applies to the argument itself (a top-level const). Just as the argument names can be ignored, that "const" can be ignored. I am not sure of this, but it seems to work with gcc.

But in the function implementation itself, it will have an effect. That means that you can't change the argument. This is good in general I think, making for more robust code.
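As far as I can tell, that matches the standard's rule: top-level cv-qualifiers on parameters are dropped from the function type, so they affect neither overloading nor (in practice) mangling, and a declaration and its definition may disagree about them:

void DoSomething(int* result);        // declared without const

void DoSomething(int* const result)   // the very same function
{
    *result = 42;                     // enforced only inside the body:
    // ++result;                      // reseating the pointer is rejected
}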

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

The answer is --- "during development and to deal with changes in behavior of OS, API and library functions".

It seems we both agree that once we have our applications working (or even just functions or subsystems working), we almost don't get any errors at all. However, when we write a couple thousand lines of new code, we might have made a mistake, inserted a typo, or misunderstood how some OS/API/library function/service is supposed to work [in some situations]. So that's mostly what error checking is for.

This might imply we can just remove the error checking from each function or subsystem after we get it working. There was a short time in my life when I did that. But I discovered fairly soon why that's not a wise idea. The answer is... our programs are not stand-alone. We call OS functions, we call API functions, we call support library functions, we call functions in other libraries we create for other purposes and later find helpful for our other applications. And sometimes other folks add bugs to those functions, or add new features that we did not anticipate, or handle certain situations differently (often in subtle ways). If we remove all our error catching (rather than omit it with #ifdef DEBUG or equivalent), we tend to run into extremely annoying and difficult-to-identify bugs at random times in the future as those functions in the OS, APIs and support libraries change.
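One common shape for the "#ifdef DEBUG or equivalent" approach is a checking macro that logs and propagates failures in debug builds but reduces to the bare call in release builds. CHECKED is invented for this sketch and assumes the wrapped call returns 0 on success:

#include <cstdio>

#ifdef DEBUG
  #define CHECKED(call)                                           \
      do {                                                        \
          int err_ = (call);                                      \
          if (err_ != 0) {                                        \
              std::fprintf(stderr, "%s failed (%d) at %s:%d\n",   \
                           #call, err_, __FILE__, __LINE__);      \
              return err_;                                        \
          }                                                       \
      } while (0)
#else
  #define CHECKED(call) (void)(call)   // release: the check compiles away
#endif

// Usage inside any function that itself returns an int error code:
//     CHECKED(InitSubsystem());
//     CHECKED(LoadConfig("settings.ini"));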

There is another related problem with the "no error checking" approach too. If our application calls functions in lots of different APIs and support libraries, it doesn't help us much if the functions in those support libraries blow themselves up when something goes wrong. That leaves us with few clues as to what went wrong. So in an application that contains many subsystems and many support libraries, we WANT those functions to return error values to our main application so we can figure out what went wrong with as little hassle as possible.
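At a subsystem boundary, that can be as simple as a status enum the library hands back instead of terminating on its own. All names here are invented for illustration:

enum SubsysStatus {
    SUBSYS_OK = 0,
    SUBSYS_ERR_BAD_ARG,
    SUBSYS_ERR_OUT_OF_MEMORY,
    SUBSYS_ERR_DEVICE_LOST
};

// The library reports what went wrong; the application decides what to do.
SubsysStatus SubsysInit(int deviceIndex)
{
    if (deviceIndex < 0)
        return SUBSYS_ERR_BAD_ARG;   // the caller gets a clue, not a crash
    // ... real initialization would go here ...
    return SUBSYS_OK;
}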

You seem like a thoughtful programmer, so I suspect you do what I do --- you try to write much, most or almost all of your code in an efficient but general way so it can be adopted as a subsystem in other applications. While the techniques you prefer work pretty well in "the main application", they aren't so helpful if portions of your applications become a support library. At this point in my career, almost every application I write (even something huge like my entire 3D simulation/graphics/game engine) is designed to become a subsystem in something larger and more inclusive. So I sorta think of everything I write now as a subsystem, and worry how convenient and helpful it will be for an application that adopts it.

Anyway, those are my thoughts. No reason you need to agree or follow my suggestions. If you only write "final apps" that will never be subsystems in other apps, your approaches are probably fine. I admit to never having programmed with RAII, and to generally avoiding nearly everything that isn't "lowest-level" and "eternal". The "fads" never end, and 99% of everything called "a standard" turns out to be gone in 5 or 10 years... which obsoletes the applications that adopted those fads/standards along with them. I never run into these problems, because I never adopt any standards that don't look reliably "eternal" to me. Conventional errors are eternal. OS exception mechanisms are eternal. Also, all the functions in my libraries are C functions and can be called by C applications compiled with C compilers (in other words, the C function protocol is eternal). This makes my applications as generally applicable as possible... not just to my own applications, but to the widest possible variety of others too.
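In practice, "all the functions in my libraries are C functions" usually means exposing the library through the C ABI, something like this (names invented):

#ifdef __cplusplus
extern "C" {
#endif

int  engine_init(void);       /* unmangled C signatures: callable from C */
void engine_shutdown(void);   /* and from nearly any language's FFI      */

#ifdef __cplusplus
}
#endif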

There's no reason you or anyone else needs to make these same policy decisions. I am fully aware that most people chase fads their entire lives, and most of the code they write becomes lame, problematic or worthless after a few years --- not because their code was bad, but because assumptions they and support libraries adopted are replaced by other fads or become obsolete. All I can say is, my policies accomplish what I want extremely effectively. Most of the code I write is part of a very large, very long term application that will end up taking 20 years to complete (and will then be enhanced and extended indefinitely). So I literally must not adopt any fads, or anything that might become a fad in the next 30 years. You would be completely correct to respond that not everyone needs to write in such an "eternal", "bomb proof" and "future proof" manner as I do. People can make their own decisions. That's fine with me. I hope that's fine with you too.

One final comment that is also somewhat specific to my long term application (and therefore a requirement for every subsystem I develop). This application must be able to run for years, decades, centuries. True, I don't count on this: the application is inherently designed to recognize and create "stable points" (sorta like "restore points" in Windows), and therefore be able to crash, restart and pick up where it left off without "losing its mind". But the intention isn't to crash, restart and restore very often... the attempt is to design in such a way that this never happens. Yet the application must be able to handle this situation reliably, efficiently and effectively. Perhaps the best example of this kind of system is an exploration spacecraft that travels and explores asteroids, moons, planets (from orbit) and the solar system in general. The system must keep working, no matter what. And if "no matter what" doesn't work out, it needs to restart-restore-continue without missing a beat. Now you'll probably say, "Right... so go ahead and let it crash". And I'd say that maybe that would work... maybe. But physical systems are too problematic for this approach, in my opinion. Not only do physical machines wear and break, they go out of alignment; they need to detect problems, realign themselves, reinitialize themselves, replace worn or broken components when necessary, and so forth. And those are only the problems with the mechanisms themselves. The number of unexpected environments and situations that might be encountered is limitless, and the nature of many of these is not predictable in advance (except in the very most general senses).

I suppose I have developed a somewhat different way of looking at applications as a result of needing to design something so reliable. It just isn't acceptable to let things crash and restart again. That would lead to getting stuck in endless loops... trying to do something, failing, resetting, restarting... and repeating endlessly. A seriously smart system needs to detect and record every problem it can, because that is all evidence that the system will need to figure out what it needs to fix, when it needs to change approach, how it needs to change its approach, and so forth. This leads to a "never throw away potentially useful information" premise. Not every application needs to be built this way. I understand that.


You're not the only one to take this approach; in a less strict fashion, the Linux kernel guidelines follow that mentality as well. I like the idea, though I'll never practice it, because I love my "fads" and high-level libraries too much; they're so much fun. It's fun to learn and apply practices from, e.g., design patterns, or from Boost things like optional, tuples, MPL, functions, lambdas. Typical fads. But genetic evolution works by keeping the best. Some companies encourage ideas so that, out of the emulsion they make, they can keep the best (free Fridays). If we try lots of software engineering stuff, we are free to throw away 80% of it after 5 years and decide it was not so nice once the hype has passed, but the 20% could stick around for the next 50 years, so it was worth the effort.
