# Is This Good Coding Practice?


## Recommended Posts

A lecturer at uni mentioned something about it being good practice to validate any incoming and outgoing variable at both the start and end of a function. Surely this is overkill for functions that aren't always being manually called by the programmer?

ace

##### Share on other sites
My friend, there is NO SUCH THING as "overkill" when it comes to documentation. When your program involves thousands upon thousands of lines of code, it is far better to have three remarks per little expression than to leave even one line remarkless... because though you *think* you'll remember what it does, you probably won't.

And that SUCKS.

Spam your code with remarks, man. That's why IDEs allow you to color code them...so you can skim thru and ignore them, but if you need to read them, they are there!

##### Share on other sites
It's a very good idea. The nastiest bugs you'll ever run into will be the ones where the source of the bug is non-obvious. Testing the validity of every input and output helps to ensure that your program crashes near the source of the error.

assert statements are perfect for this, as they are not executed in release builds.
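For concreteness, here is a minimal C++ sketch of that idea: a function bracketed by assert on the way in and on the way out. The function `safe_sqrt` is a made-up example, not something from this thread.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical example of asserting inputs and outputs.
// With NDEBUG defined (the usual release-build setting),
// both checks compile away to nothing.
double safe_sqrt(double x) {
    assert(x >= 0.0);                        // precondition: input is non-negative
    double r = std::sqrt(x);
    assert(r >= 0.0 && r * r <= x + 1e-9);   // postcondition: result is a plausible root
    return r;
}
```

If either check fires, the program stops right at the function whose contract was violated, rather than somewhere far downstream.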

##### Share on other sites
Quote:
Original post by the Speed Bump
assert statements are perfect for this, as they are not executed in release builds.

This does not have enough emphasis. Let me put it more clearly:

USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN
USE ASSERT WHEREVER YOU CAN

Don't make me start doing ASCII art letters.

[mangled ASCII art letters]

It.

##### Share on other sites
How does one unit-test something that's spattered with assertions? Does one replace the assert handler?

##### Share on other sites
I would think so, whether that means brute-force #define or using your own Assert functions that throw in the presence of unit tests.

I was thinking about this the other day. It is trivial to evaluate function preconditions, but not quite as trivial to evaluate postconditions. With a single exit point it is easy; with multiple exit points, you may have maintenance troubles.

It is in these situations that I begin to see the value of preferring private virtual functions with public nonvirtual functions to call them. The virtual call can be wrapped with pre and post condition checks easily.
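A sketch of that pattern (often called the non-virtual interface idiom): the public non-virtual function owns the pre/postcondition checks, so every override is validated at a single exit point. All class and function names here are illustrative, not from the thread.

```cpp
#include <cassert>
#include <vector>

class Stack {
public:
    void push(int value) {
        int oldSize = size();          // capture state for the postcondition
        doPush(value);                 // dispatch to the derived implementation
        assert(size() == oldSize + 1); // postcondition holds for ANY override
    }
    virtual int size() const = 0;
    virtual ~Stack() = default;
private:
    virtual void doPush(int value) = 0; // derived classes implement only this
};

class VectorStack : public Stack {
public:
    int size() const override { return static_cast<int>(data.size()); }
private:
    void doPush(int value) override { data.push_back(value); }
    std::vector<int> data;
};
```

The checks live in one place; a derived class cannot accidentally bypass them by overriding the public entry point.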

##### Share on other sites
The way I understand it, unit testing and asserts address different problem domains. Asserts are for more fine-grained tests (i.e. variable values and invariants), whereas unit testing tests the validity of an entire logical concept (i.e. the function/class does what it's supposed to).

##### Share on other sites
Quote:
Original post by daerid
The way I understand it, unit testing and asserts address different problem domains. Asserts are for more fine-grained tests (i.e. variable values and invariants), whereas unit testing tests the validity of an entire logical concept (i.e. the function/class does what it's supposed to).

Yes, I understand that. But the point is: The default behavior for an assert on Windows is to put up a modal dialog box. That doesn't play too well if you want to execute a large number of unit tests, possibly automated.

##### Share on other sites
Putting in validation can really save you time by taking you straight to the place your problem stems from, rather than having to back-track through tons of debug output. So yes, it's good practice.

Unfortunately, in functions that need to be very efficient (functions in games, or on servers dealing with very high transaction rates, for example) you may have to leave it out.

You can write special validators that can be compiled in or out of your program as needed, so when you hit a problem you can recompile with them in, and once you've fixed the bugs, compile them out for speed.

I.e. the code you put in for validation can be turned off so there's no speed loss, then activated when you hit a problem.

Try looking up the 'assert' macro for C++.
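The standard `assert` macro is exactly such a compile-in/compile-out validator: defining `NDEBUG` makes it expand to nothing. A small sketch (the `pick` function is a made-up example):

```cpp
// Build with checks:    g++ game.cpp
// Build without checks: g++ -DNDEBUG game.cpp
#include <cassert>
#include <cstddef>

int pick(const int* items, std::size_t count, std::size_t index) {
    assert(items != nullptr);   // compiled out entirely under NDEBUG
    assert(index < count);      // compiled out entirely under NDEBUG
    return items[index];
}
```

So the debug build catches the bad call at the boundary, and the release build pays nothing for the checks.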

##### Share on other sites
Quote:
Original post by Arild Fines
Quote:
Original post by daerid
The way I understand it, unit testing and asserts address different problem domains. Asserts are for more fine-grained tests (i.e. variable values and invariants), whereas unit testing tests the validity of an entire logical concept (i.e. the function/class does what it's supposed to).

Yes, I understand that. But the point is: The default behavior for an assert on Windows is to put up a modal dialog box. That doesn't play too well if you want to execute a large number of unit tests, possibly automated.

Stroustrup recommends writing your own assert that throws an exception rather than explode the application and exit. It makes sense, and it'll make it possible to work through the scenario you describe.
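A sketch of that "assert that throws" idea: a failed check raises an exception the test harness can catch, instead of aborting or popping a dialog. The macro and exception names are illustrative.

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Exception type a unit-test framework can catch and report.
struct AssertionFailure : std::logic_error {
    using std::logic_error::logic_error;
};

// Illustrative replacement for assert: throws instead of aborting.
#define THROWING_ASSERT(cond)                                     \
    do {                                                          \
        if (!(cond)) {                                            \
            std::ostringstream msg;                               \
            msg << "Assertion failed: " #cond                     \
                << " at " << __FILE__ << ":" << __LINE__;         \
            throw AssertionFailure(msg.str());                    \
        }                                                         \
    } while (0)
```

An automated test run then records the failure and moves on to the next test, rather than blocking on a modal dialog.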

##### Share on other sites
The problem is that you're looking at unit tests as a debugging tool, when in fact they are validation tools. Thus, they should be run on release code, in which asserts are disabled... the two complement each other.

##### Share on other sites
Quote:
Original post by psamty10
The problem is that you're looking at unit tests as a debugging tool, when in fact they are validation tools. Thus, they should be run on release code, in which asserts are disabled... the two complement each other.

Has the angry mob of eXtreme Programmers with pitchforks and torches arrived outside your door yet?

##### Share on other sites
Quote:
Original post by the Speed Bump
assert statements are perfect for this, as they are not executed in release builds.

Is a "release build" object code without debug info?

Lazzar

##### Share on other sites
See Design by Contract: Bill Venners' interview with Bertrand Meyer.

##### Share on other sites
I don't see any problem with the behaviour of assert while unit testing. Whatever it does, if you get an assert failure then you have found a bug which needs to be fixed. The only issue is with fools who try to assert things that they shouldn't.

For this kind of thing it's good to make two more variants of assert, for pre- and postcondition checking, i.e.:

#define PRE(x) assert("Pre: " && (x))
#define POST(x) assert("Post: " && (x))

or similar.

This helps make it clear to the caller that the assert is their fault for not following the precondition rules, not your bug.
Then use those for validating your pre and post conditions and regular asserts for normal stuff like state-based checks. You can also add an Invariant macro like the two above for invariant checking. Did your lecturer cover invariants yet?

If you're not the one calling your code then there's all the more reason to at least put precondition checks in.
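A usage sketch for those PRE/POST macros, plus the suggested invariant-checking variant. The string literal is always truthy, so it never changes the test result; it only shows up in the failure message to say whose fault the failure is. The `clampToPercent` function and its bounds are illustrative.

```cpp
#include <cassert>

#define PRE(x)       assert("Pre: "       && (x))
#define POST(x)      assert("Post: "      && (x))
#define INVARIANT(x) assert("Invariant: " && (x))

int clampToPercent(int value) {
    PRE(value >= -1000 && value <= 1000);  // caller's contract (example bounds)
    int result = value < 0 ? 0 : (value > 100 ? 100 : value);
    POST(result >= 0 && result <= 100);    // our promise to the caller
    return result;
}
```

When a PRE fires, the caller passed garbage; when a POST fires, the function itself has a bug. The message prefix makes that distinction visible immediately.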

##### Share on other sites
Quote:
Original post by iMalc
I don't see any problem with the behaviour of assert while unit testing. Whatever it does, if you get an assert failure then you have found a bug which needs to be fixed. The only issue is with fools who try to assert things that they shouldn't.

You're missing the point. The default behavior of assert (on Windows) is to throw up a blocking dialog box. That doesn't play well with automated unit testing at all.

##### Share on other sites
Quote:
Original post by Arild Fines
Quote:
Original post by iMalc
I don't see any problem with the behaviour of assert while unit testing. Whatever it does, if you get an assert failure then you have found a bug which needs to be fixed. The only issue is with fools who try to assert things that they shouldn't.

You're missing the point. The default behavior of assert (on Windows) is to throw up a blocking dialog box. That doesn't play well with automated unit testing at all.
My point is that the behaviour is irrelevant, because an assert failure should never occur anyway. The idea is to find bugs, and some companies consider an assert failure a bug regardless of whether the release build has bad consequences or not - so it's a bug found either way! Plenty of companies nowadays leave assert-type checks in release builds.

You're right that realistically it might not go so well with automated testing. However, the tool I have used at work was able to detect and process assert dialogs and log the dialog contents to a file instead, so it's doable.

I can't say I'm a fan of MS's assert dialog though. At work our assert function does different things depending on a setting in the registry: it can log the failure, pop up a dialog, throw an exception, or create a mini-dump or full dump, whichever you like.

##### Share on other sites
It's important to note the difference between assertions and unit testing. There is a difference between a broken program and an incorrect program.

First, assertions are things that are "ALWAYS TRUE". If, for some reason, an assertion is untrue, the system is broken and unstable, and must be stopped. Your program is broken and must be fixed before execution can continue to the end.

For example, dividing by zero or trying to call a function on a null pointer. If you were to try to do either of these at run time, your application would crash, so it would have stopped execution anyway. The assert just gives you more information. If you're experiencing assertion failures in your unit tests, then your code is broken; you should fix it, and stop using unit tests as a smoke screen for broken code. =)

Second, unit testing is not for testing that your program is functional; it's for testing whether your program is "correct". That is, given a set of inputs, do you get the expected outputs?

Example:
short output = Add( 1, 1 );
TEST( output == 2 );

If for some reason output does not equal 2 in the above example, the program is functioning incorrectly, but that does not mean it's unstable - just that it's wrong.

Think of Unit Testing as Black-Box testing. Does your output match your expected results, etc...
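A minimal sketch of what a TEST macro like the one in the example might look like: unlike assert, it records the failure and lets the run continue, which suits black-box "is the answer correct?" checks. The harness and the `Add` function under test are both illustrative.

```cpp
#include <cassert>
#include <cstdio>

static int g_failures = 0;  // count of failed checks across the run

// Record a failure and keep going, rather than halting the program.
#define TEST(cond)                                        \
    do {                                                  \
        if (!(cond)) {                                    \
            std::printf("FAILED: %s (%s:%d)\n", #cond,    \
                        __FILE__, __LINE__);              \
            ++g_failures;                                 \
        }                                                 \
    } while (0)

// Hypothetical function under test.
short Add(short a, short b) { return static_cast<short>(a + b); }
```

After the run, a nonzero `g_failures` means the program is wrong somewhere, even though it never became unstable.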

##### Share on other sites
Quote:
Original post by ace_lovegrove
A lecturer at uni mentioned something about it being good practice to validate any incoming and outgoing variable at both the start and end of a function. Surely this is overkill for functions that aren't always being manually called by the programmer.

ace

Well, it got threadjacked into a unit testing and assertion discussion, but I will address the OP.

[PARANOID SECURITY PERSPECTIVE]
One of the big reasons I expect applications I am responsible for designing, developing, or auditing to include a heavy data-validation component is security.

Basically, the root of all security issues in an application can be traced back to either a design issue (bad crypto algorithm, weak authentication protocol, etc) or an implementation issue (unchecked buffer, race conditions, credential management, etc).

From the perspective of overcoming these kinds of issues, there are also different types of controls that one can put in place; design issues usually involve implementing additional layers (wrapper classes, additional network infrastructure, tunneling protocols over more secure versions, etc).

From an implementation perspective, you correct virtually all bugs by implementing data validation. If the data you receive to operate on is not in the correct format, you (throw|assert|erase the hard drive) *before* you begin to operate on it. If the result is not an acceptable value, then you (throw|assert|etc) after releasing all resources acquired during that chunk of code, and verifying that they are released.

By generating extensive meaningful errors you accomplish the following:

1. Ensure that future users of your libraries, etc can count on receiving meaningful error messages

2. Ensure that future users can rely on having a specific, expected result when something goes wrong

3. Ensure that your library's logic is discrete and does not require third-party tools to implement what should be basic functionality

4. Ensure that error conditions leave the application in a *known* and *trusted* state rather than exiting abruptly and abnormally.

By handling errors properly, even at the expense of performance, you improve both the reliability and the security of your code.

[/PARANOID SECURITY PERSPECTIVE]

As an additional benefit, through conditional compilation it is fairly trivial to create libraries that can disable the additional data checks if you want to squeeze out extra performance.
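A sketch of that conditional-compilation approach: full data validation by default, compiled out when a build flag asks for maximum speed. The `SKIP_VALIDATION` flag, the macro, and the `parseHeaderLength` function are all illustrative.

```cpp
#include <cassert>
#include <stdexcept>

#ifdef SKIP_VALIDATION
#   define VALIDATE(cond, msg) ((void)0)   // checks removed for speed
#else
#   define VALIDATE(cond, msg)                                  \
        do {                                                    \
            if (!(cond)) throw std::invalid_argument(msg);      \
        } while (0)
#endif

// Reads a 2-byte big-endian length field from an untrusted buffer.
int parseHeaderLength(const unsigned char* buf, int bufSize) {
    VALIDATE(buf != nullptr, "null buffer");
    VALIDATE(bufSize >= 2, "buffer too small for header");
    int len = (buf[0] << 8) | buf[1];
    VALIDATE(len <= bufSize, "header claims more data than received");
    return len;
}
```

The error path leaves the application in a known state (an exception with a meaningful message) rather than proceeding to operate on malformed data.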

Just my $.02

##### Share on other sites
Quote:
Original post by Arild Fines
You're missing the point. The default behavior of assert (on Windows) is to throw up a blocking dialog box. That doesn't play well with automated unit testing at all.

Do you usually write unit tests that feed flawed data into functions and as such expect asserts to always be triggered during the unit tests?

##### Share on other sites
Quote:
Original post by Arild Fines
Quote:
Original post by iMalc
I don't see any problem with the behaviour of assert while unit testing. Whatever it does, if you get an assert failure then you have found a bug which needs to be fixed. The only issue is with fools who try to assert things that they shouldn't.

You're missing the point. The default behavior of assert (on Windows) is to throw up a blocking dialog box. That doesn't play well with automated unit testing at all.

Roll your own assert - it's not that hard to do. It's just a macro anyways.

##### Share on other sites
You don't even need to hack the macro. Most C compilers implement assert() via an _assert() function whose behavior you can simply override with something that will be more palatable to your unit testing framework.
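Overriding the CRT's internal assertion routine is compiler-specific; a portable variant of the same idea is to route a project-level assert macro through a replaceable handler function that a unit-testing framework can swap out. All names below are illustrative, not a real API.

```cpp
#include <cstdio>
#include <cstdlib>

using AssertHandler = void (*)(const char* expr, const char* file, int line);

// Default behavior: print and abort, like a plain assert.
static void defaultHandler(const char* expr, const char* file, int line) {
    std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n", expr, file, line);
    std::abort();
}

static AssertHandler g_handler = defaultHandler;

// A test framework installs its own handler (log, throw, count, ...).
void setAssertHandler(AssertHandler h) { g_handler = h; }

#define MY_ASSERT(cond) \
    do { if (!(cond)) g_handler(#cond, __FILE__, __LINE__); } while (0)
```

Production code keeps the hard-stop behavior; the automated test run installs a handler that is more palatable than a modal dialog.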

##### Share on other sites
1: If you have C-style functions that return 0 on failure and 1 on success, don't put the calls inside the default assert, because in release the function will not be called [grin] and the program will not work in release mode. You can even get hard-to-spot bugs. Better to write three asserts of your own:

the first is kept completely in release (expression and check);
the second's expression is kept in release, but without the check;
the third is compiled out entirely, like the standard assert.

Use the second and third in non-critical points, say, where wrong results only make visual artefacts in the output image.
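The second variant above is often written as a VERIFY-style macro (the name follows the common MFC convention; the `loadConfig` function is a made-up example). It keeps the expression's side effects in release builds while dropping the check:

```cpp
#include <cassert>

#ifdef NDEBUG
#   define VERIFY(expr) ((void)(expr))  // expression still evaluated, check dropped
#else
#   define VERIFY(expr) assert(expr)    // full check in debug builds
#endif

// C-style function: nonzero on success, writes through the out-pointer.
bool loadConfig(int* outValue) {
    *outValue = 42;
    return true;
}
```

So `VERIFY(loadConfig(&v));` still calls `loadConfig` in release, whereas `assert(loadConfig(&v));` would silently skip the call and leave `v` uninitialized.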

2: If you work on data that came from the internet or from disk or wherever, you shouldn't just use assert as soon as the data enters your program. Instead, you should put in *actual* checks that the data is valid - checks that are always there, unlike assert. An explicit runtime check that rejects (or rejects and logs) bad data is better than

assert(datasize<=max_allowed_buffer_size);

alone. You can put in both the assert and the normal test to be sure.

Assert should be triggered by bugs in the application - ideally, only by bugs. The application should handle bad packets or bad disk input correctly, and asserts should be triggered only if the application fails to work correctly with packets, whether correct or bad. That is, if you try to load a corrupted JPEG file, the application shouldn't crash due to an assert, and shouldn't ignore the assert (and get a buffer overflow); it must show a message that the file is wrong. Only if it fails to do that should it crash due to an assert. Same with networking: ideally it must log to a file that it received a bad packet/request/whatever and ignore the packet entirely, and only if it fails to detect that the packet might do something bad should it crash due to an assert (especially on bounds checks).
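A sketch of that rule: external data gets a real runtime check that always runs, while assert only guards against our own bugs. The `max_allowed_buffer_size` constant, the function name, and the bool-return style are illustrative.

```cpp
#include <cassert>
#include <cstddef>

const std::size_t max_allowed_buffer_size = 4096;  // example limit

// Returns false for bad input instead of crashing: a corrupted file
// or packet is an expected condition, not a programming error.
bool acceptPacket(const unsigned char* data, std::size_t datasize) {
    if (data == nullptr || datasize > max_allowed_buffer_size)
        return false;                                // always-on validation

    assert(datasize <= max_allowed_buffer_size);     // guards our logic only
    // ... process the packet ...
    return true;
}
```

If the assert ever fires here, the always-on validation above it has a bug; the release build still rejects the bad packet gracefully.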

3: Unit tests and assert:
The unit-test idea is based on providing predetermined/special data to a routine and checking the output. In some simple cases, unit tests are not necessary if you have many asserts.
It's best shown with simple examples.
Let me, for the nth time, post my ray-sphere intersection as an example [grin]:
//
// Ray-sphere intersection.
// p  = (ray origin position - sphere position),
// d  = ray direction,
// r  = sphere radius,
// i1 = first intersection distance,
// i2 = second intersection distance
// i1 <= i2
// i1 >= 0
// returns true if intersection found, false otherwise.
//
bool RaySphereIntersect(const Vec3d &p, const Vec3d &d, real64 r, real64 &i1, real64 &i2)
{
    real64 det, b;
    b = -DotProd(p, d);
    det = (b*b) - (p.x*p.x + p.y*p.y + p.z*p.z) + r*r;
    if (det < 0){
        // we should handle non-intersecting cases
        return false;
    }
    //else
    det = sqrt(det);
    i1 = b - det;
    i2 = b + det;
    /*
    // and that's assertion of output data
    logmsg("Intersection inaccuracy:");
    logmsg(abs((p+d*i1).Length()-r));
    logmsg(abs((p+d*i2).Length()-r));
    */
    //
    // intersecting with ray?
    if (i2 < 0) return false;
    if (i1 < 0) i1 = 0;
    return true;
}

Unit tests are not much needed in that case, because it's easy to check whether the output is correct using asserts and by just plugging the function into the application; with logging on, I can see whether it works correctly by reading the log.

But in the general case there's no easy way to check if output is correct, and you need unit tests. I have another example: a half-edge data structure implementation. I placed asserts everywhere and also wrote a routine that checks the mesh for consistency. Then, in unit tests, I just fixed some bugs that were instantly spotted by asserts. But when testing the "split edge" routine, I had to look at the output to determine whether it was correct, because my asserts only showed that the data structure was self-consistent; my edge-splitting routine could have failed to split the edge at all and no assert would have been triggered.