Any advice on testing your own code?


I recently did a test for a game company. It was an assignment sent to me by email, and I was given a time limit to get it done. I did it, tested it, and it worked fine, so I sent it back to them. They came back saying that it failed their "testing harness", so I must not have tested it thoroughly enough. Now I'm in the same situation with another company, and I really want to pass their testing harness. I find it hard to test my own code: it's easy to come up with tests it will pass, and hard to come up with tests it will fail. I want to believe that it'll work, and that works against me. Does anyone have advice on properly testing your own code? What do you usually do to make sure your code is flawless?

It depends on the code in question.

Much code is transformative: you get some input values, run some operations, and transform them into a result.

A test harness is another program that is used to test the code. Usually that means giving specific inputs and checking the results, or passing in proxy objects and verifying that the operations performed on them are correct. It can be something custom written for their code, or a larger existing test framework like xUnit (JUnit, NUnit, CppUnit, ...) or similar. These frameworks usually have you write a large number of tests, one per thing you are testing, that run automatically as part of the build system. Other test frameworks do some work to exercise whatever they are interested in and ensure the thing hasn't been broken. Most testing tools are designed to test for permanence, that is, to ensure the behavior has not changed unexpectedly.


When building tests, look at boundaries and the operations involved.

Does it do what the design says it is supposed to do? Does it do anything extra that it shouldn't? Many defects are caused by missing items on the list. An action is supposed to do five steps, but only four were implemented. Or it is supposed to do one thing, but has some extra side effects.

Does it operate across the full range of boundaries? If you've decided a function can take a range of numbers, does it work with the full range? Generally you can test this with the endpoints and a few mid-point values. If there is a specified result outside the range, such as returning an error code or throwing an exception, verify that both ends of the range generate those results. If the range includes key values like null inputs, test those too. If it requires non-null parameters, have it assert noisily on debug builds, and follow your project's practice on release builds, such as returning a failsafe result or logging and crashing. These tests can be done black-box; you don't need to see what is inside the code to run them.

If you have access to the code you can do some white-box tests, too. If your transformation involves other operations, does it operate within their ranges? For example, many implementations require the input to sin/cos operations to be within a specific range; does your call fit that range? How about square roots of negative numbers? If any library you are using has limits, can you verify that the full range of your boundaries also fits within those limits? Review it for the full range and boundaries.

Next, look for key values and error propagation. Look at the operations involved. If it involves division, is there a chance of division by zero? Are there any operations you use that can fail? How do you handle calls that can return null? Do you test for null before dereferencing any pointer? Are you testing return codes for every function you call that can return an error result?

Those are the basics I usually look at for all code I write. The pattern is so automatic I have to stop and think carefully about what I typically search for.

There are many more tests you can build, all depending on what the system is supposed to be doing. There are also many good books on the subject and hundreds of test patterns, much like design patterns for code, but for making sure software does what it is supposed to do.

For every precondition, postcondition, invariant, and assumption, add an assertion. Good code proves that it's correct via assertions (or quickly and loudly suffers an assertion failure and aborts).
Good code has a high assertion density (maybe a 10 to 1 ratio of actual code to asserts).

I also like to step through my code line by line for every possible code path and watch values change in the debugger. New code always contains silly bugs, which often go unnoticed for a long time while causing subtle problems elsewhere in the program. Assertions catch most of these, but a live line-by-line "desk check" usually catches something I hadn't thought of - usually weird edge cases.

Testing all possible paths (including the rare edge cases) is vitally important. Code that hasn't been tested is code that is wrong :)

For unsafe languages like C/C++, the Windows Application Verifier is very handy for catching nasty things like buffer overruns and use-after-frees. You also need to use a leak detector to make sure your cleanup is functioning.

Thanks for the responses! I tried Visual Leak Detector, and it's detecting leaks for things I know I de-allocated. It's also detecting leaks for simple things like:

#include <string>

class foo
{
    std::string name;

    void func(const char* str)  // ('foo::' qualifier removed; it isn't valid inside the class)
    {
        name = str;
    }
};

Apparently that calls 'new' inside of std::string somewhere? Super strange. Is it normal for a tool like Visual Leak Detector to give a lot of false positives?

PS: If the program's closing anyway, why does anyone care if there's leaks?

Edited by totesmagotes


I tried Visual Leak Detector, and it's ... Is it normal for a tool like Visual Leak Detector to give a lot of false positives?

That is more technical than job advice.

It might not be a false positive; it might indicate you are leaking foo objects. Assignment to a string can allocate memory, but the snippet you posted does not leak by itself. Assuming you properly clean up your foo object, the string inside it will release the memory it allocated. Leak detection tools indicate that something was allocated but not yet cleaned up. Maybe you really did leak a foo object, or maybe it gets cleaned up at some point after the tool takes its snapshot. Answering that would require looking at more code.

If you want help with finding memory leaks, post the relevant information in a discussion topic in another area, perhaps in 'general programming'.

PS: If the program's closing anyway, why does anyone care if there's leaks?

Several reasons, but again that is a technical question rather than job advice.

Since this was a coding test for a company, it is an example of your work, and you're submitting sloppy work. Leaks are always a problem: they accumulate, grow, and consume additional system resources until eventually your program dies. Employers will wonder if all your work is that sloppy.

You are somewhat right about program termination. Many libraries have 'fast teardown' options that do not call destructors or otherwise do no cleanup. Sometimes this happens when the program is shutting down and the objects can be dumped without consequence. Sometimes code works with pool allocators and the pool will be recycled. If the blocks are being destroyed and recycled and there is no additional cleanup to do, having a fast teardown that doesn't clean up is fine. Note that this is not an accidental leak; it is typically an explicit action.

Sometimes there is work that needs to be handled. Sometimes there are buffers that need to be flushed, written to disk, or sent across a network. Sometimes there are stats that need to be computed and stored. Sometimes there are resources that the system cannot easily reclaim and that should be returned to a proper state. If those are what you are leaking, you end up losing vital information or leaving the system unstable. If someone wrongly used a fast teardown on those objects, that is a bug caused by being too aggressive. If someone just leaks the objects, that is probably a bug caused by laziness or by not fully understanding the system.

Bringing both technical questions back to job advice...

Understanding object lifetimes is vital in programming. If your code test shows you don't know how to manage object lifetimes, that's a warning sign that you may not be the best programmer for the job. If a company is looking for programmers in languages less sensitive to object lifetimes, you might work out with some training, but that is a bad showing for a C++ programming job.

During interviews if one candidate writes code with memory or resource leaks and another candidate writes clean code, the second is far more likely to get the job.

Edited by frob


Since this was a coding test for a company, it is an example of your work, and you're submitting sloppy work. Leaks are always a problem: they accumulate, grow, and consume additional system resources until eventually your program dies. Employers will wonder if all your work is that sloppy.

...
During interviews if one candidate writes code with memory or resource leaks and another candidate writes clean code, the second is far more likely to get the job.


Quoted for truth.


PS: If the program's closing anyway, why does anyone care if there's leaks?

I wouldn’t hire such a candidate, because it speaks to his or her philosophy toward coding. It means you go out of your way to find excuses to take shortcuts. I want a programmer whose philosophy is simply, “I have allocated this, so I should delete it,” not, “I have allocated this, so I should delete it, except if it is used in this way or that way, and on every 3rd Monday of the month.”

One of these strategies is rock-solid and responsible, while the other is error-prone and lazy, not to mention that it makes it hard for people on a team to all be on the same page.

In addition to the general philosophical arguments above, pragmatically speaking the closing of an application is a good time to check for memory leaks. If you fill your debug window with reports about leaks that you consider “intentional” then you will never find leaks that are accidental. Leaks caused by shutting down and leaks caused by actually leaking memory at run-time will all be part of the same print-out soup.

Adopting the philosophy that the program is shutting down so leaks don’t matter suggests a foundation not only of laziness but of breakable programming practices, and neither quality would ever allow me to trust such a person.


While we are on the same topic, one of my past coworkers had a philosophy such that he would allocate allocate allocate, test, and if it worked he would go back and free free free, because, “Why waste my time freeing things if I don’t even know if the code will work?”
Except that testing sometimes takes a while and requires heavy focus, both of which cause one to forget every spot where an allocation had been made, leading sometimes to free free instead of free free free.
He wasted even more time tracking down all the memory leaks he had created, and at a time when it was most crucial to have a working and stable product for shipment. He was fired after 2 more projects with exactly the same problems.

Solid, trustworthy programmers never make excuses for their code.


L. Spiro


As far as methodology goes, your tests aim to tackle two broad categories: that under expected conditions the system does exactly what's advertised, correctly and with no hidden side effects; and that under unexpected conditions it breaks, aborts, asserts, or otherwise deals with the situation in a predictable, preferred manner that doesn't fail silently and allow the program to go on corrupting itself. In short, you want to prove that if it succeeds it produces the expected result, and that if it would not produce the expected result it fails loudly. Failing the ability to do even that, document (with justification) known-bad cases at the very least.

Usually, for reasons of performance or because of irrecoverable error states, you want to remove error-handling and logging code from retail builds, but you should still have those things enabled for at least some of your internal testing (certainly all unit testing, in my opinion).

The plan of attack is usually to ensure that all (or a representative sample) of valid inputs succeed, to hit boundary conditions hard, to hit lesser-used functionality or combinations of parameters hard (e.g. in a dynamic array implementation, cause it to grow and see if it leaks the old memory, or try writing past the end and before the beginning), and to do things that are legal but otherwise nonsensical (e.g. grow and shrink it 10 times in a row and never write anything to it). Those are simple, contrived examples, but they should illustrate the idea.

As far as C++ unit-testing frameworks go, I've yet to try it, but Google Test is free and open source, and it seems robust.

Edited by Ravyne
