Testing - structure, how to?

Started by TheUnbeliever
2 comments, last by TheUnbeliever 12 years, 1 month ago
So I've done a fair bit of hobby development, but never done anything beyond looking furtively at any remotely formal testing. I 'check things work' and not much more. I've tried to do differently for my current project, partly because it's towards a dissertation and the assessment considers it, but also because the failure modes are sufficiently involved that it's a terrible idea not to.

How are tests normally structured? I wrote tests for components either beforehand or at least alongside, but I usually replaced these with functional code afterwards. So I suppose my regression testing is pretty limited, but the components are sufficiently orthogonal that I've been more-or-less able to write and debug them and then leave them alone. I have some Octave scripts that generate test cases for some elements. Do I just have a 'make tests' target which runs those scripts to generate the test data files and swaps in an alternative main.cpp?
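To make that concrete, the sort of alternative main.cpp I have in mind is just a tiny driver that reads the Octave-generated data files and checks a component against them. Everything below is a rough sketch with made-up names, not code from the project:

```cpp
// test_main.cpp -- hypothetical stand-in for the normal main.cpp
#include <cmath>
#include <cstdlib>
#include <fstream>
#include <iostream>

// Placeholder for one of the real components; squaring stands in for
// whatever the component actually computes.
double component_under_test(double x) { return x * x; }

int main()
{
    // Each line of the Octave-generated file is "input expected".
    std::ifstream cases("test_data/component_cases.txt");
    if (!cases) { std::cerr << "missing test data\n"; return EXIT_FAILURE; }

    int failures = 0;
    double input = 0.0, expected = 0.0;
    while (cases >> input >> expected)
    {
        const double got = component_under_test(input);
        if (std::abs(got - expected) > 1e-9)
        {
            std::cerr << "FAIL: f(" << input << ") = " << got
                      << ", expected " << expected << '\n';
            ++failures;
        }
    }
    return failures == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```

The idea being that 'make tests' would regenerate the data files via the Octave scripts, build this instead of the normal entry point, and fail if it exits non-zero.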

Also, how do I test e.g. stochastic model-fitting code? So far I've checked by eye that it looks sensible, but do I want to have some code which compares the fitted model to ground truth and checks it's within some tolerance, maybe with some sort of 'voting' system to account for the element of randomness?
[TheUnbeliever]
Have you checked CppUnit (http://sourceforge.net/apps/mediawiki/cppunit/index.php?title=Main_Page)?
There's a wide variety of tests that you may or may not have in mind. Ideally, you want as many as possible to run as part of the build (and be a prerequisite for build success). Typically these will be unit tests, or any other kind of test that runs extremely fast. The idea is to have a quick indication of whether your code does what it's supposed to.

For more heavyweight (longer-running) tests, you might consider wiring up a pre- or post-checkin hook in your version control system to run the tests and notify you/your team if any of them fail. The same set of tests could also be run manually if/when needed (perhaps via a make target, if you like).

The way in which you might structure tests also depends on what kind of tests you have in mind, and on the existing structure of your code. If you're fortunate enough to be in the position where a good chunk of your code is segregated into libraries, you could consider having one test executable per library, where each executable links against that library and tests the functionality it exports, and quite possibly some of its internals.
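As an illustration of that layout, each such test executable can be little more than a main that links against one library, exercises its public headers, and reports failure through its exit code. The library and header names below are invented:

```cpp
// test_fitting.cpp -- compiled into its own executable and linked against
// a hypothetical libfitting; one such file (or directory of files) per library.
//
// #include "fitting/solver.h"      // the library's exported header
// #include "fitting/detail/qr.h"   // an internal header, if you also want
//                                  // the tests to poke at internals

#include <cstdlib>
#include <iostream>

int main()
{
    int failures = 0;

    // Call the exported functions on small, known cases and count anything
    // unexpected, e.g.:
    //   if (fitting::solve(trivial_case) != known_answer) ++failures;

    std::cout << failures << " failure(s)\n";
    return failures == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```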

For tests involving some kind of approximate comparison, you need to decide what qualities you are detecting 'by eye' that make the result acceptable/unacceptable. Then your test code examines such qualities. Without a specific example, it's hard to give more concrete advice, but often calculating the variance/standard deviation of the data produced by your code against a known-good reference will suffice.
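For the stochastic fitting case specifically, one pattern is to run the fit several times with different seeds and require that a clear majority of runs land within tolerance of the known parameters, so a single unlucky draw doesn't fail the test. A self-contained sketch (fit_model here is a dummy standing in for your real fitting routine, and the tolerance is arbitrary):

```cpp
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <random>

// Placeholder: in your project this would be the real fitting routine,
// returning the estimated parameter for a seeded run.
double fit_model(unsigned seed)
{
    std::mt19937 rng(seed);
    std::normal_distribution<double> noise(0.0, 0.05);
    return 2.0 + noise(rng);          // pretend the true parameter is 2.0
}

int main()
{
    const double ground_truth = 2.0;  // known-good reference value
    const double tolerance    = 0.1;  // acceptable absolute error
    const int    runs         = 9;    // odd number keeps the vote unambiguous

    int within_tolerance = 0;
    for (int i = 0; i < runs; ++i)
        if (std::abs(fit_model(1000u + i) - ground_truth) <= tolerance)
            ++within_tolerance;

    // 'Voting': the test passes if a clear majority of runs are close enough,
    // so one unlucky run of the RNG doesn't break the build.
    const bool pass = within_tolerance > runs / 2;
    std::cout << within_tolerance << '/' << runs << " runs within tolerance\n";
    return pass ? EXIT_SUCCESS : EXIT_FAILURE;
}
```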

For the nitty-gritty of writing tests, just pick up a testing library. UnitTest++ is a nice small one without much fanfare or boilerplate. I think that's important: if tests are a pain to write, fewer get written. FWIW, I rolled my own library as I needed a few additional features (I don't understand why so many libraries insist on making you use C++ identifiers to name tests, but I digress...).
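For what a minimal UnitTest++ test file looks like, something along these lines should be close (the header path varies between versions, so treat the include as approximate):

```cpp
// tests.cpp -- minimal UnitTest++ example; some installs use <UnitTest++.h>
// instead of the path below.
#include <UnitTest++/UnitTest++.h>

// A trivial function under test, just for illustration.
int add(int a, int b) { return a + b; }

TEST(AddIsCommutative)
{
    CHECK_EQUAL(add(2, 3), add(3, 2));
}

TEST(AddHandlesNegatives)
{
    CHECK_EQUAL(-1, add(2, -3));
}

// The test executable's main runs everything and returns the number of
// failures, so the build can be made to fail if any test fails.
int main()
{
    return UnitTest::RunAllTests();
}
```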

EDIT: Noel Llopis has a lot of content on his blog about setting up testing environments. He's pretty keen on Test Driven Development, so you may or may not want to skip those parts depending on whether you 'believe' in it, but there's plenty of good chin-stroking material there.
Thanks guys, that's really helpful information. I shall get dug into those links. Thanks again!
[TheUnbeliever]

This topic is closed to new replies.
