Members - Reputation: 962
Posted 01 March 2012 - 06:10 AM
How are tests normally structured? I wrote tests for components either beforehand or at least alongside them, but I usually replaced these with functional code afterwards. So I suppose my regression testing is pretty limited, but the components are sufficiently orthogonal that I've been more-or-less able to write and debug them and then leave them alone. I have some Octave scripts that generate test cases for some components. Do I just have a 'make tests' target which runs those scripts to generate the test data files and swaps in an alternative main.cpp?
Also, how do I test e.g. stochastic model-fitting code? So far I've checked by eye that it looks sensible, but do I want some code which compares the fitted model to ground truth and checks that it's within some tolerance, maybe with some sort of 'voting' system to account for the element of randomness?
Members - Reputation: 2109
Posted 01 March 2012 - 01:53 PM
For more heavyweight (longer-running) tests, you might consider wiring up a pre- or post-checkin hook in your version control system to run the tests and notify you/your team if any of them fail. The same set of tests could also be run manually if/when needed (perhaps via a make target if you like).
The way in which you might structure tests also depends on what kind of tests you have in mind, and on the existing structure of your code. If you're fortunate enough to be in the position where a good chunk of your code is segregated into libraries, you could consider having one test executable per library, where each executable links against that library and tests the functionality it exports, and quite possibly some of its internals.
For tests involving some kind of approximate comparison, you need to decide what qualities you are detecting 'by eye' that make the result acceptable/unacceptable. Then your test code examines such qualities. Without a specific example, it's hard to give more concrete advice, but often calculating the variance/standard deviation of the data produced by your code against a known-good reference will suffice.
For the nitty-gritty of writing tests, just pick up a testing library. UnitTest++ is a nice small one without much fanfare or boilerplate. I think that's important, as if tests are a pain to write, fewer get written. FWIW, I rolled my own library as I needed a few additional features (I don't understand why so many libraries insist on making you use C++ identifiers to name tests, but I digress...).
EDIT: Noel Llopis has a lot of content on his blog about setting up testing environments. He's pretty keen on Test Driven Development, so you may or may not want to skip those parts depending on whether you 'believe' in it, but there's plenty of good chin-stroking material there.