Unit Testing

It may be that some unit testing frameworks don't work well for integration testing, but that usually means the author was trying to do something unit testing wasn't meant to do. Take a close look at the name: unit testing. The idea is that you become certain that a unit works well. You do this for each unit. Then, when you combine, say, a few small units into a larger subsystem, you write unit tests for that subsystem. You don't have to test the small ones again, as their individual unit tests take care of that. Then, as you make changes to any of the units, the unit tests can be used to do regression testing with no extra work. There is a point of diminishing returns, where writing the tests becomes so time-consuming that it is no longer economically sound to do so. That is when you switch to some other testing technique for any higher-level tests.
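For illustration, here is a minimal sketch of a test for a single unit (the clamp function and test names are invented, not from the post); rerunning it after every change is the "regression testing with no extra work" described above:

```cpp
#include <cassert>

// Hypothetical small unit: clamp a value into the range [lo, hi].
int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

// Unit test for that single unit.
void testClamp()
{
    assert(clamp(5, 0, 10) == 5);   // inside the range
    assert(clamp(-3, 0, 10) == 0);  // below the range
    assert(clamp(42, 0, 10) == 10); // above the range
}

int main()
{
    testClamp(); // rerun after every change to the unit
    return 0;
}
```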
If you look at the book "Large-Scale C++ Software Design," the author makes one of the best arguments for unit testing I have read, although he calls it "hierarchical testing" because the book came out before unit testing had caught on.
Lucas Heneks
www.ionforge.com
Side effects are to be avoided when possible. Without side effects, unit testing is easy.
When you do have side effects, just load the code up with asserts, invariants, and pre/post-condition checks. Then unit test it inside the thing that uses it, or at some higher level.
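As a rough sketch of what "load it up with asserts and pre/post-condition checks" can look like (the Stack class here is made up for illustration):

```cpp
#include <cassert>
#include <vector>

// Hypothetical stack used only to illustrate pre/post-condition checks.
class Stack
{
public:
    void push(int value)
    {
        const std::size_t before = m_data.size();
        m_data.push_back(value);
        assert(m_data.size() == before + 1); // post-condition: size grew by one
    }

    int pop()
    {
        assert(!m_data.empty());             // pre-condition: stack must not be empty
        int top = m_data.back();
        m_data.pop_back();
        return top;
    }

private:
    std::vector<int> m_data;                 // invariant: holds every pushed, un-popped value
};
```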

capn_midnight has hit the nail on the head, though it takes a long time before anyone builds up the discipline to do that.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
Quote:Original post by Washu
Craftsman to the Rescue!

Just finished reading the series. You have an amusing taste [smile] Anyway, one interesting thing I noticed that I didn't realize before is that I already develop like that, I just never keep the tests. Essentially, when I write code I also write a function that tests it, and I continuously refine the test along with the production code. Once I'm satisfied that the code runs fine I simply get rid of the test code and move on to the next task on my list. However, I acknowledge the numerous advantages of keeping the tests and organizing them into automated test suites. One potential problem I see is that many of the test functions I write now require user interaction (either the user has to *look* at the screen to see if the test passes or fails, or the user has to provide some input to fire off the test). In many cases I just don't think it's possible to create completely automated testing functions. What do you do about these issues?

Another question I have is about organization of test suites. I plan to use the Boost unit testing framework but I'm not sure how to go about creating tests. Do I create a separate project or do I keep the tests in the same project as the production code? If I use the same project, how do I get rid of the test code from the production build? Preprocessor? Also, suppose I have two independent classes A and B, and a class C whose functionality depends on A and B. I develop A, then B, then C. How do I structure the test suite? First create a test for A and add it to the test framework. Then do the same for B. Once I create C, do I move the tests for A and B into C and then add a test for C, or keep the tests for A, B and C on the same level? I see that Boost supports a hierarchical test suite structure but I'm not sure how to properly take advantage of it.
GuiUnitTesting

Martin Fowler recommends moving as much as possible out of the GUI class itself into your own classes. That way the GUI classes are a client of the code that does all the work (i.e. the code you want to test), and the GUI class becomes a thin, forwarding class that is very hard to get wrong and won't need testing itself.
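A quick sketch of that split (the LoginValidator/LoginDialog names are invented, not Fowler's example): the logic lives in a plain class you can unit test, and the GUI class merely forwards to it.

```cpp
#include <string>

// Testable logic, free of any GUI dependency.
class LoginValidator
{
public:
    bool isValid(const std::string& user, const std::string& pass) const
    {
        return !user.empty() && pass.size() >= 8;
    }
};

// Thin, forwarding GUI class: it only gathers input and delegates,
// so there is almost nothing in it that can go wrong or needs a test.
class LoginDialog
{
public:
    explicit LoginDialog(LoginValidator& validator) : m_validator(validator) {}

    void onOkClicked(const std::string& user, const std::string& pass)
    {
        if (m_validator.isValid(user, pass))
            close();
        else
            showError("Invalid credentials");
    }

private:
    void close() { /* framework call */ }
    void showError(const std::string&) { /* framework call */ }

    LoginValidator& m_validator;
};
```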
Quote:Original post by xiuhcoatl
I would not recommend using this in lieu of a formal testing process (black box/white box/functionality verification/usability) but rather as a mechanism to enhance the formal testing process.


Well, black-box and white-box are not testing processes, as I understand it. Instead, they are ways to come up with test cases. So they and unit testing are not mutually exclusive (you can't use one in lieu of the other); both apply to all testing stages. Suppose you want to do unit testing. How do you come up with test cases? Well, that's where black-box and white-box come in. You want to do integration testing. How do you come up with test cases? Same as before: black-box and white-box provide you with ideas for how to design the tests.

Just being picky about terms. [smile]

Vovan
Vovan
Thanks for the Craftsman link...

THermo
Quote:Original post by CoffeeMug
Quote:Original post by Washu
Craftsman to the Rescue!

Just finished reading the series. You have an amusing taste [smile] Anyway, one interesting thing I noticed that I didn't realize before is that I already develop like that, I just never keep the tests. Essentially, when I write code I also write a function that tests it, and I continuously refine the test along with the production code. Once I'm satisfied that the code runs fine I simply get rid of the test code and move on to the next task on my list. However, I acknowledge the numerous advantages of keeping the tests and organizing them into automated test suites. One potential problem I see is that many of the test functions I write now require user interaction (either the user has to *look* at the screen to see if the test passes or fails, or the user has to provide some input to fire off the test). In many cases I just don't think it's possible to create completely automated testing functions. What do you do about these issues?

OK, in this case I actually abstract the portion that has to do with user input. That way I can supply either a mock object that provides the input or an actual object that provides the input (see the mock object pattern). Of course, if you are designing a GUI, things get a bit more complex, but you can simulate key presses and mouse moves fairly easily.
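Something along these lines (the interface and names are invented for illustration, not taken from Washu's code): the unit under test depends only on an abstract input source, and the test supplies a scripted mock.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Abstract input source; production code depends only on this interface.
class InputSource
{
public:
    virtual ~InputSource() {}
    virtual std::string nextCommand() = 0; // empty string means "no more input"
};

// Mock used by the tests: replays a scripted sequence of commands.
class MockInput : public InputSource
{
public:
    explicit MockInput(std::vector<std::string> script)
        : m_script(std::move(script)), m_index(0) {}

    std::string nextCommand() override
    {
        return m_index < m_script.size() ? m_script[m_index++] : "";
    }

private:
    std::vector<std::string> m_script;
    std::size_t m_index;
};

// Hypothetical unit under test: counts "fire" commands from its input source.
int countFireCommands(InputSource& input)
{
    int fires = 0;
    for (std::string cmd = input.nextCommand(); !cmd.empty(); cmd = input.nextCommand())
        if (cmd == "fire")
            ++fires;
    return fires;
}

int main()
{
    MockInput input({"move", "fire", "fire", "jump"});
    assert(countFireCommands(input) == 2); // no human at the keyboard required
    return 0;
}
```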
Quote:
Another question I have is about organization of test suites. I plan to use the Boost unit testing framework but I'm not sure how to go about creating tests. Do I create a separate project or do I keep the tests in the same project as the production code? If I use the same project, how do I get rid of the test code from the production build? Preprocessor?

I tend to use a second project (I use C# mostly nowadays) and build my tests there. My reasoning is simple: a couple of my projects are a few hundred thousand lines long, so recompiling the whole thing because I changed a test, or wrote a new one, tends to be a waste of time. However, there is no reason not to keep the tests as part of the code base, the reason being that only the test runner will execute the tests. A good C++ toolchain can strip out test code that is never referenced, and even if it does remain, it costs nothing but a minor amount of space.
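If you do keep the tests in the same project, one way to drop them from a production build is a project-defined macro (the macro name here is made up):

```cpp
// Guard test code with a build-configuration macro so release builds skip it.
#ifdef MYGAME_BUILD_TESTS

#include <cassert>

void testVectorMath()
{
    // ... assertions against the production code ...
}

#endif // MYGAME_BUILD_TESTS
```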
Quote:
Also, suppose I have two independent classes A and B, and a class C whose functionality depends on A and B. I develop A, then B, then C. How do I structure the test suite? First create a test for A and add it to the test framework. Then do the same for B. Once I create C, do I move the tests for A and B into C and then add a test for C, or keep the tests for A, B and C on the same level? I see that Boost supports a hierarchical test suite structure but I'm not sure how to properly take advantage of it.

No, you keep the tests separate. The goal is to validate that A and B perform as expected. Then you write tests for C; these should validate both that C performs as expected and that its use of A and B, where publicly visible through C, behaves as expected.
However, you should use the hierarchy for modules or namespaces. This promotes both organization and clarity when reading the results.
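With Boost.Test, that roughly looks like nested suites mirroring your module layout (the suite and case names here are invented):

```cpp
#define BOOST_TEST_MODULE EngineTests
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(Math)             // suite for the Math module

BOOST_AUTO_TEST_CASE(VectorAddition)
{
    BOOST_CHECK_EQUAL(3 + 4, 7);        // placeholder assertion
}

BOOST_AUTO_TEST_SUITE_END()

BOOST_AUTO_TEST_SUITE(Physics)          // suite for the Physics module

BOOST_AUTO_TEST_CASE(CollisionDetection)
{
    BOOST_CHECK(true);                  // placeholder assertion
}

BOOST_AUTO_TEST_SUITE_END()
```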

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Quote:Original post by Washu
OK, in this case I actually abstract the portion that has to do with user input. That way I can supply either a mock object that provides the input or an actual object that provides the input (see the mock object pattern). Of course, if you are designing a GUI, things get a bit more complex, but you can simulate key presses and mouse moves fairly easily.

Properly simulating user input may get rather time-consuming, but it isn't the main problem. The main problem is evaluating the resulting action.
Quote:Original post by Washu
No, you keep the tests separate. The goal is to validate that A and B perform as expected. Then you write tests for C; these should validate both that C performs as expected and that its use of A and B, where publicly visible through C, behaves as expected.
However, you should use the hierarchy for modules or namespaces. This promotes both organization and clarity when reading the results.

I'm not sure I understand. If I have a layered design that includes three main systems, each of which includes 10-20 subsystems, and each of those includes 20-50 classes, I have a total of about 1500 classes, which means I'll have 1500 unit tests. How do I manage their *invocation*? Add 1500 invocation calls to one "StartTesting()" function? That's just ridiculous. Essentially I need to A) be able to run any test without having to run the others and B) be able to maintain test suites in a hierarchical manner. For instance, if I am testing a subsystem S5 with classes C1-C40, I need to be able to say TestS5() and have it automatically test classes C1-C40 and then test S5 itself. Now if I need to test a subsystem S6 that contains classes C35-C60, I should be able to say TestS6(). The problem now becomes that TestS6() will test classes C35-C40, which have already been tested by TestS5(). I'd imagine this may significantly increase test time, which seems important. How are these types of problems handled?
Quote:Original post by CoffeeMug
I'm not sure I understand. If I have a layered design that includes three main systems, each of which includes 10-20 subsystems, and each of those includes 20-50 classes, I have a total of about 1500 classes, which means I'll have 1500 unit tests. How do I manage their *invocation*? Add 1500 invocation calls to one "StartTesting()" function? That's just ridiculous. Essentially I need to A) be able to run any test without having to run the others and B) be able to maintain test suites in a hierarchical manner. For instance, if I am testing a subsystem S5 with classes C1-C40, I need to be able to say TestS5() and have it automatically test classes C1-C40 and then test S5 itself. Now if I need to test a subsystem S6 that contains classes C35-C60, I should be able to say TestS6(). The problem now becomes that TestS6() will test classes C35-C40, which have already been tested by TestS5(). I'd imagine this may significantly increase test time, which seems important. How are these types of problems handled?

...
Each system is a member of the hierarchy, along with each subsystem. Hence your tests would be organized to follow that same tree.
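As a sketch of what that tree can look like with Boost.Test (the system/subsystem names are invented), nesting suites per system and subsystem means you never write a monolithic StartTesting(); the framework's command-line filter (something like --run_test=Renderer/Textures) can then run just one branch.

```cpp
#define BOOST_TEST_MODULE AllSystems
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(Renderer)              // system

BOOST_AUTO_TEST_SUITE(Textures)              // subsystem

BOOST_AUTO_TEST_CASE(TgaLoader)              // class-level test
{
    BOOST_CHECK(true);                       // placeholder assertion
}

BOOST_AUTO_TEST_SUITE_END()                  // Textures

BOOST_AUTO_TEST_SUITE_END()                  // Renderer
```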

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Hmm, wasteful testing of shared classes and the lack of selective execution seem to be problems specific to the Boost testing framework. Selective testing is easy to hack in, but I'm not sure how to get around testing classes shared across subsystems multiple times. It's not so important for my project right now as it's currently fairly small, but it may become a problem in the future... I've e-mailed the Boost.Test maintainer to see if he has solutions in the pipeline.

