How do I unit test complicated ideas?

5 comments, last by frob 8 years, 3 months ago

The problem is largely that to write test cases, I would have to manually work out quite a lot of things that really I am relying on a computer to process in the first place.

My troubles with this are around writing some code for collision detection. Sphere-Polygon collision detection.

A good example is given by this: http://www.peroxide.dk/papers/collision/collision.pdf

It's a guide for writing the kind of thing I want, complete with code at the end. This algorithm combines several functions: rays intersecting planes, spheres intersecting lines, and all that. I could run unit tests over those, sure, it's just a heap of maths. But the algorithm on the whole actually has 2 mistakes. Both bugs are of a nature such that I can't think of a way to test for them without actually being aware of them before they happen. And that's why I haven't described what those 2 bugs are; to demonstrate how hard/unreliable it is to think we can predict every mistake in an algorithm.

So I still don't know how to design an algorithm this way without being at the mercy of the bug that appears in 2 weeks. How can I test this stuff in a smart way?

Thanks :)


You should try to structure your code in a way that would allow you to easily insert and remove modules of code. If you want to try out some new collision detection, ideally you would be able to just insert it into your existing collision handling system. Then if it doesn't work, pop it back out and try something else.

For non-graphics code, I will usually make a standalone console app and test/fine-tune the code there before putting it into my project. That way I'm not hacking at my other code trying to get it to work properly; I already know it will just plug in.

Hope that sort of made sense.

How can I test this stuff in a smart way?
The key is often that you want correct code rather than fast code in a unit test.

Think of a stupid way to code the collision detection that you simply cannot fail to implement correctly. For example: there is no collision if all points are on the same side of a line. Then literally implement that idea: enumerate all points, check which side each point is on, and check whether they are all on the same side. If so, there is no collision.
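As a sketch of that "too stupid to get wrong" reference check, here is one possible 2D version; the function name and the decision to treat points exactly on the line as "same side" are my own illustrative choices, not from any particular engine:

```python
# Deliberately naive 2D check: all points lie on the same side of the line
# through line_a and line_b if the cross products of (line direction) x
# (point - line_a) all share a sign. Slow and obvious on purpose.
def all_on_same_side(line_a, line_b, points):
    ax, ay = line_a
    bx, by = line_b
    dx, dy = bx - ax, by - ay
    signs = []
    for px, py in points:
        cross = dx * (py - ay) - dy * (px - ax)
        if cross != 0:              # points exactly on the line count as either side
            signs.append(cross > 0)
    return all(signs) or not any(signs)
```

Because every step is a one-liner you can verify by hand, this makes a trustworthy oracle to compare a cleverer algorithm against.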

Maybe you need a different check, but the point is that you invent a second, independent implementation, and now you have two algorithms to compare against each other. Two algorithms both giving the same wrong answer is less likely than one algorithm giving a wrong answer, so whenever the answers differ you know that one of the algorithms is wrong, just not which one (although the smart/fast one is the more likely culprit in general :P )
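This comparison idea can be automated by feeding both implementations many random inputs and flagging any disagreement. A minimal sketch, using an assumed point-in-sphere query as the example (both function names are invented for illustration):

```python
import random

def naive_point_in_sphere(p, center, radius):
    # Obvious reference version: compute the actual distance.
    dist = sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
    return dist <= radius

def fast_point_in_sphere(p, center, radius):
    # "Clever" version under test: avoids the sqrt via squared distances.
    return sum((a - b) ** 2 for a, b in zip(p, center)) <= radius * radius

def differential_test(trials=10_000, seed=0):
    # Compare the two implementations on random inputs; return the first
    # input on which they disagree, or None if they always agree.
    rng = random.Random(seed)
    for _ in range(trials):
        p = [rng.uniform(-10, 10) for _ in range(3)]
        c = [rng.uniform(-10, 10) for _ in range(3)]
        r = rng.uniform(0, 5)
        if naive_point_in_sphere(p, c, r) != fast_point_in_sphere(p, c, r):
            return (p, c, r)   # a disagreement: at least one of them is wrong
    return None
```

A returned disagreement is a concrete failing input you can then turn into a permanent regression test.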

It is difficult to write a test for each corner case in advance because, like you said, you typically don't discover them until after the fact. Once an issue like this is discovered I find that the best thing I can do is write a test for it which initially fails, then correct the bug so that the test passes. Going forward this test will ensure that if the bug is reintroduced it will be identified before it makes it into production or QA.
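That "freeze the bug as a test" step might look like the sketch below. Everything here is hypothetical: the function is a drastically simplified stand-in (the triangle is approximated by the infinite plane y = 0) and the bug number is invented; the point is only that the exact failing input from the bug report gets recorded verbatim:

```python
def sweep_sphere_vs_plane_y0(center, radius, velocity):
    # Simplified stand-in for a full swept sphere-vs-triangle test:
    # the geometry is just the plane y = 0. Returns the time of impact
    # in [0, 1], or None if the sphere never touches the plane.
    cy, vy = center[1], velocity[1]
    if vy >= 0:
        return None
    t = (cy - radius) / -vy      # time when the sphere surface reaches y = 0
    return t if 0.0 <= t <= 1.0 else None

def test_regression_issue_1234():
    # Input recorded from the original bug report; don't "clean up" the numbers.
    hit = sweep_sphere_vs_plane_y0(
        center=(0.0, 3.0, 0.0), radius=1.0, velocity=(0.0, -4.0, 0.0))
    assert hit == 0.5, "sphere must not tunnel through the floor (bug #1234)"
```

The test fails before the fix, passes after, and keeps guarding against the bug's return forever after.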

In general unit tests are not intended to test complex things. They should be testing small units of code with simple assertions, ideally one assertion per test. Sometimes this rule can be bent when to do otherwise is not practical, but that's not usually the case when code is well factored.
Make things as small as possible and testable on their own. This might sound "challenging", but treating it as an 80/20 guideline can help.

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

Tests help discover bugs WHILE developing; for math it is more complicated.

If you have a complete, designed algorithm, it should also be tested as a whole algorithm.

Otherwise you are effectively testing 1+1: the outcome of plain math is straightforward, and it only becomes complicated once you wrap it in logic.

So you should have some class X with a DetectCollision(X other) function.

And then you test for your 2 bugs and any other cases you think are worthwhile.

What you describe is a code smell.

You should never need to unit test a complex idea. Unit tests are for small ideas; they test at the smallest level, a single unit. If a single unit of your code is complex, that is a good sign you need to stop what you are doing and figure out a better approach.


An excellent book on the subject is "xUnit Test Patterns". It is a large tome, but it breaks down many different common patterns you can use. Also there are many good resources out on the Internet to learn about Test Driven Development.

There are many possible issues, and not knowing the code, we can only guess at what yours are. You may have violated the Single Responsibility Principle for your class or your function; a class should have a single responsibility, and a function should do only a single task of that single responsibility. You may have too tight coupling between classes. You may need to separate steps of logic into smaller internal units. You may have a poorly written interface that requires too much external configuration. You may not be writing unit tests at all, but a different type of test.


Of all of those, writing automated tests other than unit tests and calling them unit tests is the most frequent culprit for beginners.

If you really do need to do something complex then it probably is not a unit test. A unit test verifies the smallest unit of work, hence the name: you are testing a single unit of work. A unit test should run nearly instantly. A unit test should not touch any other systems, and if a function needs to know about another system, that should be handled with mock objects, fake objects, proxies, or similar alternatives. If running a test means the system under test touches a file system, a database, a network, or any other hardware, then that test is not a unit test. Typically a unit test is a very short set of 2-3 lines setting up parameters to pass to a single function, then the single function call being tested, then a set of assertions about that call. Unit tests should be as close to instant as possible, on the order of tens of thousands of unit tests run per second. If a single test takes milliseconds or longer to run, it is far too slow to be a proper unit test.
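The 2-3 line shape described above, applied to one of the small math pieces from the collision paper, might look like this sketch; `ray_plane_t` is a hypothetical helper invented here for illustration:

```python
def ray_plane_t(origin, direction, plane_normal, plane_d):
    # Solve n . (o + t*d) + D = 0 for t; returns None for parallel rays.
    denom = sum(n * di for n, di in zip(plane_normal, direction))
    if denom == 0:
        return None
    num = -(sum(n * o for n, o in zip(plane_normal, origin)) + plane_d)
    return num / denom

def test_ray_hits_floor():
    # Setup, single call, single assertion -- the whole unit test.
    t = ray_plane_t(origin=(0, 2, 0), direction=(0, -1, 0),
                    plane_normal=(0, 1, 0), plane_d=0)
    assert t == 2.0
```

Each such test exercises exactly one function with no other systems involved, so thousands of them can run in well under a second.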

Generally component tests are able to run a small system through its paces by making many calls within the system to exercise the component as a whole, but still use mocks and fake objects for other layers. These should be a little slower than instant. If it takes more than a few seconds to run a suite of component tests on a component, it is generally too long. Component tests should not cross any boundaries, not touch networks or file systems or otherwise hit anything other than the component.
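One common way to keep such tests off the real file system is to code against a file-like interface and hand the test an in-memory fake. The loader below is invented purely to illustrate the shape of that substitution:

```python
import io

def load_vertex_count(stream):
    # Hypothetical loader: assumes the first line of the file holds
    # the vertex count. Takes any file-like object, so tests never
    # need to touch the disk.
    return int(stream.readline())

def test_loader_with_fake_file():
    # An in-memory StringIO stands in for a real mesh file on disk.
    fake_file = io.StringIO("3\n0 0 0\n1 0 0\n0 1 0\n")
    assert load_vertex_count(fake_file) == 3
```

The production code path later passes a real open file; the test stays fast and crosses no boundaries.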

Integration tests tend to be the slow beasts of automated tests. They hit the outer edge of a system and trigger all the real work to be run. They can be written to hit disks and networks and other hardware, since their job is to ensure the end-to-end functionality, and consequently generally take many minutes to run, or on large mature code bases, several tens of minutes to run.

Tests are code. Learning to write good tests is a skill that needs to be developed and honed, just like learning to write any other kind of code requires effort to learn and improve at.

And that's why I haven't described what those 2 bugs are; to demonstrate how hard/unreliable it is to think we can predict every mistake in an algorithm.

Nobody expects that. That is generally not what unit tests are for.

Tests can have many purposes.

The most typical purpose is to ensure permanence, or in other words, to ensure that if somebody maintaining the code modifies its expected behavior the tests will fail. In longer-term development this is an amazing thing. It allows people to modify the underlying code, to swap out functionality with faster internal details, and the collection of tests will show that it behaves the same way. There may have been defects in the existing code, and some of those defects may even have tests written to ensure the behavior remains exactly the same.

Next, tests generally serve as examples of how to use the code. In test-driven development the developer must first establish how the interface is going to be used, then fill in the details of what it does. When code is mindfully written, this helps ensure the interface is easy to use, rather than coding up a system and discovering in practice that it is difficult to use and needs to be rewritten on day zero.

Ensuring correctness is a common purpose of unit tests, but as you point out, we all know some cases will slip through. Well-written tests will catch many of these. Good practice means the developer initially writes a few ideal-path and failure-path tests that cover the obvious things they are thinking about; then, over time, each discovered defect results in a new test that exposes it, the defect gets corrected, and the test becomes a new ideal-path test... except this time the test starts with a comment linking it to a bug number.

Over time, as the code becomes mature and gets maintained for years, there will be an ever-growing suite of tests that ensure permanence of the correct behavior. The tests will automatically regress the thousands of bugs in the bug database every time the code is built, and if someone makes a change that would re-introduce the bug the test will fail because it broke the permanence requirement.

This topic is closed to new replies.
