Is this really Test Driven Development or am I barking up the wrong tree?

6 comments, last by markr 11 years, 6 months ago
Recently I've been reading up on TDD in response to a job posting, and I have to say it's very interesting. After looking at some online sources and also the "Test Driven Development by Example" book I thought I'd try some on a different example by myself. I have summarised the proceedings here http://www.lloydcrawley.com/are-you-down-with-the-tdd-test-driven-development-experiments/ and was wondering is this really the 'TDD Method'? Or am I just kidding myself?
That's the theory. In practice almost no one uses it, but people like to talk about it. It drives your attention away from good code structure and logic; you begin to cheat in order to make the tests pass, and that is not good at all.

In practice almost no one uses it, but people like to talk about it. It drives your attention away from good code structure and logic; you begin to cheat in order to make the tests pass, and that is not good at all.

This is just not true. Granted, there is a lot of talk about TDD and it's thrown around a lot in management circles, but TDD is used in real life. Maybe not so much in the games industry, but in the B2B market it's alive and kicking. Designing and writing tests is hard work and needs a lot of skill, but once you and your team are used to it, it enhances the quality of your code. "Cheating" around a good set of tests can be almost as hard as writing clean code that fulfils all the tests. On the other hand, the creed of TDD is that once you pass all the tests your task has succeeded, so there is technically no cheating possible. Since you will extend your test set in the future, your cheats will fail to pass all the tests at some point, which will force you to fix your hacks.
TDD is all about getting correct behavior from a program; it doesn't prevent any kind of ugly code or design.
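
For example, here is a minimal sketch (a hypothetical add() function with plain asserts rather than a real test framework) of how a growing test set flushes out a cheat:

#include <cassert>

// First pass "cheat": hard-coded to satisfy the only test that existed.
// int add(int a, int b) { return 3; }

// Once the test set grows, the cheat can no longer pass, which forces a
// real implementation:
int add(int a, int b) { return a + b; }

int main() {
    assert(add(1, 2) == 3);   // the original test; the hard-coded cheat passed this
    assert(add(2, 2) == 4);   // added later; kills the cheat
    assert(add(-1, 1) == 0);  // added later; kills the cheat
    return 0;
}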

That's the theory. In practice almost no one uses it, but people like to talk about it. It drives your attention away from good code structure and logic; you begin to cheat in order to make the tests pass, and that is not good at all.


If I've got the basic theory down, then I'm happy. I've started looking into it in response to a job ad I've applied for that has "proficient with Test Driven Development" as a requirement. The job ad is here: http://jobs.gamasutra.com/jobs/131931-31402/Kojima-Productions-in-Japan-Animation-Engine-Programmer-Kojima-Productions-Japan-HQ-Konami-Group-To-JPN?keywords=japan
There's a subtle problem behind the philosophy of test driven development. It reminds me of set theory, so I'll use that to illustrate the issue.

Suppose you create a set of tests which your code needs to pass:

TDD = {A,B,C,D,E,F}

You write code which passes each of the tests in the set. However, the set was an incomplete set of tests: you also needed to add tests {G, H, I, J, ???}. Unfortunately, these tests are part of another set called the "unknowns", which will always exist (and which is composed of two subsets, the "unknowable unknowns" and the "knowable unknowns"). Sometimes these unknowns are simply the result of a lack of imagination on the part of a single test writer. Getting multiple people to give input on possible test cases can mitigate this, but there can still be unimaginable test cases which occur in reality. Both the size and the contents of the unknown set are a complete mystery.

So, the complete set of tests is a combination of sets:

Complete Set = {TDD set} ∪ {??? set}

The result is that even if you religiously follow your TDD practice with perfectly written test cases, it's not a guarantee that the developed software will be free of errors. In practice it yields good results and is a good methodology for generating quality code, but don't drink too deeply of the Kool-Aid.

Real-world example: one of the space shuttles burned up in the atmosphere on re-entry because chunks of foam damaged the heat tiles during lift-off. Nobody imagined that foam travelling at high velocity would have any effect on the shuttle chassis.
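
To make the set analogy concrete, here is a minimal sketch (a hypothetical safe_divide() function, plain asserts): the whole TDD set passes while a member of the unknown set still lurks.

#include <cassert>

// Hypothetical function under test.
int safe_divide(int a, int b) { return a / b; }

int main() {
    // The "TDD set": every case the test writer imagined. All pass.
    assert(safe_divide(10, 2) == 5);
    assert(safe_divide(9, 3) == 3);
    assert(safe_divide(-8, 4) == -2);
    // A member of the unknown set, discovered only in the field:
    // safe_divide(1, 0) is undefined behavior, never tested because
    // nobody imagined a zero divisor.
    return 0;
}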

The result is that even if you religiously follow your TDD practice with perfectly written test cases, it's not a guarantee that the developed software will be free of errors. In practice it yields good results and is a good methodology for generating quality code, but don't drink too deeply of the Kool-Aid.


That is exactly why TDD is hard work and needs quite a bit of experience to yield good results. Having good specifications for what the software should be able to do and where its limits are is also essential here.

The set of tests needs extending over time to accommodate those "unknowable unknowns" as soon as they become known. Two of the most common mistakes I've seen in teams using TDD were either neglecting to extend the test set, or, instead of extending it, just changing the existing tests so that they passed again. Writing tests should become an ordinary part of programming for the developers, not something done only in a separate sprint or only up front.
Luckily, unlike in the space shuttle example, software is usually not destroyed by a critical error and can be patched to work. :)
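
Continuing the hypothetical safe_divide() sketch from above: once the zero-divisor failure becomes known, the test set grows and the code gets fixed, rather than the existing tests being bent until they pass again.

#include <cassert>

// Fixed after the failure in the field: a zero divisor now has a defined result.
int safe_divide(int a, int b) { return b == 0 ? 0 : a / b; }

int main() {
    assert(safe_divide(10, 2) == 5);  // the original tests stay untouched...
    assert(safe_divide(-8, 4) == -2);
    assert(safe_divide(1, 0) == 0);   // ...and the former unknown becomes a permanent regression test
    return 0;
}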



I've started looking into it in response to a job ad I've applied for that has "proficient with Test Driven Development" as a requirement.


Quick word of warning here: as with many development topics, getting the theory right might not yet make you "proficient with Test Driven Development".
Also beware that many business types misuse the term "test driven development". In software circles, it means "write the tests first, and do no more work than is necessary to pass those tests."

In business circles, it very often means "developers write automated unit tests".
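
A minimal sketch of the software-circles meaning (a hypothetical is_even() function, plain asserts rather than a real test framework):

#include <cassert>

// Step 2 (green): the least code that makes the tests below pass.
// Step 1 (red) was writing main() first; it didn't even compile until
// this function existed.
bool is_even(int n) { return n % 2 == 0; }

int main() {
    // Step 1 (red): these tests were written before is_even() existed.
    assert(is_even(4));
    assert(!is_even(7));
    // Step 3 (refactor): clean up the implementation, with the tests as
    // a safety net.
    return 0;
}

Note that the tests shape the function's signature before any implementation exists: that is the point of writing them first.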
In industry, we DO use TDD. Just not exclusively for everything.

In some respects, it's more about thinking through how your code will be used. Writing the tests first can help, because it lets you design the API cleanly, in a way which works from the point of view of the test harness.

Unfortunately, unit tests are not a panacea:

  • The best set of well-designed, well-thought-out, peer-reviewed unit tests will never find a serious bug which appears instantly once a customer uses the software.
  • Some engineers are tempted to "over-engineer" their unit tests - mocking lots of other components, etc. - and end up with a fabulously complicated, brittle unit test which really does very little testing (see the sketch after this list).
  • Unit tests CAN slow down development, chiefly because every time an engineer breaks them, he has to spend time diagnosing why they are broken. This is especially true if they are brittle or unreliable.
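
A minimal sketch of that over-mocking trap (hypothetical names, hand-rolled fakes rather than a real mocking framework):

#include <cassert>

// Hand-rolled fakes standing in for every collaborator...
struct FakeClock  { int now() const { return 42; } };
struct FakeLogger { void log(const char*) {} };

// Function under test, templated so the fakes can be injected.
template <typename Clock, typename Logger>
int timestamp(const Clock& clock, Logger& logger) {
    logger.log("stamping");
    return clock.now();
}

int main() {
    FakeClock clock;
    FakeLogger logger;
    // ...and the assertion merely echoes back what the fake was told to
    // return: lots of scaffolding, almost no real behavior tested.
    assert(timestamp(clock, logger) == 42);
    return 0;
}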
