Unit Tests and Test-Driven Development

While I appreciate the defense, he wasn't quoting me.

In any case, threading does get mentioned eventually in them as something you need to watch for in your unit tests when you actually need multiple threads. There's a character in the series who gets quite snooty about it toward the middle-to-end of the current batch.
I've read up to Craftsman 31, and as far as I can tell, my real problem is that, at heart, I work like a hacker - that is, I start with a vague idea of what I want to do, and the requirements are made up as I notice the code doesn't do something I want it to do.

For example, I started the Lisp project by writing the two core objects of Lisp - the Atom and the Pair (cons cell). Since Pairs store pointers, I needed some way to allocate new objects, so I wrote MakeAtom(string) and MakePair(LispObject, LispObject). I needed some way to delete objects at the end of the program, so I made the two functions add new objects to a list. Then I wrote Evaluate and noticed that it can potentially create a lot of objects with all its recursion, so I decided to write a stack-based memory manager that could clean up as calls to Evaluate returned.

Then I started testing Evaluate and did TONS of debugging based on a few example programs in the paper it was all based on. Too much of the debugging was of the tests themselves, and I noticed I was having a hard time testing because I would often accidentally construct lists incorrectly, so I created a class that made it easier to construct lists (and spent a bit of time debugging a case where the copy constructor was called when I expected a conversion and a regular constructor). Then I remembered that I probably wanted to implement a 'Read' function as well, one that could convert a string into one or more LispObjects in order to construct a REPL. I did that, and testing became trivial because I could write the tests as text instead of convoluted series of MakePair and MakeAtom calls.

Finally, I spent a ton of time debugging, trying to figure out why Evaluate would occasionally recurse indefinitely, and found that an error condition wasn't being handled correctly and that it should basically throw an exception to get out of the recursion. I still haven't finished the project, but I did the whole "make it work" thing, and now it just needs about ten times as many man-hours spent on refactoring as I've already spent creating it.
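To give a flavour of that last step, here is a minimal sketch of the kind of test that Read made easy - plain asserts, no particular test framework, with the input and expected result written as text instead of nested MakePair/MakeAtom calls. The signatures shown and the Print helper are simplifications, not necessarily the real interface:

```cpp
// Roughly the kind of test that becomes possible once Read exists: the input
// and expected result are plain text instead of nested MakePair/MakeAtom
// calls. Signatures are simplified and Print is a stand-in pretty-printer.
#include <cassert>
#include <string>

class LispObject;                                    // from the interpreter
LispObject* Read(const std::string& source);         // text -> object
LispObject* Evaluate(LispObject* expression);        // reduce to a value
std::string Print(LispObject* object);               // object -> text

void TestCarOfQuotedList()
{
    LispObject* program = Read("(car (quote (a b c)))");
    LispObject* result  = Evaluate(program);
    assert(Print(result) == "a");
}
```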

It's rather obvious that in order to use unit tests and test-driven design properly, you really have to know what (most of) the requirements are up front, but I have no idea how to break a large idea into small, testable requirements. I'm so used to "design by accident" (I think that's the proper term for 'not really designing') that I just don't see the trees that make up the forest =-/

I have a specific hobby project in mind that I know I'm going to need this kind of thing for (because it being absolutely correct is vital for it to be worth anything). What is the best way to tackle this mental block?
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote:Original post by Extrarius
I have a specific hobby project in mind that I know I'm going to need this kind of thing for (because it being absolutely correct is vital for it to be worth anything). What is the best way to tackle this mental block?


I found myself in this situation a while ago. I found a quite simple solution: I decided not to code anything. Instead, I spent 4€ on a small blue notebook and went out to a park for a few hours whenever I could, so I could think about the program. I wrote the pseudocode, complexities, and proofs of correctness of my algorithms, looked for the shortest and smallest designs that could achieve what I wanted, and tried to isolate a few typical use cases to see if improvement was possible. Then I wrote the corresponding program in a single afternoon, and it worked on the first try (aside from a few typos that messed things up, and a few places where I translated the concepts from the notebook badly, but those came up within the first hour of testing).
Quote:Original post by Extrarius
It's rather obvious that in order to use unit tests and test-driven design properly, you really have to know what (most of) the requirements are up front, but I have no idea how to break a large idea into small, testable requirements. I'm so used to "design by accident"(I think that is the proper term for 'not really designing') that I just don't see the trees that make up the forest =-/


I had a much longer post, and then a server error killed it. Dammit.

Anyway, this is not true at all. The entire point of TDD is to guide the development process. You *have* to change how you write code; that's what drove you to TDD in the first place: your old way of writing code was leading you to write bugs. You need to rethink your development process to emphasize pre- and post-conditions so that you can test those conditions.

The key is that, once a test is written, it doesn't go away. You can use tests as reminders of code that you need to write, or of tests that you need to write but haven't figured out how to write yet. You can use tests as reminders of how to *use* code. You can use tests to show where big changes in code break old functionality. A test sticks around and still serves a purpose, even after you think the thing it tested is long since solved.
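For instance, a deliberately failing placeholder test works perfectly well as a reminder. This is just a sketch with a plain assert, and the scenario it names is made up for illustration:

```cpp
// A placeholder test as a reminder: it fails on purpose until the error
// handling it describes actually exists. The scenario is invented here
// purely as an example.
#include <cassert>

void TestReadRejectsUnbalancedParens()
{
    // TODO: decide whether Read should throw or return an error object here.
    assert(false && "not implemented: error handling for \"(a (b\"");
}
```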

Here is an article I wrote on TDD a while ago. It's short, it's meant to be only one page. I use it for evangelism in the office.

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

The ideas for my projects work kind of like a wave function - as long as an idea exists in its natural state, there is a vast number of possibilities that form a nebulous cloud, and it's not until I'm actually coding a feature that the idea collapses into a single value that is the implementation I actually go with.

The problem is that I don't know how to describe the nebulous entity in any way that is testable, and once I get to the collapsed version, "design by accident" is already taking place.

On top of that, since the project in my mind is a complex type of game, I have a hard time visualizing how it could be easily broken down into automated tests even if I did have the whole thing worked out as a concrete system, because the number of possible states is practically unlimited and the number of actions in each state is very large. How can I "design-by-testing" a specialized scripting engine when the small pieces don't do anything special but the system as a whole does something unique? I can certainly test the foundation to see that it works "as advertised", but that still doesn't show that the higher levels work as they must.
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Quote:Original post by Extrarius
I've read up to Craftsman 31, and as far as I can tell, my real problem is that, at heart, I work like a hacker - that is, I start with a vague idea of what I want to do, and the requirements are made up as I notice the code doesn't do something I want it to do.

For example, I started the Lisp project by writing the two core objects of Lisp - the Atom and the Pair (cons cell). Since Pairs store pointers, I needed to make some way to allocate new objects, so I wrote MakeAtom(string) and MakePair(LispObject, LispObject)......



So to convert this to test-driven development, make it read as such:
I needed to make some way to allocate new objects, so I wrote tests for MakeAtom(string) and tests for MakePair(LispObject, LispObject), then implemented them. TDD means that just before you decide to implement X, you stop and write tests for X.
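A rough sketch of what that looks like in practice, using plain asserts. The struct below is only an assumed shape for LispObject so the sketch compiles, and the member names are made up; substitute whatever the real interface is:

```cpp
// Tests written *before* MakeAtom/MakePair exist. The tests pin down the
// behaviour; only then do you implement the two functions.
#include <cassert>
#include <string>

struct LispObject                       // assumed minimal shape, not the real one
{
    std::string value;
    LispObject* first  = nullptr;
    LispObject* second = nullptr;
};

LispObject* MakeAtom(const std::string& text);             // not implemented yet
LispObject* MakePair(LispObject* head, LispObject* tail);  // not implemented yet

void TestMakeAtom()
{
    LispObject* atom = MakeAtom("foo");
    assert(atom != nullptr);
    assert(atom->value == "foo");
}

void TestMakePair()
{
    LispObject* a = MakeAtom("a");
    LispObject* b = MakeAtom("b");
    LispObject* pair = MakePair(a, b);
    assert(pair->first == a);
    assert(pair->second == b);
}
```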
Quote:Original post by Extrarius
The ideas for my projects work kind of like a wave function - as long as it exists in it's natural state, there are a vast number of possibilities that form a nebulous cloud, and it's not until I'm actually coding a feature that the the idea collapses into a single value that is the implementation I actually go with.

The problem is that I don't know how to describe the nebulous entity in any way that is testable, and once I get to the collapsed version, "design by accident" is already taking place.
Well, you can write it the way you currently do, but only define the methods on a class without writing their code. You can still have a mutating design, but it can evolve much quicker, and then when it settles down you can actually write the code.
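Something along these lines - declarations only, no bodies, so the design stays cheap to change. The class and method names here are purely illustrative:

```cpp
// Interface-only skeleton: the design can still mutate freely because nothing
// is implemented yet. Names are illustrative, not taken from the actual project.
#include <string>

class LispObject;

class Interpreter
{
public:
    LispObject* Read(const std::string& source);      // text -> object(s)
    LispObject* Evaluate(LispObject* expression);     // reduce to a value
    std::string Print(LispObject* object);            // object -> text
private:
    // storage / cleanup strategy deliberately left undecided for now
};
```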

Quote:Original post by Extrarius
The ideas for my projects work kind of like a wave function - as long as it exists in it's natural state, there are a vast number of possibilities that form a nebulous cloud, and it's not until I'm actually coding a feature that the the idea collapses into a single value that is the implementation I actually go with.

The problem is that I don't know how to describe the nebulous entity in any way that is testable.

When doing TDD I find I get much better results by deliberately ignoring the design I've already formed in my head and focusing on the tests for small yet core pieces of functionality first. You don't say "I need to write a Parser class to split up expressions", you say "I need a way of splitting up a string into individual expressions" even if something in the back of your head is screaming at you that you'll need a Parser class. Then you write a test for that and see what it wants - maybe a new class, maybe you can get away with some free functions, maybe something completely different. Then you build on this and keep an eye out for possible refactorings with other small, related tests.
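As a hedged example, the first test for "split a string into expressions" might be nothing more than this. The free function and its return type are invented on the spot by the test; whether they survive is exactly what the test is there to decide:

```cpp
// Start from the behaviour, not the Parser class you suspect you'll need.
// SplitExpressions and its return type are placeholders proposed by the test.
#include <cassert>
#include <string>
#include <vector>

std::vector<std::string> SplitExpressions(const std::string& source); // shape TBD

void TestSplitTwoExpressions()
{
    std::vector<std::string> parts = SplitExpressions("(+ 1 2) (car x)");
    assert(parts.size() == 2);
    assert(parts[0] == "(+ 1 2)");
    assert(parts[1] == "(car x)");
}
```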

To borrow your wave analogy - you use the tests to collapse the cloud down to a specific wave, which may or may not give you the same design and overall implementation that you were thinking of originally.

I find that if the problem and area are familiar, then when I've "collapsed" my ideas down into something concrete via TDD, it'll be pretty close to what I was originally imagining. If it's something new and unfamiliar, the result can often be radically different. But it's almost always a better design (plus it's got tests now!).

If you just try to stick to your original design and throw some tests around it, you're going to have a hard time. You've got to be willing to ignore the big picture and focus on the current tests, confident that if you go down a dead end temporarily you can refactor your way out of it (with the tests ready to tell you if you mess up your refactoring).

