TDD and predefined models

11 comments, last by Dospro 7 years, 6 months ago

Hi everyone.

I'm starting to get more involved in test-driven development (TDD), but I have a question.

Suppose you want to build a lexical parser. Using the TDD approach, I start with a test that checks that there is a lexical parser (a class).

Then maybe a test for a function which, given some text, returns the simplest token. Refactor. And so on...
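For concreteness, that first step might look something like this in Python (just a sketch; "mylexer", the Lexer class, and the token format are made-up names, and the whole point is that it fails until the production code exists):

```python
import unittest

# "mylexer", "Lexer", and the token tuple format are hypothetical names used
# only for illustration. Running this before any production code exists fails
# with an ImportError, which is exactly the "red" step.
from mylexer import Lexer


class TestLexerExists(unittest.TestCase):
    def test_a_lexer_can_be_constructed(self):
        # First test: there is a lexical parser (a class).
        self.assertIsNotNone(Lexer())

    def test_simplest_token(self):
        # Next test: given some text, it returns the simplest token.
        self.assertEqual(Lexer().next_token("x"), ("IDENTIFIER", "x"))


if __name__ == "__main__":
    unittest.main()
```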

Now, most of us know that one way to create a lexical parser is to implement a finite state automaton (FSA).

So... when will TDD get me to a tested FSA?

Using TDD, when and how can you decide to use an already established pattern instead of reinventing the wheel?


I'm not sure I understand your question. TDD has nothing to do with whether you use an already defined pattern or not. You write your tests to check for expected outcomes against a mock. They should all fail. Then you write your code, whether it's an FSA or not, to make the tests pass.

So... when will TDD get me to a tested FSA? Using TDD, when and how can you decide to use an already established pattern instead of reinventing the wheel?

One of the problems of development is deciding when to stop. Requirements are underspecified or partly overspecified; design documents may go into irrelevant details, or define parts that you don't actually need.

TDD in its pure form (I have no experience with it, so I am explaining things from a theoretical perspective) aims to solve this development problem by using tests as the reference. You write tests that cover all functionality that you must have. Then you write your software, as usual, using whatever you need to make it happen. TDD then defines that you should stop development as soon as all tests pass.

(Tests cover all required functionality, all tests pass, ergo, you have all required functionality. You're done!)

So TDD (in pure form) doesn't say how you should solve your software problem; it only defines how to decide when you should stop writing code.

Ok ok. I got that.

I asked this because the TDD approach specifies 3 rules:

1. Don't write production code, unless it is for making a test pass.

2. Write a minimal failing test (the least code to make a test fail)

3. Write just enough code to make the test pass

Of course it is missing the "after writing production code that passes, refactor".

So using that approach, I have to write one test, make sure it fails, then write the production code that passes that test, refactor and repeat.

Being strict, I cannot write a bunch of failing tests at once (I don't know if that would be more practical), just one test at a time verifying a single piece of functionality.

Anyway, thinking about this, it occurred to me that maybe after having a bunch of tests passing, I could then build the FSA as a refactoring step.
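Something like this is what I have in mind for that refactoring step (just a rough sketch with made-up names, not real code from my project): once a few token kinds pass with ad-hoc if/else code, the scanning logic gets pulled into a small transition table while all the existing tests stay green.

```python
# Rough sketch only: names and token kinds are made up.
# States for a tiny lexer that, so far, recognizes identifiers and integers.
START, IN_IDENT, IN_NUMBER = "start", "in_ident", "in_number"


def char_class(ch):
    """Map a character to the class used by the transition table."""
    if ch.isalpha() or ch == "_":
        return "alpha"
    if ch.isdigit():
        return "digit"
    return "other"


# Transition table: (state, character class) -> next state.
TRANSITIONS = {
    (START, "alpha"): IN_IDENT,
    (START, "digit"): IN_NUMBER,
    (IN_IDENT, "alpha"): IN_IDENT,
    (IN_IDENT, "digit"): IN_IDENT,
    (IN_NUMBER, "digit"): IN_NUMBER,
}

# Accepting states mapped to the token type they produce.
ACCEPTING = {IN_IDENT: "IDENTIFIER", IN_NUMBER: "NUMBER"}


def next_token(text):
    """Return (token_type, lexeme) for the longest match at the start of text."""
    state, length = START, 0
    for ch in text:
        new_state = TRANSITIONS.get((state, char_class(ch)))
        if new_state is None:
            break
        state, length = new_state, length + 1
    if state in ACCEPTING:
        return ACCEPTING[state], text[:length]
    return None
```

With that in place, next_token("foo42 + 1") gives ("IDENTIFIER", "foo42"), and adding a new token kind later should mostly be a matter of adding rows to the tables.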

What do you think?

You create the test that fails first. This ensures that your test actually tests the thing. Without that rule, people write tests that don't actually test the thing they expect; such a test passes the first time they run it because the test is wrong, and it keeps passing forever because it tests the wrong thing.
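As a made-up illustration of the kind of mistake that rule guards against: the first test below can never fail, so it "passes" on its first run even though the code under test is wrong, while the version that is written first and watched to fail catches the bug.

```python
import unittest


def classify(character):
    """Toy function under test: classify a character for a lexer."""
    # Bug: digits are misclassified as letters.
    return "letter"


class TestClassify(unittest.TestCase):
    def test_digit_broken(self):
        result = classify("7")
        # Broken test: it asserts the result against itself, so it can
        # never fail and never notices the bug above.
        self.assertEqual(result, result)

    def test_digit_correct(self):
        # Written first and seen to fail, this version exposes the bug.
        self.assertEqual(classify("7"), "digit")


if __name__ == "__main__":
    unittest.main()
```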

The "minimum" amount for tutorials about how to use TDD tend to be oversimplified. The goal I aim for personally is a 3-5 minute window. If it is functionality I can write in 3-5 minutes, and it is a single piece of unit testable code, that makes a good test. If it isn't unit testable for various reasons --- maybe it should be an integration test or acceptance test or something else -- then don't do that.


Just keep in mind the purpose of testing and the goals of TDD.

Tests make code permanent. You test for the things you want to remain unchanged over time. TDD works best when you know what you want that permanent behavior to be. Unit tests should only test individual 'unit' behavior, generally a transformation of input to output, or input to event, or similar.

If you are writing code that has no permanent behavior (perhaps you are building an experimental system to try out a game mechanic), then don't write automated tests for it. That doesn't make sense for TDD.

If you are writing code that has no long life (perhaps it is throwaway or one-off code), then don't write automated tests for it. That doesn't make sense for TDD.

If you are writing code that is longer than a unit, something that takes multiple calls or multiple events or works with data from disk or from the network or from a database or from a user, that is not a unit test. That is one of many other types of tests. While it may work for TDD if you are building a suite of acceptance tests, it generally doesn't work well for the Red-Green-Refactor style of TDD.

That was a bit enlightening.

But:

Tests make code permanent. You test for the things you want to remain unchanged over time. TDD works best when you know what you want that permanent behavior to be. Unit tests should only test individual 'unit' behavior, generally a transformation of input to output, or input to event, or similar.

I have read that it is quite the opposite: TDD makes change easier and safer. If you want to change some behavior, then you first change the related tests, but the entire suite should still pass... Writing this, it sounds more like acceptance tests.

Now, I think my real question is: how do these testing disciplines (TDD, acceptance tests, integration tests, CI) relate to software architecture and design? As far as I understand, when you are in a design phase, you choose most of the patterns that might implement the use cases. But on the other hand, there are people who say that testing disciplines can drive and even improve software design.

What is your experience?

I have read that it is quite the opposite: TDD makes change easier and safer. If you want to change some behavior, then you first change the related tests, but the entire suite should still pass...

Yes, but do you realize how much work writing tests actually is? Pretty soon you'll have a few hundred of them.

Also, how do you know you changed all the tests that needed to be changed? How do you know you changed them all in the right way?

You'll need meta-tests to test the tests, I think! Now we only need to solve how to make sure the meta-tests test the right thing...

For small scale changes, this may be feasible, but you're doubling the workload at least, since tests and code have to stay in sync.

Writing this, it sounds more like acceptance tests.

Acceptance tests are tests where the client agrees the result does what you agreed on, and hands over the money. The main interest of the client is typically not whether your lexical parser recognizes a tab character as white space.

As far as I understand, when you are in a design phase, you choose most of the patterns that might implement the use cases. But on the other hand, there are people who say that testing disciplines can drive and even improve software design.

These two sentences both use "design" but they don't agree what "design" actually is.

"design phase" in the first sentence refers to the process of designing, deciding how to make happen what the requirements document promised, out of 'nowhere'. What's the overall structure, what are the responsibilities, etc.

"software design" in the second sentence refers to the result of designing, its internal structure, and the exact operations laid out in your source code, all the *.xyz files.

While both things are strongly related, they are not the same thing. TDD doesn't cover how you invent software structure. It won't tell you to use an FSM for a lexical parser.

What it does do is give you concrete test cases (lots of very small ones) against the latter. Since you write them before you write the code, you are not influenced by the structure of the software, and the tests are better (it would be even better if the tests were not written by the person writing the code). In general, this improves the code. Good tests tend to cover edge cases you didn't think of while writing the code, and may even uncover bugs in the design. For example, there may be a test where it turns out you need information that the code doesn't have at that point.

The big question here is whether the effort of writing all these tests is worth the time. Opinions differ on that. A blog post that points out some weaknesses:

http://pythontesting.net/strategy/why-most-unit-testing-is-waste/

Personally, I only use unit tests for cases where I want to be really, really, really sure the code does what I think it does. This is either the core of an algorithm which must be absolutely correct, or a piece of software that is buried under a zillion layers but on whose proper functioning everything rests. Debugging the latter is such a nightmare that I want to avoid it at all costs.

In all other cases, the yield of unit tests is too low for me. (Any test that fails to find an error is wasted effort!) As a result, once the software is finished, people use it and find a few bugs that are usually simple to fix. After a while, the reported bugs are not bugs in my software, but bugs in the code of the people using it.

That does not mean my software is bug-free; it's just that the usual paths that everybody uses have been exercised enough to be bug-free. In all the other paths there are still bugs (with high likelihood), but nobody uses those paths, so they are never encountered by anyone.

Note that having tests does not mean bugs won't happen. My former colleague did like tests, so we have a big set of tests for a simulator. This year, people found a bug in the dependency calculations, which apparently is not covered by the tests.

What is your experience?

My experience is:

  • unit testing is great. It's especially good for ensuring you don't break things during refactoring.
  • not everything lends itself easily to unit testing - but probably more does than you think.
  • make as much of your codebase unit testable as is practical - but no more. Don't break otherwise perfectly good encapsulation and interfaces to make something more testable.
  • test-first approaches seem like a waste of time, except in the small subset of areas where you're writing something well-understood where the boundary conditions are clear. They also seem to me like they give a false sense of security, given the emphasis on the pass/fail state of the test rather than the correctness of the code it is testing.
  • Wherever possible I like to write a new unit test when I discover a bug, and fix the bug to make the test pass. This way I know that the time spent on the test was worthwhile, because it is covering code that was proven to be tricky!
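To illustrate that last bullet with a toy example (the helper and the tab bug are invented for this sketch): the regression test is written so it fails against the buggy version, and the fix makes it pass.

```python
import unittest


def skip_leading_whitespace(text):
    """Toy helper invented for this sketch.

    The original version was text.lstrip(" "), which missed tabs; the
    regression test below failed against it and passes after this fix.
    """
    return text.lstrip(" \t")


class TestWhitespaceRegression(unittest.TestCase):
    def test_tab_is_treated_as_whitespace(self):
        # Written when the bug was found, so it keeps covering the tricky case.
        self.assertEqual(skip_leading_whitespace("\tfoo"), "foo")


if __name__ == "__main__":
    unittest.main()
```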

Regarding how tests make things permanent:

You may write a unit test for some code. Say you give it inputs of 3, 5, and 9, and the output is "M". That becomes the permanent behavior. The test will always pass as long as 3, 5, and 9 produce "M".

As your test suite grows, you have thousands of tests for thousands of different pieces of code. If you change something and the behavior is modified, the tests fail. If your change means 3, 5, and 9 now produce "X", it will show up in the test.

Once you have a comprehensive suite of tests, every functional change you make to the existing software should change the test results. No matter what the functionality is you are modifying it should be covered by something in the tests.

Hence, tests help make code permanent.

Note also that they don't necessarily make code correct, only permanent. The correct result for 3, 5, and 9 might have been "Q", but if the test says it is "M" then that is the result being tested. Tests should be designed to help minimize defects, but sometimes bugs exist both in the main code and the test code.

TDD with an existing suite of tests means you modify the test first so the test fails, then modify the code to make the tests pass. In this example you may modify it so 3, 5, and 9 should result in "Q" and the test immediately fails. Then modify the main code to the new version, which should make the test pass. Finally you can clean up anything that needs cleanup, which should not cause any test to change results, verified by rerunning the tests and having them still pass.
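In code, that workflow might look something like this (combine and the 3, 5, 9 values are just stand-ins for the example above, not anyone's real code):

```python
import unittest


def combine(a, b, c):
    """Stand-in for the unit whose current, 'permanent' behavior is "M"."""
    return "M"


class TestCombine(unittest.TestCase):
    def test_three_five_nine(self):
        # Step 1 of the change: update the expectation first. With the code
        # above still returning "M", this test now fails (red).
        self.assertEqual(combine(3, 5, 9), "Q")


if __name__ == "__main__":
    unittest.main()
```

Changing combine to return "Q" turns the test green again, and the final cleanup is verified by rerunning the whole suite and seeing it still pass.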

This is why automated tests for experimental code, or for systems whose design is in flux, are usually a bad idea. Any time your system changes, you need to change both the test code and the main code.

However, for engine code, core code, shared code, anything where the behavior needs to stay permanent, that is where automated tests are amazing.

Whoa. Thanks a lot.

This clarifies quite some things.

So, for example, in the case I mentioned about a lexical parser:

I know a good approach is using an FSA. So I write a test that tests, say, identifiers.

Then I go and implement a minimal FSA which can recognize identifiers.

Then, a new test to test an operator.

Then implement the part of the FSA which recognizes that operator.

...
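Roughly, the test side of that loop might look like this (again with made-up names, as in the earlier sketches; each test is written first, watched to fail, and then the automaton is extended just enough to make it pass):

```python
import unittest

# "mylexer" and "next_token" are hypothetical, as in the earlier sketches.
from mylexer import next_token


class TestLexerGrowsWithTheFSA(unittest.TestCase):
    def test_identifier(self):
        # First iteration: drives the initial identifier states of the FSA.
        self.assertEqual(next_token("count"), ("IDENTIFIER", "count"))

    def test_plus_operator(self):
        # Second iteration: fails until a '+' transition is added.
        self.assertEqual(next_token("+"), ("PLUS", "+"))


if __name__ == "__main__":
    unittest.main()
```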

It's interesting how pure TDD doesn't always drive you to the correct design.

Thanks for sharing your experience and knowledge.

This topic is closed to new replies.
