Beware the inputs... of March
It seems pretty common when doing TDD that you forget some of the inputs for the CUT (class-under-test) that you are working on.
I learned this lesson:
Dependency injection works. Think about it: if your class needs some data to operate, where is it going to get it? Either it is born with it, you hand it in through some other means, or it finds it itself.
Now if the class is an atomic structure, that is, it has few external dependencies (if any), it usually can't find the data for itself... unless you've gone and been ugly and used global data or a singleton.
So dependency injection is the way to go.
H8 teh settors
Now if you're like me, you haet setters. Haet 'em. The philosophy behind hating setters is "if you need to set the data, it should have been there in the first place".
But considering the scenario I just described... what's wrong with a setter?
Class objects should have a single responsibility, yet no class is an island. Classes in isolation don't do very much.
So the thought process was this:
- If instances of this CUT are created on the fly, use constructor injection
- If this CUT is going to persist through the life of the application, and has no external dependencies, use dependency injection through the public interface (i.e., a setter)
- If this CUT aggregates or otherwise depends on other components (DOCs, depended-on components), the CUT can call one of them for its data (dependency lookup)
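The three options above can be sketched in a few lines of C++. All the class and member names here are hypothetical, made up just to illustrate the three injection styles; they're not from my actual code:

```cpp
#include <cassert>

// Hypothetical data the CUT needs.
struct Config { int bitdepth; };

// 1. Constructor injection: the object is born with its data.
class HeaderConverter {
public:
    explicit HeaderConverter(const Config& cfg) : cfg_(cfg) {}
    int bitdepth() const { return cfg_.bitdepth; }
private:
    Config cfg_;
};

// 2. Setter injection through the public interface: for long-lived objects.
class Renderer {
public:
    void setConfig(const Config& cfg) { cfg_ = cfg; }
    int bitdepth() const { return cfg_.bitdepth; }
private:
    Config cfg_{0};
};

// 3. Dependency lookup: the CUT asks an aggregated DOC for the data
//    at the moment it needs it.
class Display {                        // the depended-on component
public:
    virtual ~Display() = default;
    virtual int currentBitdepth() const = 0;
};

class Screen {
public:
    explicit Screen(const Display& d) : display_(d) {}
    int bitdepth() const { return display_.currentBitdepth(); } // lookup
private:
    const Display& display_;
};
```

In a test, option 3 is the easy one to fake: hand the CUT a stubbed `Display` and stuff whatever value you want into it.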
I don't hate setters so much now [smile]
Just do what it tells you to do
Moving along and refactoring the resultant code proved much easier... some instance variables cleared up and disappeared, and the public interfaces of the CUT and some of its collaborators simplified greatly.
This is the code speaking to you. If you trust it, the design of your overall framework will be happier for it.
Tests multiplying like little rabbit buggers
Another interesting observation is that I'm always coming up with "new things to test" in the middle of testing. An idea occurs... "Oh! What if that function gets called twice in succession? Shouldn't I test that?"... and then it is written down.
When this used to happen before TDD, panic would set in, because it seemed like a destabilizing force was trying to rock the ship. Now I just recognize it as a force of emergent design, where neither the requirements nor the architecture is fully known at test time. You have to go on discovering things... possibly way into the future... but the existing network of tests and your greatly decoupled code make it trivial to just write one more test at any time.
Look me up
I glossed over it earlier in this... but let me tell you that "Dependency Lookup" was a watershed discovery yesterday.
Obviously when doing TDD, you have to try to write tests in such a way that you come up with the expected behavior when you're done.
I got stumped on a single test on a class that would convert a Bitmap Header into a header that matched the current screen resolution.
I knew an impending hardware call was coming... because how else would you discover the current screen resolution? So how was I going to "test this in" so that the code could get the required bitdepth?
Step back and let the data handle it
First of all... I decided to write a specific case...
TEST ( _HeaderConverter_Retrieves32AsScreenBitdepth )
... and then it just sort of fell into place when I realized that the object that KNEW what the screen depth was... in this case... was the stub for DirectDraw.
Exclamation points. So all I had to do was stuff the data into the stubbed DirectDraw object, and then there would be tension between the test and the hardware.
Initial test: failed. Expected 32, but was 0.
- change TEST code so that the stubbed direct draw object held "32" in its DDPIXELFORMAT member
- change production code to call IDirectDraw::GetDisplayFormat()
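Here's roughly what those two steps look like. I'm using simplified stand-ins here, not the real DirectDraw SDK types or my actual stub; the interface and member names are assumptions for the sake of the sketch:

```cpp
#include <cassert>

// Simplified stand-in for the DirectDraw pixel-format struct.
struct DDPIXELFORMAT { unsigned dwRGBBitCount; };

// Stand-in for the DirectDraw interface the production code talks to.
class IDisplayDriver {
public:
    virtual ~IDisplayDriver() = default;
    virtual DDPIXELFORMAT GetDisplayFormat() const = 0;
};

// Test step: the stub holds whatever format the test stuffs into it.
class StubDirectDraw : public IDisplayDriver {
public:
    DDPIXELFORMAT format{0};
    DDPIXELFORMAT GetDisplayFormat() const override { return format; }
};

// Production step: the header converter *looks up* the screen bitdepth
// from its depended-on component instead of being handed it directly.
class HeaderConverter {
public:
    explicit HeaderConverter(const IDisplayDriver& driver) : driver_(driver) {}
    unsigned screenBitdepth() const {
        return driver_.GetDisplayFormat().dwRGBBitCount;
    }
private:
    const IDisplayDriver& driver_;
};
```

The test stuffs 32 into the stub, asks the converter for the bitdepth, and the only way the converter can answer correctly is by making the call.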
A hard-coded "32" wouldn't have been enough to make the test pass in this case, because I was using a "test-specific subclass" (a subclass of the class you are testing), which makes it trivial to peek at the class variables and see where the value really came from.
I could have written about 30 different tests passing bitdepths from 0 to a million in there, but one way or another, the class would have had to make a call to the hardware, which is what we wanted.
Key understandings: go from specific to general
So the key understandings I've pulled from the last few days' experience are:
- To write tests:
- Outline a bunch of specific cases and what you expect the results to be
- Then write a test for each
You'll find that you go from a specific result (like returning "32" or something) to a much more general approach... shaped by the inputs you either send to the CUT, or by the inputs it gets from somewhere else.
As an overall "mindset" when starting a TDD session... I'm trying to think this thought:
"What resource do we have that could provide the inputs
we are looking for? If we have it, could we mock/fake it?"
Hah... what a ramble. A lot of words that say basically two things. Hopefully this "newb to TDD" road is useful for other clueless ones in the end.