About this blog
The life and death struggle of a pantomime horse and pantomime Princess Margaret
Entries in this blog
How many of you are juggling right now? Not tossing balls around, but juggling school and work. Oy, this stuff is tough. Gevalt, even.
Next rhetorical question: how stupid is it to take 19 credit hours with a full-time job?
On the plus side, I'm a full CS major now; going through CS3500 (software practice). It's funny to see all the poor java-lings struggling with C++ [smile]
All right... you want TDD? You got TDD. I've written a semi "tutorial"... (more of an "example") of a TDD session.
Parts I & II of my new TDD Example are now online. Sample code, galore!
Please tell me if you find any errors, or parts that don't make sense to you. There will be at least two more parts to the example, when I can work them in between work and school.
Thanks a million!
Soon I'll finish a RANT about new school games begun on a different developer's journal, as well...
Well, as promised, I recorded my last TDD session on tape. Unfortunately, NOT as promised... it's undeliverable as-is... the audio was too fuzzy (analog VCR tape... shoulda known).
I was hopin to have a nice y00t00b link or something today... oh well [smile]
If I weren't getting ready to go back to The U of U CS School, I'd have this done already.
What I will do instead is watch the tapes, put everything down on paper... and then post it HERE, for you. In written form.
Some Big, Fat Policies
We're done with the "Tile Loading" iteration, and ready for some integration testing now. Just to give you a TASTE of the pain that is "policy-based" design... here is the exact rough-up of the typedefs we'll have to use to get our newly TDD'ed objects off the ground:
typedef MS::dx::ddraw::Tile tile_t;
typedef bawd::graphics::_TileSet tileset_t;
typedef MS::dx::ddraw::TileImplCreator tile_impl_creator_t;
typedef MS::win32::graphics::bitmap::GetSourceBits get_source_bits_t;
typedef MS::win32::graphics::bitmap::ParseHeader parse_header_t;
typedef MS::dx::ddraw::bitmap::ConvertHeaderToScreen convert_header_t;
typedef MS::win32::graphics::utility::ConvertPixelToX1R5G5B5 convert_pixel_16_t;
typedef bawd::graphics::utility::colors::Bit24Alpha bit_24_alpha_t;
typedef bawd::graphics::utility::colors::OpaqueAlpha opaque_alpha_t;
typedef MS::win32::graphics::bitmap::GetPalette get_palette_t;
typedef MS::win32::system::api::callers::LoadLibrary_ load_library_t; // not tested
typedef MS::win32::system::api::callers::FreeLibrary_ free_library_t; // not tested
typedef MS::win32::system::api::callers::memcpy_ memcpy_t; // not tested
typedef MS::win32::system::api::callers::FindResource_ find_resource_t; // not tested
typedef MS::win32::system::api::callers::LoadResource_ load_resource_t; // not tested
typedef MS::win32::system::api::callers::LockResource_ lock_resource_t; // not tested
typedef MS::win32::system::api::callers::DeleteObject_ delete_object_t; // not tested
typedef MS::win32::system::policies::_OpenDLL open_dll_t;
typedef MS::win32::system::policies::_CloseDLL close_dll_t;
typedef MS::dx::ddraw::_DataBlit data_blit_t;
typedef MS::win32::graphics::bitmap::_CreateEmptyBitmap create_bitmap_t;
typedef MS::win32::system::memory::_GetAvailableMemory<int> get_available_memory_t;
typedef MS::win32::graphics::utility::_ConvertPixelToA8R8G8B8 convert_pixel_24_t;
typedef MS::win32::graphics::utility::_ConvertPixelToA8R8G8B8 convert_pixel_32_t;
typedef bawd::graphics::bitmap::_BitmapBuilder bitmap_builder_t;
typedef MS::win32::system::_ResourceOpener resource_opener_t;
typedef bawd::graphics::bitmap::_Parse bitmap_parser_t;
typedef bawd::graphics::bitmap::_ExtractData data_extractor_t;
typedef MS::win32::graphics::bitmap::transform::_Flip bitmap_flip_t;
typedef bawd::graphics::bitmap::_PixelConverter pixel_converter_t;
typedef bawd::graphics::bitmap::transform::_BitdepthToScreen bitdepth_to_screen_t;
typedef bawd::graphics::bitmap::_ConvertLoadedBitmap<void *> convert_loaded_bitmap_t;
typedef MS::win32::system::resource::_GetBitmapResource<unsigned int> get_bitmap_resource_t;
typedef MS::dx::ddraw::_TileExtractor<unsigned int, void *> tile_extractor_t;
typedef bawd::graphics::_TileLoader tile_loader_t;
Validated After a Battle
Isn't it extremely weird how sometimes you can battle mightily for something... expecting the worst... and, after all you can do, be proven right in the end?
It was like that today when trying to "instantiate" the main monstrosity shown above (the "tile_loader_t" type).
Loki en la cabeza
I've been using Andrei Alexandrescu's "Loki Library" for some time now. Even found a bug that they ended up fixing once.
Well... today... it seemed like Loki's "ClassLevelLockable" wasn't about to let me get away with instantiating a "tile_loader_t" without bitching and popping up one of them ugly dialog boxes (ASSERT).
After tracing through... and noticing that I had had plenty of success using very similar code for Loki in the past... I felt like the problem had to be inside Loki.
Divide and Divide
So yeah... divide and conquer... got down to the smallest pieces I could (policy classes) and found that the assert was happening when instantiating a Loki::Functor template that used "Loki::ClassLevelLockable" as a policy.
But it didn't make sense. Then one of my dumb mistakes made me realize that I wasn't really testing it right.
I was actually putting test code at the end of a source file as if it were global data... I even tried using an "if()" statement outside of a function block... and the compiler complained.
That woke me up to what Loki was actually doing. Maybe the assert inside Loki would happen if and only if you used a Functor in the global namespace.
Proof positive
So I mocked up a trivial example that tested that theory... which can be found here... and tried it out.
Yep. Vindicated. The assert happened only in the global namespace outside of a function definition... in "global" variables.
So I happily went back and fixed my test code so that it ran inside of a small, trivial test... and was able to get a "tile_loader_t" instantiated (woo hoo!) without any asserts.
The real treat was in realizing that none of my code had to change one IOTA. I did spend a helluva lot of time tracking down Loki's dumb bug, which I submitted to them... but my TDD'ed code kicked ass all the way.
On to integration!...
(psst... this means I've now found TWO bugs in LOKI all by myself. I must be hella awesome [wink])
Class reunion was fantastic. I may marry that one chick...
Well, the spies have it.
You'll see in the responses to the last Verg-o-nomics entry that I'd screwed up the implementation of the spies, and that's what caused all the pain.
Stubbing your foot on the inheritance tree
It stands to reason that you should use 100% configurable spies/stubs so that they return 100% predictable results for the indirect inputs to your SUT ... you can even set the stubs to fail on purpose and exercise a different facet of your SUT.
If, on the other hand, you're just lazy (like I was) and directly subclass the real components or "DOCs" to create your spy/stub classes, you may have to roll real test data, or your tests may crash!
Tests have been much smoother so far... though I've mostly been refactoring: removing key components from classes that were delegating to too many delegatees, and consolidating those components by overall responsibility into larger delegatees.
You can take a peek at the actual refactoring on the Death Star whiteboard... it's the PixelConverter template being extracted from the PixelDepthToScreen template... which will be renamed 'BitdepthToScreen'.
Overall, TDD has been a calming influence and steadying force throughout... because refactoring is painless, and I know right away if I've broken something when the tests squeal.
It shouldn't be too long before this is done.
I'm going to experiment with putting up a step-by-step TDD session (warts and all) soon... as soon as I can figure out how to record the session (S-Video out to VCR?) and then translate it.
Who knew "Tile Loading" would be the major component, in contrast to "Tile Rendering", or even "Tile Map Scrolling"?
Sometimes things just get ugly.
I'm TDDing a larger (delegating) class, and it got fugly.
First of all... I tried to set up too complicated a fixture, I think, and it took way too long.
The reason why? I think I was trying to use "pseudo-real" data to drive the tests. But that doesn't seem to work for "delegator" types, because the inputs and outputs for delegators should actually be simpler... I *think*.
I believe I ended up creating integration tests instead of unit tests, because in the end the data I was feeding in was meant to elicit proper outputs from the delegatees instead of just the SUT (the delegator).
Who's driving this bus?
I'm confused, though, as to what to think. TDD is supposed to drive the development of your "system under test". In this case, I'm talking about a delegating class that either aggregates, or has as policies previously TDD'ed classes/components.
What does this mean about using "specific examples" to drive development, in this case? In other words... what type of "specific example" would be needed to TDD a delegator type?
Do I need to just fall back and test that the "depended-on components" are called, through stubs/spies/mocks?
? ? ?
I realize the end result *should be* an algorithm in code form, which utilizes the delegatee classes. There's just a disconnect in my brain somewhere about what type of "specific example" would drive this development.
And I leave for Philadelphia tonight at 10PM.
This stuff is starting to sink into the old coconut...
Beware the inputs... of March
It seems pretty common when doing TDD that you forget some of the inputs for the CUT (class-under-test) that you are working on.
I learned this lesson:
Dependency injection works. Think about it. If your class needs some data to operate, where is it going to get the data? Either it is born with it, you give it to it through other means, or it finds it itself.
Now if the class is an atomic structure; that is, it has few external dependencies (if any), it can't usually find the data for itself... unless you've gone and been ugly and used global data or a singleton.
So dependency injection is the way to go.
H8 teh settors
Now if you're like me, you haet setters. Haet 'em. The philosophy behind hating setters is "if you need to set the data, it should have been there in the first place".
But considering the scenario I just described... what's wrong with a setter?
Class objects should have a single responsibility, yet no class is an island. Classes in isolation don't do very much.
So the thought process was this:
If instances of this CUT are created on the fly, use constructor injection
If this CUT is going to persist through the life of the application, and has no external dependencies, use dependency injection through the public interface (i.e., a setter)
If this CUT depends on other aggregated or otherwise depended on components (DOCs), it's possible for the CUT to call one of them for its data (dependency lookup)
I don't hate setters so much now [smile]
Just do what it tells you to do
Moving along and refactoring the resultant code proved much easier... some instance variables cleared up and disappeared, and the public interface for the CUT and some of its collaborators simplified greatly.
This is the code speaking to you. If you trust it, the design of your overall framework will be happier for it.
Tests multiplying like little rabbit buggers
Another interesting observation is that I'm always coming up with "new things to test" in the middle of testing. An idea occurs... "Oh! What if that function gets called twice in succession? Shouldn't I test that?"... and then it is written down.
When this used to happen before TDD... panic would set in, because it seemed like a destabilizing force was trying to rock the ship. Now I just recognize it as a force of emergent design... where the requirements are not fully known, nor is the architecture at test time. You have to go on discovering things... possibly way into the future... but the existing network of tests, and your greatly decoupled code make it trivial to just write one more test at any time.
Look me up
I glossed over it earlier in this... but let me tell you that "Dependency Lookup" was a watershed discovery yesterday.
Obviously when doing TDD, you have to try to write tests in such a way that you come up with the expected behavior when you're done.
I got stumped on a single test on a class that would convert a Bitmap Header into a header that matched the current screen resolution.
I knew an impending hardware call was coming... because how else would you discover the current screen resolution? So how was I going to "test this in" so that the code could get the required bitdepth?
Step back and let the data handle it
First of all.. I decided to write a specific case...
TEST ( _HeaderConverter_Retrieves32AsScreenBitdepth )
... and then it just sort of fell into place when I realized that the object that KNEW what the screen depth was... in this case... was the stub for DirectDraw.
Exclamation points. So all I had to do was stuff the data into the stubbed DirectDraw object, and then there would be tension between the test and the hardware.
Initial test: failed. Expected 32, but was 0.
Change TEST code so that the stubbed DirectDraw object held "32" in its DDPIXELFORMAT member.
Change production code to call IDirectDraw::GetDisplayFormat().
A hard-coded "32" wouldn't have worked in this case to make the test pass, because I was using a "test-specific subclass", which is a subclass of the class you are testing... which makes it trivial to peek at the class variables and such.
I could have written about 30 different tests passing bitdepths from 0 to a million in there, but in one way or another, the class would have had to make a call to the hardware, which is what we wanted.
Key understandings: Go from Specific To General
So the key understandings I've pulled from the last few days' experience are:
To write tests:
Outline a bunch of specific cases and what you expect the results to be
Then write a test for each
You'll find that you go from a specific result (like returning "32" or something) to a much more general approach... shaped by the inputs you either send to the CUT, or by the inputs it gets from somewhere else.
As an overall "mindset" when starting a TDD session... I'm trying to think this thought:
"What resource do we have that could provide the inputs
we are looking for? If we have it, could we mock/fake it?"
Hah... what a ramble. A lot of words that say basically two things. Hopefully this "newb to TDD" road is useful for other clueless ones in the end.
About this TDD business...
I suppose to most of you this concept would seem obvious; but I'll try and lay it out simply.
When you start working on something using Test-Driven Development, you're trying to capture the behavior of an entire system, or subsystem at a time... at least that's where you start.
You focus hard on the responsibilities... or "what you want the code to do". So, you pick a responsibility, or "responsibility set", and start off. These are often called "user stories".
First things last
I'd been working this way, at a high level, and noticed that several of the requirements were themselves breaking down into more "atomic" steps. So I'd write out the sub-requirements... but afterward, I'd go back and keep trying to implement the now-"larger" responsibility set FIRST.
The difficulty I'd been running into is that to work with these "larger concepts" (which usually map directly to a "class"), you have to "stub out" the dependencies (or "smaller classes") so that you can keep working. What I've been calling "stubs" for these "smaller classes" or responsibilities are properly called "Test Doubles".
What I realized tonight, after working on a much narrower responsibility set with no dependencies, is that the smaller, more atomic stuff, is much easier to test! D'oh!
Here's the problem in a nutshell:
To write a "Test Double" (or Stub/Spy/Mock/Fake), you have to know something about what you are stubbing... or what your SUT (system under test, or "tested class") will need. In other words, you have to break away and do a little design on THAT... at least the new interface.
It takes extra time to create these Test Doubles
Often as you're working along, and you finally get to start testing the smaller depended-on classes, or DOCs, "into existence"... you'll find that you didn't capture enough of the class's responsibilities when you test-doubled it for the larger class... so you have to retro-fit your larger class for the new functionality.
Pain, the greatest teacher
The last one in the list is usually the most painful; but it's not that bad, ironically, because you will have tests in place ... so that if you "screw something up" trying to make your larger class work with the new requirement, you can just make the tests pass again... and you're good to go.
So maybe... I don't know... I need to be shot in the face for my refusal to see this clearly from the start... but now this concept is out there, and maybe it can help you avoid shooting yourself in the face.
I've updated my whiteboard... for comparison with the old one... if anything ... it's interesting to me [smile] ... but I do think it's keeping me on-track.
Well... here we go. The beginning of the end =)
THE END IS THE BEGINNING IS THE END
I decided to join up with gamedev.net because it's been an incredible resource (read "free") for quite some time, and felt it time to contribute.
This tale will mostly chronicle the development efforts of our side-scrolling 2D platformer, "Ascent".
Tee Dee Dee
I'm really starting to dig into TDD, or "Test-Driven Development", which emphasizes creating atomic tests before writing the production code they're supposed to test.
I'm having the usual "learning pains" as I go; however, I think I'm starting to nail it a bit.
WHERE IS THE DEATH STAR?
My current trouble is trying to keep track of where I'm at, and where I need to go. The solution to that was a bit hard to figure out.
Last night, it "came to me in a vision", of sorts... a vision of the "Death Star". I call it the "Death Star" because it feels like I'm building something that large atom by atom... and it is hard to see "what's done" and what isn't.
I call it "Mapping Out The Death Star". It's really just a whiteboard that shows where the development/testing cycle is at any one moment. I'm wondering if there are any commercial tools that make this sort of tracking easier.
The whiteboard was created in MS Word, then translated to HTML, because VS2005's HTML editor is a joke.
At any rate, hopefully I can start ripping through new tests and not chugging to a stop because I don't know where I'm at.
Maybe this whiteboard idea will be useful to you. Tell me what you think.
Some additional observations about TDD.