Unit Testing ftw?

Started by
35 comments, last by Ronnie Mado Solbakken 10 years, 6 months ago

The thing with unit testing is that if you use it, you almost have to do TDD as well; otherwise it's sort of hypocritical. And to do unit testing/TDD correctly you'll have, at a minimum, about 4 tests per unit.

I don't get that -- unit testing was around for decades before the TDD methodology appeared. What's hypocritical about writing tests after you write the code?

No, unit testing doesn't imply TDD. Although TDD does imply unit testing.

As for the number of tests, 4 seems pretty arbitrary? In my engine, the internal assertion macro is hooked up to the testing framework, so I just write one test per 'unit' that calls all methods / demonstrates each usage, and asserts the results are as expected. If these test assertions or the unit's internal assertions fail, the test fails.

The number of tests, generally, should depend more on the number of branch conditions you wish to test than on some random number picked out of a hat. Code coverage is the main thing you want to ensure with your unit tests.

Like in TDD, I hook my tests up to the compile button, so a test failure is reported the same as a compilation failure (with file/line numbers), but I usually write them after the code, or concurrently, instead of before like in TDD.

That's fine by me. I've been doing unit testing for quite a long time now (the last decade or more). I've tried TDD as well, and for some projects it has worked out great. It DOESN'T tend to work out so great, though, when you're dealing with a pre-existing code base.

This means you are bouncing back and forth between your test project and your production project about every 30 seconds to a minute. For some tests you might have to stop and think about what you want to write.

There's no reason you can't write your tests in the same source file as the unit if you feel like it ;P
As above, IMHO a large part of your tests should be taken care of by assertions that are internal to the unit anyway, and not part of the test.

Error checking and similar behaviors SHOULD be part of your code, not part of the unit tests. But unit tests should be extractable from the code, mainly for use with integration testing suites and with the rest of your documentation.

The point of TDD is that it makes you stop and think about the use case (what to write for the unit) before writing anything, but often this is true anyway. E.g. You're writing some code that requires a hash-map lookup, but you don't have a hash-map class (for some odd reason) - at that point, you've already got a production use-case for what the hash-map class is required to do, much like what your TDD code would give you.

Exactly.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.


"QA hours are cheap."

Tell that to the people I used to contract for.

Depends on what is being tested, etc...

I usually had more programming expertise than most of the people doing the programming (I also used to fix code produced by others, and dealt with things they forgot/ignored/never knew to do).

Sloppy companies don't have the programmers test their own code first and foremost (a practical problem, too, because many programmers have never learned how to do it properly). Even things like code review get skipped.

I've also worked for companies that WANTED automated testing, but for much of the product's development the code kept changing faster than the automation team could keep up (and the product programmers and their managers were often uncooperative about putting features into the code itself to facilitate testing).

Often it was so bad that I could find vast quantities of bugs faster than whole teams of programmers could fix them (assuming they weren't already busy writing MORE buggy code/features).

Ratings are Opinion, not Fact

Unit tests would prevent so much grief where I work. I occasionally get asked to investigate why some of my code isn't working right. After the initial panic of "oh shit, what edge case did I overlook", I investigate and find out that someone else has tried to tweak something and ended up breaking the ENTIRE system. I partially want to blame myself for creating such complexity in the first place, but that's not something you can avoid in game development. Code is going to get horrifyingly complex whether you want it to or not, and nobody else is going to be able to understand what the code does in its entirety.

The first reason we don't have unit tests is that they aren't prioritized. We're barely given enough time to implement what the product managers want in the first place. The higher-ups never consider the fact that we're going to get overwhelmed by interruptions to "go figure out why X doesn't work anymore" as time passes and people accidentally break existing code.

The second reason we don't have unit tests is because, since we've never written them, we're inexperienced when it comes to architecting our code in a test-friendly manner. So, instead of seeing code and saying to ourselves "I could write tests for that", we look at the code and say "There is no freaking way to write a meaningful unit test for this." And somehow, this is the accepted reality of development where I work. It makes me so incredibly frustrated knowing that we could be saving ourselves future trouble if we weren't so inexperienced with unit testing.

I guess, at the end of the day, it's all about how sloppy the individual programmer (or the company he/she works for) is. Different tests for different situations, but the important thing is that you don't let code slip by untested. Sounds about right?

- Awl you're base are belong me! -

- I don't know, I'm just a noob -

I guess, at the end of the day, it's all about how sloppy the individual programmer (or the company that he/she is working for) is. Different tests for different situations, but the important thing is that you don't let code slip by untested? Sounds about right.

Lack of testing is often just one problem. Getting the managers to prioritize a bug (or even accept that it is a bug) so that it actually gets fixed is another fight. More than a few times, a user-interface bug I found, reported, and demonstrated was subsequently ignored (sometimes I was told "the customer won't do that"), and quite soon after the customers started using the product they were reporting the problem.

Long ago a QA manager told our QA group that there is a psychological impairment to QA's effectiveness (first and foremost is funding/prioritization, and then cooperation) -- there is a big negative aspect because the result of our work is to tell the programmers (and often the designers) that their code/design is crap. Many times, I've had to show them their code was buggy -- actually demonstrate it before their eyes -- before they HAD to believe that THEIR code had a problem.

Keep in mind that QA and automated unit tests are very different things.

Automated tests can tell you when a function that used to work is suddenly giving different results. They can tell you when a component is giving unexpected values that will propagate elsewhere in the system. Automated acceptance tests can do some fairly elaborate stress tests if they are well written, and they can help you tune and adjust things during development.

QA can tell you that a key feature is missing, but unit testing will not. QA can tell you that a feature is confusing, where unit testing just verifies the status quo. QA can tell you that features are unnecessary or redundant. QA can run creative negative tests that you will never normally consider writing unit tests for. QA can give feedback about how a human uses and abuses the system in ways automated tests never will.

We've all heard stories about the crazy things users do. There are anecdotes about less technically savvy people who do things like take screen captures, paste them into Word to crop and scale them, and then email the whole Word document, because that's the only process they understand for sending a screen capture. Programmers rarely think to write automated tests for things like that, because it isn't what the user is supposed to do. A good QA team will understand that users follow extremely strange paths through software, and find amazingly creative ways to break the system.

Sure, they'll find very weird bugs that you won't fix. I've seen some "press down on the game console and listen as the disc grinds to a halt, repeat until read errors occur", and once even saw "pause the game and alt+tab out, run the uninstall program skipping the warnings, then alt+tab back to the game; the game will crash." But better for QA to test the crazy and blatantly invalid scenarios than for your customers to discover obscure flaws when the system is in use. There are so many unexpected real-life situations that no amount of automated unit tests will ever discover.

Automated tests are great where they make sense. They can augment testing and make development safer (with a tradeoff of additional initial development costs). But automated tests will never be able to replace a high-quality QA team.

In addition, some of the most iconic features found in games were born from unintended bugs. It would be a shame if an automated system had just fixed the bug that later became the Creeper in Minecraft, without considering its potential use in that game.


This topic is closed to new replies.
