Process from the trenches

10 comments, last by liquiddark 19 years, 6 months ago
The old advice is that every time a bug takes another step away from the development room, the cost of fixing it is multiplied (the exact factor ranges from 2x to 10x). BUT over the years we've had a lot of "bug fixes" which didn't really fix the bug, and a lot that fixed things that weren't actually bugs. This is where the trenches come in: now a bug MUST be recorded in our defect-tracking software, by a QA resource, before it can be worked on. It sounds insane, but it solves the problem of bug "fixes" going astray. Has anyone else seen this kind of pragmatic, last-resort process change?
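
To make that gate concrete, here's a minimal sketch of how such a rule might be enforced mechanically; the defect-ID format, the known-defect set, and the check itself are all hypothetical, not anything the poster describes:

import re

# Hypothetical rule: a change may only be committed if its message
# references a defect already recorded by QA, e.g. "DEF-1234".
DEFECT_ID = re.compile(r"\bDEF-\d+\b")

# Stand-in for a query against the real defect-tracking software.
KNOWN_DEFECTS = {"DEF-1234", "DEF-1240"}

def commit_allowed(message: str) -> bool:
    """Reject any 'fix' that isn't tied to a QA-recorded defect."""
    ids = DEFECT_ID.findall(message)
    return bool(ids) and all(i in KNOWN_DEFECTS for i in ids)

print(commit_allowed("Fix crash on load (DEF-1234)"))  # True
print(commit_allowed("Misc fixes"))                    # False: not recorded
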
No Excuses
You have no regression tests, so such draconian measures are appropriate. If you had regression tests, you wouldn't have the quality problems with bug patches, and thus wouldn't need the strict monitoring and control of bugs.
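
For what that looks like in the small, here's a minimal sketch; the function and the old boundary bug are invented purely for illustration. The point is that every fixed bug gets a test pinning the fix, so a later "fix" that re-breaks it fails loudly instead of shipping:

def clamp(value, lo, hi):
    """Clamp value into [lo, hi]. Imagine an earlier version wrongly
    returned lo when value == hi, and a patch once reintroduced that."""
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value

# Regression suite: one assertion per fixed bug, run on every build.
assert clamp(5, 0, 10) == 5    # ordinary case
assert clamp(-1, 0, 10) == 0   # below range
assert clamp(10, 0, 10) == 10  # the old boundary bug, pinned forever
print("regression suite passed")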

There's the classic school of thought, which monitors bugs during development and prioritizes them (as though it were already a maintenance project), and the "agile" method, which is to test to the hilt and stop new development whenever any single bug is detected.

For a project in a maintenance phase, bug tracking is essential. It gives you a concrete to-do list and also provides a decent metric of progress. It's not uncommon to add (minor) features during maintenance, but "bugs" that are really feature requests should be noted and appropriately (de)prioritized.
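
As a toy illustration of the to-do-list-plus-progress-metric point (the record fields and statuses here are invented):

# Each record: (id, priority, status); a trivial stand-in for a tracker.
bugs = [
    (101, 1, "open"),
    (102, 3, "fixed"),
    (103, 2, "open"),
    (104, 5, "fixed"),
]

todo = [b for b in bugs if b[2] == "open"]
done = len(bugs) - len(todo)
print("to-do:", [b[0] for b in todo])          # the concrete work list
print(f"progress: {done}/{len(bugs)} closed")  # the progress metric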


I don't think such policies are pragmatic; they're the last resort of a grossly inept system. QA needs to be integrated into the development and maintenance effort to be highly effective, not something tacked on at the end of the line. Classic process requires the QA department to be a separate unit of business for political reasons... The problem being "solved" is managers pressuring the developers not to waste time on QA, not giving them the time and tools for QA, and/or pressuring QA-specific personnel not to "hold up" the software release so they score higher on their metrics. To me the problem here is inept process and incompetent management.

You have to be on the other end of the spectrum: a small company working with high-powered developers who actually write few bugs to begin with and can fix them within a day or less the vast majority of the time. The management layers are no more than 3 tiers away from the President (such a company is unlikely to have a CEO), so everyone in management knows each other and all the developers know each other.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted. -- Tajima & Matsubara
Quote:Original post by Magmai Kai Holmlor
There's the classic school of thought, which monitors bugs during development and prioritizes them (as though it were already a maintenance project), and the "agile" method, which is to test to the hilt and stop new development whenever any single bug is detected.


Oddly enough, that's what I've done instinctively since I started programming, without anyone telling me to do so (though in moderation; I get more thorough about testing as the feature set expands and the product seems closer to completion). It actually bothers me to think that there are people out there who lack that instinct o_O

The strange thing is that I'll go for such a long time without compiling (generally considered a no-no) because I catch and fix bugs while I'm writing the code, and hop around all over the place before I'm satisfied that I have something that's worth trying to compile. But as soon as I get that thing to compile, I'll run it through all the tests I had planned.

Quote:QA needs to be integrated into the development and maintenance effort to be highly effective, not something tacked on at the end of the line. Classic process requires the QA department to be a separate unit of business for political reasons... The problem being "solved" is managers pressuring the developers not to waste time on QA, not giving them the time and tools for QA, and/or pressuring QA-specific personnel not to "hold up" the software release so they score higher on their metrics.


Well said. I've always Just Done It before; but now that I'm working on a project big enough to justify having more than one person (i.e. not just myself) working on it, I find that my own capacity to test is insufficient - but there is no way in hell I would *stop* doing my own testing in order to code more.
Quote:Original post by Magmai Kai Holmlor
QA needs to be integrated into the development and maintenance effort to be highly effective, not something tacked on at the end of the line. Classic process requires the QA department to be a separate unit of business for political reasons...

I agree with everything else you said, but this is pure chaff. QA is most properly a functional testing area, and this should never be confused with a development-level concern. QA resources should usually be experts in the use of the system at hand, except in the specific case where you are trying to determine the intuitiveness of a system. Development resources neither are nor have any reason to be functional experts in a system under development.
No Excuses
Please note that I'm NOT trying to say that QA shouldn't be part of iterative development; it definitely should. But it needs to be made clear that classic QA has a role to serve which is completely divorced from that of a team of programmers and artists. Good classic QA serves to enforce adherence to and quality of design at the user level, and it would be an extremely pathological action to try to stack this role onto the development resources' plates.
No Excuses
If it wasn't part of your original process, how can you be sure of quality?
I don't think I understand the question. What do you mean by "part of your original process"? QA is part of the cycle, they're just supposed to be an independent unit precisely because programmers don't try to emulate users, which is what this particular type of QA does best. Of course, programmers shouldn't write bugs, but then designers shouldn't write vague specs either, and it happens all the time. The job of QA is not to ensure that the program doesn't crash, although this is a valuable function. If the program crashes or has some similar major problem, however, it shouldn't make it out of the programming room. QA is required as a separate unit in order to make sure that what was programmed actually does what is required, and that "what is required" - determined by analysts, not QA or programming resources - makes sense from a user perspective.
No Excuses
I should respond to this as well, as it didn't register with me first time through:

Quote:Originally posted by Magmai Kai Holmlor
There's the classic school of thought, which monitors bugs during development and prioritizes them (as though it were already a maintenance project), and the "agile" method, which is to test to the hilt and stop new development whenever any single bug is detected.

For a project in a maintenance phase, bug tracking is essential. It gives you a concrete to-do list and also provides a decent metric of progress. It's not uncommon to add (minor) features during maintenance, but "bugs" that are really feature requests should be noted and appropriately (de)prioritized.

I've never worked in a software organization structured this way. Feature requests, in my professional life, are the things that drive the development cycle and power my paycheck. If the above describes a maintenance-phase application, then I've never seen a maintenance phase, and the app I'm currently working on is in its 5th year of life with thousands of users paying good money for it. Packaged software simply can't survive unless it continually responds to its customers' needs, and new features are the #1 way to add value. Users can and will work around problems if they have to - although of course there's a limit to their tolerance - but they want to see your product improve as time goes by, because that's the only thing that makes them feel they're going to keep getting value for years to come.

I guess that's a big difference between industrial applications and retail packages - the average game player can't, by themselves, request a new feature & expect it to be implemented, but an industrial user has enough individual financial leverage to ask for and receive their heart's desire.
No Excuses
Now this is something I can comment on! [grin]

I'm a Technical QA; I've previously been a Games Tester and a Compatibility Tester at various companies.

The classic QA role is very important. It's usually an end-phase job performed by a fairly large number of testers; by large I mean up to 20 for a normal game, although publishers vary. They provide you with constant bug reports, and sometimes gameplay reports, although again that varies publisher to publisher. They're also often quite separate from the programmers, which means there can be quite a bit of lag between submitting a bug report and it getting assigned and fixed.

Compatibility Testing is also at the tail end of development and usually only for the PC; it means that you test on a very wide range of hardware to see what breaks [smile]

Technical QA is where you sit with the programmers, take constant code updates, and do a feature checklist that's basically a cut-down version of the regression test. Currently our product's at pre-alpha, which means stability isn't quite what it'll be at beta, but we're getting towards being feature complete; each new feature is a source of more instability, so it's a daily struggle.

The purpose of the Technical QA usually goes beyond code into ensuring that whatever data build processes you use haven't generated a load of crap. You provide a buffer between Programmers, who're busy implementing features, and Artists/Designers, who are busy trying to use them. A bug might go in, but hopefully it will never impede the productivity of the users, because I'll see it first.
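
As a sketch of the kind of buffer check that role implies, here's an invented data-build sanity pass that flags obviously broken output before artists and designers ever pull it (the expected-output list is hypothetical):

import os

# Hypothetical list of files the data build must produce.
EXPECTED_OUTPUTS = ["textures.pak", "levels.pak", "audio.pak"]

def sanity_check(build_dir):
    """Return a list of problems; an empty list means the build looks usable."""
    problems = []
    for name in EXPECTED_OUTPUTS:
        path = os.path.join(build_dir, name)
        if not os.path.exists(path):
            problems.append("missing: " + name)
        elif os.path.getsize(path) == 0:
            problems.append("empty: " + name)
    return problems

issues = sanity_check("./build")
print(issues or "build looks sane, safe to hand to the team")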

Not everywhere calls us Technical QA; I think that's the name our Technical Lead chose, because I was taking over some of the crap that he'd been landed with over time [grin].

Normally there's someone like me in a semi-programming role (I do get to code too), so the first loop is always the shortest. We do maintain an active bug database; it's just good practice regardless of anything else, and I personally do chase them.

Trenches sounds like a good idea, but it has to come from above; I can't dictate those kinds of measures from down here (in the trenches) [smile], but yeah, sometimes that kind of thing is necessary.

To quote a friend who was in charge of the QA team at a publisher a few years ago, when asked why he wouldn't sign off a game that had numerous bugs and gameplay problems:
Quote:"i'm not signing it off, until it isn't shit" - anon


Andy

"Ars longa, vita brevis, occasio praeceps, experimentum periculosum, iudicium difficile"

"Life is short, [the] craft long, opportunity fleeting, experiment treacherous, judgement difficult."

At my work we document everything. We have active trackers for all sorts of QA stuff.

Regression test trackers, problem form trackers, and patch trackers. It's kind of useful to see where bugs came in and where they went out.

We also prioritise them 1 to 5, so we always know what's what.

No point in working on a 5 (tiny display issue) if there's a 1 sitting there (something really fecked up on an RC, caused by client config). That said, it helps if you have targets for fixing each priority level.
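
In code terms, that triage rule is just a sort plus per-level targets; the numbers below are invented, not the poster's actual targets:

# (id, priority) pairs: 1 = release-blocking, 5 = tiny display issue.
open_bugs = [(7, 5), (12, 1), (15, 3), (21, 1), (30, 4)]

# Hypothetical fix-by targets per priority level, in working days.
targets = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

# Always work the lowest number first: never a 5 while a 1 is open.
for bug_id, priority in sorted(open_bugs, key=lambda b: b[1]):
    print(f"bug {bug_id}: priority {priority}, fix within {targets[priority]} day(s)")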

I'm going to be writing a system to manage it all soon, so we have one resource for monitoring, adding to, working on, and signing off. Hopefully this will make for an even better system than we have now.
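
One way such a single-resource system might hang together is a record whose status walks a fixed lifecycle covering adding, working on, and signing off; the states and transitions below are invented for illustration:

# Allowed transitions for one unified tracker (hypothetical lifecycle).
LIFECYCLE = {
    "new":         {"assigned"},
    "assigned":    {"in progress"},
    "in progress": {"fixed"},
    "fixed":       {"signed off", "assigned"},  # sign off, or bounce back
    "signed off":  set(),
}

def advance(status, to):
    """Move a bug to a new status, refusing illegal jumps."""
    if to not in LIFECYCLE[status]:
        raise ValueError(f"can't go from {status!r} to {to!r}")
    return to

s = "new"
for step in ("assigned", "in progress", "fixed", "signed off"):
    s = advance(s, step)
    print(s)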

This topic is closed to new replies.
