liquiddark

Process from the trenches


The old advice is that every time a bug takes another step away from the development room, the cost of fixing it is multiplied (the exact factor cited ranges from 2x to 10x). But over the years we've had a lot of "bug fixes" that didn't really fix the bug, and a lot that fixed things that weren't actually bugs. This is where the trenches come in: now a bug MUST be recorded in our defect-tracking software, by a QA resource, before it can be worked on. It sounds insane, but it solves the problem of having bug "fixes" go astray. Has anyone else seen these kinds of pragmatic, final-measure style process changes?

You have no regression tests, so such draconian measures are appropriate. If you had regression tests, you wouldn't have the quality problems with bug patches, and thus wouldn't need the strict monitoring and control of bugs.
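To make the idea concrete: a regression test is an automated check, written when a bug is fixed, that fails if the bug ever comes back. A minimal sketch in Python - the function and the bug number it pins are invented for illustration:

import unittest

def parse_price(text):
    # Fix for hypothetical bug #412: leading whitespace used to raise ValueError.
    return float(text.strip())

class PriceRegressionTests(unittest.TestCase):
    def test_bug_412_leading_whitespace(self):
        # Pins the #412 fix; fails again if the strip() is ever removed.
        self.assertEqual(parse_price("  3.50"), 3.50)

if __name__ == "__main__":
    unittest.main()

Run the whole suite before every patch goes out, and a "fix" that breaks something old gets caught in the development room instead of at the customer.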

There's the classic school of thought, which tracks bugs during development and prioritizes them (as though the project were already in maintenance), and there's the "agile" method, which is to test to the hilt and stop new development whenever a single bug is detected.

For a project in a maintenance phase, bug tracking is essential. It gives you a concrete to-do list and also provides a decent metric of progress. It's not uncommon to add (minor) features during maintenance, but "bugs" that are really feature requests should be noted and appropriately (de)prioritized.


I don't think such policies are pragmatic; they're the last resort of a grossly inept system. QA needs to be integrated into the development and maintenance effort to be highly effective, not tacked on at the end of the line. Classic process requires the QA department to be a separate business unit for political reasons... The problem being "solved" is managers pressuring the developers not to waste time on QA, not giving them the time and tools for QA, and/or pressuring QA-specific personnel not to "hold up" the software release so they score higher on their metrics. To me the problem here is inept process and incompetent management.

You have to be on the other end of the spectrum: a small company with high-powered developers who actually write few bugs to begin with and can fix them within a day or less the vast majority of the time. The management layers are no more than three tiers away from the president (a company that size is unlikely to have a CEO), so everyone in management knows each other and all the developers know each other.

Quote:
Original post by Magmai Kai Holmlor
There's the classic school of thought, which tracks bugs during development and prioritizes them (as though the project were already in maintenance), and there's the "agile" method, which is to test to the hilt and stop new development whenever a single bug is detected.


Oddly enough, that's what I've done, instinctively (though in moderation; I get more thorough about testing as the feature set expands and the product seems closer to completion), since I started programming, without anyone telling me to do so. It actually bothers me to think that there are people out there who lack that instinct o_O

The strange thing is that I'll go for such a long time without compiling (generally considered a no-no) because I catch and fix bugs while I'm writing the code, and hop around all over the place before I'm satisfied that I have something that's worth trying to compile. But as soon as I get that thing to compile, I'll run it through all the tests I had planned.

Quote:
QA needs to be integrated into the development and maintenance effort to be highly effective, not tacked on at the end of the line. Classic process requires the QA department to be a separate business unit for political reasons... The problem being "solved" is managers pressuring the developers not to waste time on QA, not giving them the time and tools for QA, and/or pressuring QA-specific personnel not to "hold up" the software release so they score higher on their metrics.


Well said. I've always Just Done It before; but now that I'm actually working on a project big enough to justify having more than one person (i.e., more than just myself) working on it, I find that my own capacity to test is insufficient - but there is no way in hell I would *stop* doing my own testing in order to code more.

Quote:
Original post by Magmai Kai Holmlor
QA needs to be integrated into the development and maintenance effort to be highly effective, not tacked on at the end of the line. Classic process requires the QA department to be a separate business unit for political reasons...

I agree with everything else you said, but this is pure chaff. QA is most properly a functional testing area, and it should never be confused with a development-level concern. QA resources should usually be experts in the use of the system at hand, except in the specific case where you are trying to determine the intuitiveness of a system. Development resources neither are, nor have any reason to be, functional experts in a system under development.

Please note that I'm NOT trying to say that QA shouldn't be part of iterative development; it definitely should. But it needs to be made clear that classic QA has a role which is completely divorced from that of a team of programmers and artists. Good classic QA enforces adherence to, and the quality of, the design at the user level, and it would be extremely pathological to try to stack that role onto the development resources' plates.

I don't think I understand the question. What do you mean by "part of your original process"? QA is part of the cycle; they're just supposed to be an independent unit, precisely because programmers don't try to emulate users, which is what this particular type of QA does best. Of course programmers shouldn't write bugs, but then designers shouldn't write vague specs either, and it happens all the time. The job of QA is not to ensure that the program doesn't crash, although that is a valuable function; if the program crashes or has some similarly major problem, it shouldn't make it out of the programming room in the first place. QA is required as a separate unit to make sure that what was programmed actually does what is required, and that "what is required" - determined by analysts, not QA or programming resources - makes sense from a user perspective.

I should respond to this as well, as it didn't register with me the first time through:

Quote:
Original post by Magmai Kai Holmlor
There's the classic school of thought, which tracks bugs during development and prioritizes them (as though the project were already in maintenance), and there's the "agile" method, which is to test to the hilt and stop new development whenever a single bug is detected.

For a project in a maintenance phase, bug tracking is essential. It gives you a concrete to-do list and also provides a decent metric of progress. It's not uncommon to add (minor) features during maintenance, but "bugs" that are really feature requests should be noted and appropriately (de)prioritized.

I've never worked in a software organization structured this way. Feature requests, in my professional life, are the things that drive the development cycle and power my paycheck. If the above describes a maintenance-phase application, then I've never seen a maintenance phase, and the app I'm currently working on is in its fifth year of life with thousands of users paying good money for it. Packaged software simply can't survive unless it continually responds to its customers' needs, and new features are the #1 way to add value. Users can and will work around problems if they have to - although of course there's a limit to their tolerance - but they want to see your product improve as time goes by, because that's the only thing that makes them feel they're going to keep getting value for years to come.

I guess that's a big difference between industrial applications and retail packages - the average game player can't, by themselves, request a new feature and expect it to be implemented, but an industrial user has enough individual financial leverage to ask for and receive their heart's desire.

Now this is something I can comment on! [grin]

I'm a Technical QA; I've previously been a Games Tester and a Compatibility Tester at various companies.

The classic QA role is very important. It's usually an end-phase job performed by a fairly large number of testers - by large I mean up to 20 for a normal game, although publishers vary. They provide you with constant bug reports, and sometimes gameplay reports, although again that varies from publisher to publisher. They're also often quite separate from the programmers, which means there can be quite a bit of lag between a bug report being submitted and it getting assigned and fixed.

Compatibility Testing also comes at the rear end of development and is usually only for the PC; it means testing on a very wide range of hardware to see what breaks [smile]

Technical QA is where you sit with the programmers, take constant code updates, and run a feature checklist that's basically a cut-down version of the regression test. Currently our product's at pre-alpha, which means stability isn't quite what it'll be at beta, but we're getting towards feature complete. Each new feature is a source of more instability, etc.; it's a daily struggle.

The purpose of Technical QA usually goes beyond code into ensuring that whatever data build processes you use haven't generated a load of crap. You provide a buffer between the programmers, who're busy implementing features, and the artists/designers, who are busy trying to use them. A bug might go in, but hopefully it will never impede the productivity of the users, because I'll see it first.

Not everywhere calls us Technical QA; I think that's the name our Technical Lead chose because I was taking over some of the crap that he'd been landed with over time [grin].

Normally there's someone like me doing a semi-programming role - I do get to code too - so the first loop is always the shortest. We do maintain an active bug database, which is just good practice regardless of anything else, and I personally do chase them.

The trenches approach sounds like a good idea, but it has to come from above; I can't dictate those kinds of measures from down here (in the trenches) [smile], but yeah, sometimes that kind of thing is necessary.

To quote a friend who was in charge of the QA team at a publisher a few years ago, when asked why he wouldn't sign off on a game that had numerous bugs and gameplay problems:
Quote:
"i'm not signing it off, until it isn't shit" - anon


Andy

Guest Anonymous Poster
At my work we document everything. We have active trackers for all sorts of QA stuff.

Regression-test trackers, problem-form trackers, and patch trackers. It's kind of useful to see where bugs came in and where they went out.

We also prioritise them 1 to 5, so we always know what's what.

No point in working on a 5 (tiny display issue) if there's a 1 sitting there (something really fecked up on a release candidate, caused by client config). That said, it helps if you have targets for fixing each priority level.

I'm going to be writing a system to manage it all soon, so we have one resource for monitoring, adding to, working on, and signing off. Hopefully this will make for an even better system than we have now.
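A sketch of what the core of such a system might look like, in Python - the field names, statuses, and example bugs are all invented for illustration:

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in progress"
    FIXED = "fixed"
    SIGNED_OFF = "signed off"

@dataclass
class Defect:
    bug_id: int
    summary: str
    priority: int            # 1 = showstopper, 5 = tiny display issue
    status: Status = Status.OPEN

class Tracker:
    def __init__(self):
        self.defects = []

    def add(self, defect):
        self.defects.append(defect)

    def work_queue(self):
        # Open defects, priority 1 first - nobody works a 5 while a 1 is open.
        pending = [d for d in self.defects if d.status == Status.OPEN]
        return sorted(pending, key=lambda d: d.priority)

tracker = Tracker()
tracker.add(Defect(101, "RC crash caused by client config", priority=1))
tracker.add(Defect(102, "label clipped on options screen", priority=5))
print(tracker.work_queue()[0].summary)   # the priority-1 crash comes first

The point of the ordering is exactly the rule above: the queue itself enforces "no 5s while a 1 is open", so sign-off targets per priority level become easy to measure.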

Liquiddark, how does the new process prevent "bug fixes" that don't really fix the bug, and/or prevent fixing things that aren't really bugs?

...
You're right, that was mostly chaff; I'm disillusioned with the company I currently work for, and it negatively influences my views on process. I do think the developers need to be involved with the end users; even here we have sit-down meetings with the grunts who use the systems we build, because otherwise you get the "telephone" game effect.

If QA is checking high-level user requirements, why do they need to wait for a finished product from the development team? Are they checking that the software team actually built what they were told to (this is the mickey mouse* testing that our QA department does), or checking that the system engineering work (which produced the requirements given to software) meets user needs (which can and should be done before giving them to software)?


I think I've worked on projects like yours: every week or two you release the software and bump the version, and at that point all previous versions become obsolete? If there's a problem, the first recourse is to update to the latest and greatest (surf mentality). Big companies don't seem to work like that (drop-anchor mentality). New features mean contract mods, and money needs to be secured to fund the development, so they go into the next major version. If there's a problem, you fix it in the old code and release a bug-fix version (a.k.a. a patch level).


* My sister is a mechanical engineer, and she says 'mickey mouse' where we'd typically use more vulgar terms. The quality department doesn't sufficiently verify the functionality; they just do cursory demonstrations. It's almost insulting, really - as someone above mentioned, if it doesn't work at all, why is it leaving software?


AP:
That sounds pretty much exactly like our system, except that the practical consequence is that upper management continually pushes for more severe bugs to be allowed into a release. I'm not even kidding; in my experience this is what a lot of companies do, but it has been refined to almost an art form here.

Quote:
Original post by Magmai Kai Holmlor
Liquiddark, how does the new process prevent "bug fixes" that don't really fix the bug, and/or prevent fixing things that aren't really bugs?

We're very capital-A Agile in this regard - we rely on developer courage and responsibility. The problem being, of course, that this is pretty much the only agile process tool in use right now, and it's one of the worst to have in isolation.


Quote:
You're right, that was mostly chaff; I'm disillusioned with the company I currently work for, and it negatively influences my views on process.

I understand, sir. As I've tried to make clear above, our process is pretty much pathological.

Quote:
I do think the developers need to be involved with the end users; even here we have sit-down meetings with the grunts who use the systems we build, because otherwise you get the "telephone" game effect.

The more the better. In my line of work, sadly, what usually happens is that there are a bunch of "stakeholder meetings" at which the actual stakeholders - that is, the users of our product at the client corporation - aren't present.

Quote:
If QA is checking high-level user requirements, why do they need to wait for a finished product from the development team? Are they checking that the software team actually built what they were told to (this is the mickey mouse testing that our QA department does), or checking that the system engineering work (which produced the requirements given to software) meets user needs (which can and should be done before giving them to software)?

They're doing both, hopefully. This is the multi-mindset: the product should meet the specification, the specification should meet the requirements, and the requirements should make sense from a user PoV. In the trenches, that list gets prioritized, and consequently the third part almost never gets mentioned. However, a few developers who actually care about the product and/or hate having to admit that bugs are actually bugs can go a long way toward provoking discussion of that third point in the course of fulfilling the other two goals.


Quote:
(surf mentality)...(drop-anchor mentality).

You've pretty much hit the nail on the head. We play by both sets of rules, except that we're perfectly willing to mod any version of the system for some ready cash. We might even make money on the work. Maybe.

Quote:
The quality department doesn't sufficiently verify the functionality; they just do cursory demonstrations. It's almost insulting, really - as someone above mentioned, if it doesn't work at all, why is it leaving software?

Let me take this from the PoV of our QA department, since our development effort is self-contained and far enough from management to see QA's external problems better than their internal ones. Basically, QA here does exactly what you just described. We have an automated testing kit capable of exercising our entire application, but the testers aren't allowed to develop tests with it, because that is viewed as intruding on the primary goal of testing the application. Every version has thousands of regression tests that must be run, so QA has to decide what to test first, and they always start with functional "smoke tests" - forms open, user- and admin-level tasks work, data can be sent in and out. Clearly this should be automated to the largest extent possible, but it isn't, because they're not allowed to do so.
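Those smoke tests are exactly the kind of thing a machine should be running. A minimal sketch of what automating them might look like, in Python - the server URL and the individual checks are invented for illustration:

import urllib.request

# Hypothetical smoke checks: (description, URL that should return HTTP 200).
SMOKE_CHECKS = [
    ("login form opens",  "http://testserver/login"),
    ("admin panel opens", "http://testserver/admin"),
    ("data export runs",  "http://testserver/export?format=csv"),
]

def run_smoke_tests():
    failures = []
    for name, url in SMOKE_CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status != 200:
                    failures.append((name, "HTTP %d" % resp.status))
        except OSError as exc:
            # Connection refused, timeout, DNS failure, etc.
            failures.append((name, str(exc)))
    return failures

if __name__ == "__main__":
    for name, reason in run_smoke_tests():
        print("SMOKE FAIL: %s: %s" % (name, reason))

Even a crude script like this, run against every build, would free the testers to spend their limited time on the thousands of deeper regression tests instead.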

Meanwhile, our programming process suffers from some classic big-design-up-front flaws - vague specifications (beautiful oxymoron, that), even vaguer requirements, zero user feedback to the development office, etc. Our IT guy likes to say we're all mushrooms - kept in the dark and fed bullshit to keep us warm and happy.

So we continue to produce bugs, QA continues to find them, and they post anything they really, really want fixed as a high-priority bug, then leave everything else to wallow in the hell that is low-priority defect existence.

Does that sound familiar? I don't blame QA, although certainly a little more courage on their part wouldn't hurt. Management decisions are the killer, and they're also why Agile development is such a powerful paradigm: Agile cuts management's heart out and leaves them pulling rather thinner puppet strings, whereupon they can barely fuck things up at all.

That's process in the trenches, from my perspective. I work to improve our situation, and nothing sticks. Everyone here would love to see an upturn, but there are external forces at work, and internal resources haven't revolted sufficiently to make the changes that need to be made.

