What is your definition of done?


Hi all,

Just out of curiosity, do you have a written-down definition of done or acceptance criteria in your team/company? And if so, how is it formulated and what is in there?

I'm talking about the criteria by which you or your team deem a feature or requirement to be "ready", excluding any bugs that might be reported much later from somewhere.


That's a vague question, but I can tell you how I've done it at various jobs in the past. Features are listed in a tool like Jira, and there is enough documentation and/or description of the problem that the developer can work on it, or find out who to talk to about it if it doesn't make sense. Depending on how many developers are available, they either choose the next group of features to go into the next version, or a project manager decides. Either way, once the bug is fixed, or the feature is added, the job is marked as "done." It isn't actually "closed" until QA has tested the feature, and even then it can just be reopened if there is a problem.

At some point there are enough new features in the software that a new release is created. This contains all the features up to that point; anything not in it goes into a future release. This release is tested, and only bugs are fixed. Eventually, once it has been tested and seems stable, it's released. During this time, development on the main branch continues.

The software is never really "done"; it is always evolving. At some point the customers may decide to quit paying for new features, or the group decides to quit working on it because any future work isn't going to add much value, but it is never finished. Like a novel, you just have to quit working on it.

I think, therefore I am. I think? - "George Carlin"
My Website: Indie Game Programming

My Twitter: https://twitter.com/indieprogram

My Book: http://amzn.com/1305076532

That's a vague question, but I can tell you how I've done it at various jobs in the past.

...

Either way, once the bug is fixed, or the feature is added, the job is marked as "done." It isn't actually "closed" until QA has tested the feature, and even then it can just be reopened if there is a problem.

Let me try to clarify a bit: I'm trying to get a feeling for how other teams handle this. When they say "we're done with this feature", what prompts them to decide that it is done? In your example you say you're "done" when you hand something off to QA. My guess is that QA has some requirements for accepting a feature to test; for example, they want the developer to describe a test case before handing it off, or there are automatic unit tests that need to pass, etc. For instance, if your code doesn't build, QA probably won't be able to test it, so they hand it back because you are not "done".

And these rules are exactly what I'm trying to get at. I'm wondering if other places have a checklist for when something is OK to hand off to QA (or to the end user, if QA is done by the devs).
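To make it concrete, the kind of checklist I'm imagining could be as simple as a little script that gates the hand-off. This is just a made-up sketch; the build and test commands are placeholders and would obviously depend on the project:

# Hypothetical pre-QA "definition of done" gate; the make targets and the
# test-case file path below are placeholders, not from any real project.
import subprocess
import sys

CHECKS = [
    ("code builds",                     ["make", "build"]),
    ("unit tests pass",                 ["make", "test"]),
    ("feature has a written test case", ["test", "-s", "docs/testcase.md"]),
]

def ready_for_qa() -> bool:
    # Run each check in order; the first failure means the work is not "done".
    for name, cmd in CHECKS:
        print(f"checking: {name} ...")
        if subprocess.run(cmd).returncode != 0:
            print(f"not done: '{name}' failed, don't hand this to QA yet")
            return False
    print("all checks passed, the ticket can move to the QA column")
    return True

if __name__ == "__main__":
    sys.exit(0 if ready_for_qa() else 1)

Even a minimal list like that would answer the "is it done?" question more consistently than gut feeling.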

I'm pondering this because I was talking with a friend who is in QA, and they usually have pretty clear rules for when a feature is "done" in the sense that it passes QA or not. But he was also venting a bit about developers who deliver a "fix" without having run the program even once, or without telling QA what to look at, and then expect them to hand out a "QA passed" label for it.

I guess it's done when QA stops bitching about it ;)

Just half joking. In the places where I've worked, there hasn't been a strict definition of "done", in part, I think, because this is very hard to define from a technical point of view.

And as a developer, you know, and have to accept, that you never know whether something is 100% bug-free or not. It can be observed to have passed all tests and code reviews, but that doesn't really mean it is guaranteed to be OK.

It's "done" when whoever you deliver to has accepted the feature as working. (knowing he still could come back any time and complain it now isn't working)

Sometimes it is as simple as the game designer or project manager saying, "Okay, people, it works. Now we do...," and he documents it according to the protocol of the game development company, often in version control or software control documentation. This is how it is done at one of the companies for which I work.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer

I'm both dev and QA. It's done when it's right, or as right as you can reasonably get it.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php


I'm pondering this because I was talking with a friend who is in QA, and they usually have pretty clear rules for when a feature is "done" in the sense that it passes QA or not. But he was also venting a bit about developers who deliver a "fix" without having run the program even once, or without telling QA what to look at, and then expect them to hand out a "QA passed" label for it.

All these things are going to be different wherever you work. When I worked on software used by customers, QA would run the software and test the feature by using the app and trying to break it, or by running through the steps that broke it to see if it was fixed. Tasks that could only be verified in a unit test or something like that were marked in a special way, because QA can't test those things.

As for programmers "fixing" the bug and then firing it off to the testers, I understand. A lot of programmers have worked long and hard to perfect their craft, and this can lead to arrogance. They can feel that they get paid too much and are too important to test the software. That job is for the semi-skilled workers. Sometimes this is correct. If a feature needs hours of testing all the edge cases, then this is what QA is for.

But not running the code to test the fix before checking it in is just lazy. I'd be complaining too...

But back to your question: you really need to trust that your developers and testers know what they're doing. If they say it's fixed, then they think it is. Of course there will be other problems, and closed tasks will be reopened. That happens every day.

I think, therefore I am. I think? - "George Carlin"
My Website: Indie Game Programming

My Twitter: https://twitter.com/indieprogram

My Book: http://amzn.com/1305076532

Thanks for all the responses, it is interesting to hear different points of view. I guess it is not as easy as I thought to formulate a definition of "done".


I'm both dev and QA. It's done when it's right, or as right as you can reasonably get it.

the "it's right" and "reasonably" part is exactly what I would like to explore more here.


But back to your question: you really need to trust that your developers and testers know what they're doing. If they say it's fixed, then they think it is. Of course there will be other problems, and closed tasks will be reopened. That happens every day.

It is not about trust, or about somebody not doing their job properly, but about the fact that whether something is fixed or not can depend on who is looking at it. I just think that it might help us work together if everyone involved shares a common view of when a piece of work is done. My experience is that there are sometimes gaps between developers, QA, and product managers about when a feature is done, and this can lead to frustration on every side.

I see a lot of discussions like this, which I think could be prevented by a definition of when something is done:

PM: Is the login screen done?

Dev: Yes

PM: Cool, then we deploy it next week

Dev: Oh no, it's not back from QA, and the final layout and images are still missing

PM: So it's not done?

Dev: No, it's done. I closed the ticket and handed it over to QA

QA: We're not testing without the layout and images, because otherwise we're testing twice

Dev: So open a new ticket for the layout

PM: But the layout is part of the login screen, so why wasn't this added before QA?

....

So in this case a definition of "done" as "Functionally complete, with all images and layout" would have prevented the whole discussion.
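Written out as code, that rule might look something like this rough sketch (the ticket fields are just invented for the example):

# Toy model of the login-screen example above; field names are invented.
from dataclasses import dataclass

@dataclass
class Ticket:
    functionally_complete: bool
    final_layout_applied: bool
    final_images_applied: bool
    qa_passed: bool

def is_done(ticket: Ticket) -> bool:
    # A ticket only counts as "done" when every agreed criterion holds.
    return all([
        ticket.functionally_complete,
        ticket.final_layout_applied,
        ticket.final_images_applied,
        ticket.qa_passed,
    ])

login_screen = Ticket(functionally_complete=True,
                      final_layout_applied=False,
                      final_images_applied=False,
                      qa_passed=False)
print(is_done(login_screen))  # False -> the PM should not schedule the deploy yet

The point is not the code itself, but that everyone agrees on the same list of criteria before the ticket is closed.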

These kinds of issues are sorted out by leadership, help, or both. As mentioned earlier, individual company policy and culture control this. What is considered finished in one context is often a principle that does not work in practice in another. Other factors are the stage of development, investor involvement (which tends to emphasize accountability and appearances), business model, development software used, skill level of the team members, and so forth. There really is a lot of variance.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer

I can't add much to the "it depends on lots of things", but I just wanted to point something out.


My guess is that QA has some requirements for accepting a feature to test; for example, they want the developer to describe a test case before handing it off, or there are automatic unit tests that need to pass, etc. For instance, if your code doesn't build, QA probably won't be able to test it, so they hand it back because you are not "done".

First, never let the developer who wrote a feature write the test case for it. The test case must be written by someone else before the developer starts to work. If the developer tells someone what to test, then the tester is useless. The developer is biased and will never think of testing certain things (say, putting 10,000 characters into a "Username" input field). It's not because he's a bad programmer or stupid, but because the developer is trying to make something work and all his effort goes into that.
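As a made-up illustration, this is the kind of edge case an independent tester tends to write and the original developer tends to skip. validate_username here is just a toy stand-in, not code from any real project:

# Toy example: edge-case tests a separate tester might write for a
# username field; validate_username is a stand-in, not a real API.
import unittest

def validate_username(name: str) -> bool:
    # Toy rule: 3-20 characters, no leading or trailing whitespace.
    return 3 <= len(name) <= 20 and name == name.strip()

class UsernameEdgeCases(unittest.TestCase):
    def test_rejects_absurdly_long_input(self):
        self.assertFalse(validate_username("a" * 10000))

    def test_rejects_whitespace_only(self):
        self.assertFalse(validate_username("     "))

    def test_accepts_normal_name(self):
        self.assertTrue(validate_username("Orymus3"))

if __name__ == "__main__":
    unittest.main()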

Second, the other part, about code that doesn't build, is too basic; it's an unwritten rule. As a developer you are required to test what you've written, at least to see if it works. If the code doesn't build, QA will probably never know, because that code should never be sent to them.

This topic is closed to new replies.
