
What is your definition of done?


13 replies to this topic

#1 doeme   Members   -  Reputation: 706


Posted 14 April 2014 - 02:05 AM

Hi all,

 

Just out of curiosity, do you have a written-down definition of done or acceptance criteria in your team/company? And if so, how is it formulated and what is in it?

I'm talking about the criteria by which you or your team deem a feature or requirement to be "ready", excluding any bugs that might be reported much later from somewhere.

 

 




#2 Glass_Knife   Moderators   -  Reputation: 4426


Posted 14 April 2014 - 10:22 AM

That's a vague question, but I can tell you how I've done it at various jobs in the past. Features are listed in a tool like Jira, and there is enough documentation and/or description of the problem that the developer can work on it or find out who to talk to if it doesn't make sense. Depending on how many developers are available, either they choose the next group of features to go into the next version, or a project manager decides. Either way, once the bug is fixed, or the feature is added, the job is marked as "done." It isn't actually "closed" until QA has tested the feature, and even then it can just be reopened if there is a problem.
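A minimal sketch of that ticket lifecycle, with hypothetical status names rather than Jira's actual defaults:

```python
from enum import Enum

class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in progress"
    DONE = "done"      # the developer considers the work finished
    CLOSED = "closed"  # QA has verified the work

# Allowed transitions; note that even CLOSED can go back to OPEN.
TRANSITIONS = {
    Status.OPEN: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.DONE},
    Status.DONE: {Status.CLOSED, Status.OPEN},  # QA passes it, or bounces it back
    Status.CLOSED: {Status.OPEN},               # reopened if a problem surfaces later
}

def move(current: Status, target: Status) -> Status:
    """Advance a ticket, rejecting transitions the workflow doesn't allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target
```

The key point is that "done" and "closed" are distinct states, and neither is terminal.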

 

At some point there are enough new features in the software that a new release is created. This contains all the features up to that point; anything not in it will be in a future release. The release is tested, and only bugs are fixed. Eventually, once it has been tested and seems stable, it's released. But during this time, development on the main branch continues.

 

The software is never really "done"; it is always evolving. At some point the customers may decide to quit paying for new features, or the group decides to quit working on it because any future work isn't going to add much value, but it is never finished. Like a novel, you just have to quit working on it.


Edited by Glass_Knife, 14 April 2014 - 10:23 AM.

I think, therefore I am. I think? - "George Carlin"
Indie Game Programming

#3 doeme   Members   -  Reputation: 706


Posted 16 April 2014 - 02:46 AM

That's a vague question, but I can tell you how I've done it at various jobs in the past.

...

Either way, once the bug is fixed, or the feature is added, the job is marked as "done." It isn't actually "closed" until QA has tested the feature, and even then it can just be reopened if there is a problem.
 

 

I'll try to clarify a bit: I'm trying to get a feeling for how other teams handle it when they say "We're done with this feature." What prompts them to declare that something is done? In your example you say you're "done" when you hand something off to QA. My guess is that QA has some requirements for accepting a feature to test; for example, they want the developer to describe a test case before handing it off, or there are automated unit tests that need to pass, etc. For instance, if your code doesn't build, QA probably won't be able to test it, so they hand it back because you are not "done".

 

And these rules are exactly what I'm trying to get at. I'm wondering whether other places have a checklist for when work is OK to hand on to QA (or to the end user, if QA is done by the devs), along the lines of the sketch below.
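As a purely hypothetical example, such a checklist could even be mechanized as a small gate script that must pass before a ticket moves to QA (the make targets and ticket fields here are invented):

```python
import subprocess

def ready_for_qa(ticket: dict) -> bool:
    """Hypothetical pre-handoff gate: build passes, tests pass, and the
    developer has written notes telling QA what to look at."""
    builds = subprocess.run(["make", "build"]).returncode == 0
    tests_pass = subprocess.run(["make", "test"]).returncode == 0
    has_test_notes = bool(ticket.get("test_notes", "").strip())
    return builds and tests_pass and has_test_notes
```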

 

I'm pondering this because I was talking with a friend who is in QA, and they usually have pretty clear rules for when a feature is "done" in the sense that it passes QA or not. But he also vented a bit about developers who deliver a "fix" without actually having run the program once, or without telling QA what to look at, and who then expect a "QA-Passed" label for it.



#4 Olof Hedman   Crossbones+   -  Reputation: 2824


Posted 16 April 2014 - 03:22 AM

I guess it's done when QA stops bitching about it ;)

 

Just half joking: in the places where I've worked, there hasn't been a strict definition of "done", in part, I think, because it is very hard to define from a technical point of view.

 

And as a developer, you know, and have to accept, that you never know whether something is 100% bug-free. It can be observed to have passed all tests and code reviews, but that doesn't really mean it is guaranteed to be OK.

 

It's "done" when whoever you deliver to has accepted the feature as working. (knowing he still could come back any time and complain it now isn't working)



#5 3Ddreamer   Crossbones+   -  Reputation: 3156


Posted 16 April 2014 - 05:07 PM

Sometimes it is as simple as the game designer or project manager saying, "Okay, people, it works. Now we do...," and documenting it according to the game development company's protocol, often in version control or software-control documentation. This is how it is done at one of the companies I work for.


Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

 

by Clinton, 3Ddreamer


#6 Norman Barrows   Crossbones+   -  Reputation: 2125


Posted 16 April 2014 - 07:19 PM

I'm both dev and QA. It's done when it's right, or as right as you can reasonably get it.


Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 


#7 Glass_Knife   Moderators   -  Reputation: 4426


Posted 17 April 2014 - 06:23 AM


I'm pondering this because I was talking with a friend who is in QA, and they usually have pretty clear rules for when a feature is "done" in the sense that it passes QA or not. But he also vented a bit about developers who deliver a "fix" without actually having run the program once, or without telling QA what to look at, and who then expect a "QA-Passed" label for it.

 

All these things are going to be different anywhere you work. When I worked on software used by customers, QA would run the software and test the feature by using the app, trying to break it, or running through the steps that broke it to see if it was fixed. Tasks that could only be verified by a unit test or something like that were marked in a special way, because QA can't test those things.

 

As for programmers "fixing" the bug and then firing it off to the testers, I understand. A lot of programmers have worked long and hard to perfect their craft, and this can lead to arrogance. They can feel that they are paid too much and are too important to test the software; that job is for the semi-skilled workers. Sometimes this is correct: if a feature needs hours of testing across all the edge cases, that is what QA is for.

 

But not running the code to test the fix before checking it in is just lazy.  I'd be complaining too...

 

But back to your question: you really need to trust that your developers and testers know what they're doing. If they say it's fixed, then they think it is. Of course there will be other problems and closed tasks will be reopened. That happens every day.


I think, therefore I am. I think? - "George Carlin"
Indie Game Programming

#8 doeme   Members   -  Reputation: 706


Posted 17 April 2014 - 07:35 AM

Thanks for all the responses; it is interesting to hear different points of view. I guess it is not as easy as I thought to formulate a definition of "done".

 


I'm both dev and QA. It's done when it's right, or as right as you can reasonably get it.

 

the "it's right" and  "reasonably" part is exactly what I would like to explore more here.

 


But back to your question: you really need to trust that your developers and testers know what they're doing. If they say it's fixed, then they think it is. Of course there will be other problems and closed tasks will be reopened. That happens every day.

 

It is not about trust or about somebody not doing their job properly; it's that whether something counts as fixed may differ depending on who you ask. I just think it would help collaboration if everyone concerned shared a common view of when a piece of work is done. My experience is that there are sometimes gaps between developers, QA, and product managers over when a feature is done, and this can lead to frustration on every side.

 

I see a lot of discussions like this, which I think could be prevented by a definition of when something is done:

PM: Is the login screen done?

Dev: Yes

PM: Cool, then we deploy it next week

Dev: Oh no, it's not back from QA, and the final layout and images are still missing

PM: So it's not done?

Dev: No, it's done. I closed the ticket and handed it over to QA

QA: We're not testing without the layout and images, because otherwise we're testing twice

Dev: So open a new ticket for the layout

PM: But the layout is part of the login screen, so why wasn't this added before QA?

....

So in this case a definition of "done" as "Functionally complete, with all images and layout" would have prevented the whole discussion.
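Encoded as a checklist that a tool could enforce (a minimal sketch; the field names are hypothetical):

```python
# Every item must be true before anyone may call the ticket "done".
DEFINITION_OF_DONE = [
    "functionally complete",
    "final layout applied",
    "final images integrated",
    "passed QA",
]

def is_done(ticket: dict) -> bool:
    return all(ticket.get(item, False) for item in DEFINITION_OF_DONE)

# The login screen from the dialogue above would fail the check:
login_screen = {"functionally complete": True, "passed QA": False}
assert not is_done(login_screen)
```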



#9 3Ddreamer   Crossbones+   -  Reputation: 3156


Posted 17 April 2014 - 08:11 AM

These kinds of issues are sorted out by leadership, help, or both. As mentioned earlier, the individual company's policy and culture control this. What is considered finished in one category is often a principle that does not work in practice in another. Other factors are the stage of development, investor involvement (a lot of it tends to emphasize accountability and appearances), business model, development software used, skill level of the team members, and so forth. There really is much variance.


Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

 

by Clinton, 3Ddreamer


#10 DiegoSLTS   Members   -  Reputation: 1373


Posted 17 April 2014 - 10:07 AM

I can't add more to the "it depends on lots of things", but just wanted to point something out.

 


My guess is that QA has some requirements for accepting a feature to test; for example, they want the developer to describe a test case before handing it off, or there are automated unit tests that need to pass, etc. For instance, if your code doesn't build, QA probably won't be able to test it, so they hand it back because you are not "done".

First, never let the developer who wrote a feature write the test case for it. The test case must be written by someone else before the developer starts to work. If the developer tells someone what to test, then the tester is useless. The developer is biased and will never think of testing some things (say, putting a 10000-letter text inside a "Username" input field). It's not because he's a bad programmer or stupid; the developer is trying to make something work, and all his effort goes into that.
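For instance, a tester is far more likely than the feature's author to write an edge-case check like this (validate_username is a hypothetical stand-in):

```python
import unittest

def validate_username(name: str) -> bool:
    # Hypothetical validator under test; the point is the edge case below.
    return 1 <= len(name) <= 32 and name.isalnum()

class UsernameEdgeCases(unittest.TestCase):
    def test_rejects_absurdly_long_input(self):
        self.assertFalse(validate_username("a" * 10000))

    def test_rejects_empty_input(self):
        self.assertFalse(validate_username(""))

if __name__ == "__main__":
    unittest.main()
```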

 

Second, the part about code that doesn't build is too basic; it's an unwritten rule. As a developer you are required to test what you've written, at least to see that it works. If the code doesn't build, QA will probably never know, because that code will never be sent to them.



#11 Glass_Knife   Moderators   -  Reputation: 4426


Posted 17 April 2014 - 10:08 AM


So in this case a definition of "done" as "Functionally complete, with all images and layout" would have prevented the whole discussion.

 

These kinds of things are communication issues. If the project tasks are completed but the feature isn't really "done", then there may be some room for improvement in the task descriptions and in requirements gathering and documentation. For something like this, depending on the size of the project, it may help to have a developer who has the final say on tasks. They know everything about the software, so they know what every feature should do. Call it "Scrum Master" or "Point of Contact" or "bellybutton", but they are responsible for closing the task when it is finished.

 

I worked for one boss long ago who was obsessed with the word "done." No one could ever figure out what his definition was, but he wielded it like Thor's hammer. If software that had been deployed for six months broke and had to be fixed, he'd scream, "I thought you said it was done! Why are you still working on it?"

 

This process of gathering requirements and creating software out of nothing, while managing so many different people, personality types, and areas of expertise, takes time to evolve. Make sure you have AARs (after-action reviews) to evaluate your process. If something wasn't actually done, but everyone thought it was, figure out why. Was there something you could have done to see the problem? Is there something that could be done differently next time?


I think, therefore I am. I think? - "George Carlin"
Indie Game Programming

#12 Glass_Knife   Moderators   -  Reputation: 4426


Posted 17 April 2014 - 10:13 AM


First, never let the developer who wrote a feature write the test case for it. The test case must be written by someone else before the developer starts to work. If the developer tells someone what to test, then the tester is useless. The developer is biased and will never think of testing some things (say, putting a 10000-letter text inside a "Username" input field). It's not because he's a bad programmer or stupid; the developer is trying to make something work, and all his effort goes into that.

 

This is not possible if you have a really small team. I worked for years at a place with many projects and only 5 programmers; each person was completely responsible for multiple projects. Most of the developers didn't have tests for anything. They just hacked the code and prayed it would work.

 

Also, test-driven development is based on doing exactly the opposite of what you're describing.
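For illustration, a minimal TDD loop with a hypothetical slugify function: the test is written first (and fails), then just enough code is written to make it pass:

```python
import unittest

# Step 1 (red): the test exists before the implementation does.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("definition of done"), "definition-of-done")

# Step 2 (green): just enough code to make the test pass.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

if __name__ == "__main__":
    unittest.main()
```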


I think, therefore I am. I think? - "George Carlin"
Indie Game Programming

#13 DiegoSLTS   Members   -  Reputation: 1373


Posted 17 April 2014 - 11:53 AM

 


First, never let the developer who wrote a feature write the test case for it. The test case must be written by someone else before the developer starts to work. If the developer tells someone what to test, then the tester is useless. The developer is biased and will never think of testing some things (say, putting a 10000-letter text inside a "Username" input field). It's not because he's a bad programmer or stupid; the developer is trying to make something work, and all his effort goes into that.

 

This is not possible if you have a really small team. I worked for years at a place with many projects and only 5 programmers; each person was completely responsible for multiple projects. Most of the developers didn't have tests for anything. They just hacked the code and prayed it would work.

 

Also, test-driven development is based on doing exactly the opposite of what you're describing.

 

You're right, I forgot about TDD. But even in TDD you can miss some conditions: you code what's needed for your tests to pass, but you might miss some tests. And even then, you can't test everything with TDD; you can't do integration tests that cover the whole flow of a user registering, receiving an e-mail, confirming with a link, and logging in for the first time. The developer can do it, but there should be someone else testing. I know it may not be possible in small teams, but the OP was asking whether a developer writes a test case for QA, implying that there are people available. Maybe I shouldn't have written "never", but I thought it was clear what I was trying to say.
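A sketch of that kind of end-to-end flow as a test; client and mailbox here are hypothetical fixtures (a test HTTP client and a captured e-mail outbox), not any particular framework's API:

```python
def test_registration_flow(client, mailbox):
    # 1. The user registers.
    resp = client.post("/register", data={"user": "bob", "password": "s3cret"})
    assert resp.status_code == 200

    # 2. A confirmation e-mail goes out, containing a link.
    link = mailbox.last_message_to("bob").extract_link()

    # 3. Following the link confirms the account.
    assert client.get(link).status_code == 200

    # 4. The first login succeeds.
    resp = client.post("/login", data={"user": "bob", "password": "s3cret"})
    assert resp.status_code == 200
```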



#14 Orymus3   Crossbones+   -  Reputation: 8988


Posted 30 April 2014 - 10:09 AM

Depending on the task type, there's a different "definition of done".

I've also found myself redefining that definition in different project contexts (often tech-dependent).

 

A feature is generally done only when all necessary art is integrated (if that's relevant) and it has been QAed. Basically, it's done when the client/PO can see it, or when the user could use it.

But we also have intermediary steps of done, which we have a custom (visible) workflow for. It basically tells us who owns a specific task and what's left.

The plus of that approach is that the PM won't bug the programmer asking whether it's done when, clearly, all the programming has been implemented successfully.





