Hi, my name is Done and I’m conflicted

If there is ever one word that highlights the difference between a tester and a developer’s mindset it has to be the word done.

Developers¹ tend to think of done as a form of sign-off. A story is done when the code is complete. For many developers, it’s when the code is complete, tested and deployed. Done is a sign of completion, of moving on to something else.

Testers² look at Done as the beginning of a quest, an exploration. Done is to be prodded, probed, explored and dissected. Under what conditions can ‘done’ fail? What data tests the limits of Done? How about if two Dones are put together, how will they behave?

With these two different mindsets it’s no surprise that at times there is conflict and disagreement.

Most testers (most people for that matter) shy away from conflict in order to maintain team harmony. Instead, they try to gain agreement upfront on Done. The goal is clarity and scope definition. Make done clear upfront (before coding begins) and it lessens the conflict later.

In many companies, conflict is seen as a ‘bad thing’. Being in conflict suggests that you’re not a team player. But conflict is not always bad. According to The Five Dysfunctions of a Team by Patrick Lencioni, conflict is a healthy indicator of trust and ought to be encouraged.

And testers must not forget that our primary role in a team is to raise uncertainty about Done. This has to come ahead of wanting to be everyone’s friend and being accepted. The reality is, if we are doing our job we will be testing the limits of preconceptions of Done.

Because as appealing as it is to think of Done in terms of black and white, zero and one, the reality is a lot different. Done is subtle and muted with hidden dimensions and unknown crevices. Exploring and shining a light³ on these dark corners generates new information. That information helps mould and develop our understanding of Done. The more we test, the better this becomes. Whether this new information turns out to be accepted or rejected is to some extent irrelevant, both contribute to fleshing out our understanding.

So sure, have your discussions on ‘Done’ before code is written, but let’s realise that it’s only the beginning, not something final written in concrete. Allow testing and testers to expand that understanding.

In fact, I would go as far as to say that true harmony is having a mutual goal of building our understanding of what Done means. Who knows? Maybe only through that mutual goal can real trust and respect be built within a team.

¹ ² generalisations

³ kudos James Bach

7 Comments

  1. Hello Anne-Marie,

    The agile tag of your post triggered a quick response from me on Twitter. In this comment I will try to elaborate on why I think that one of the concepts I believe you used in your post has only limited validity in an agile context. The concept that I mean to address is the ‘controversy between developers and testers’. As I understood it from your post, this controversy leads to developers having a ‘definition of done’ that considers done as code complete, (unit) tested and deployed, which in reality is only a starting point for the testers to investigate the software. This then leads to controversy and potential avoidance of conflict.

    I am aware that, especially in non-agile environments and in starting agile environments, this duality and controversy over what done is does exist.

    One of the key concepts in agile is however that the team, that is programmers/developers, testers, business analysts, architects, support staff, etc., shares the responsibility to deliver software qualitatively fit for production. This particular concept brings teams to define a definition of done that incorporates programmers’, testers’, business and other stakeholders’ criteria. I have observed and have been told on several occasions by members of the agile community that this is not only theory. As such it eliminates the ‘controversy’ more and more, up to a point where teams do not perceive the duality to be problematic or, at least, are unaware that it exists.

    This does not mean that programmers and testers now have the same idea on what their part of done means or entails. It means that the discussion has moved from viewpoints that oppose each other to a mutually supportive search of finding solutions on how both can meet their criteria in the definition of done. And to do so within the same span of time (sprint) for each of the backlog items they are working on.

    Regards,

    Jean-Paul


    1. Hi Jean-Paul,

      Thanks for your response. The reason why I put the agile tag is because I’m referring predominantly to the agile context.

      Let me clarify:
      1) The ‘tested’ I refer to is broader than unit testing; it’s testing as an activity
      2) Implicit in the above is that done is a shared responsibility (I should have probably made that explicit)

      I’m not sure how having a shared definition of done removes controversy, though. Have you never been in a situation where there is disagreement about what is/is not a bug? Pared back, is that not a conversation about what done means?

      Think of a painter and a critic. One creates the masterpiece, the other observes and comments on it. The painter might describe the painting in terms of the subject (think of Andy Warhol and ‘Marilyn’) whilst the critic may look at multiple dimensions, not excluding how the painting is felt by the audience and the use of colour in the painting. It’s not that the painter is wrong in their definition, but it’s likely to be more limited than the critic’s view. The critic has the luxury of time and perspective to offer other viewpoints. The role of the critic is different to that of the painter, but they are both discussing the painting.

      Similarly, tester and developer have different concepts of done. They approach done differently. It’s not that testers wait for a developer to say it’s done and then start exploring it, but the mindset behind the word is different.

      Does that help a little more?

      The point I’m trying to make is not when we decide something is done (though that is important too) but more the mindset behind done. Do you view done as a concrete, fixed set of definitions, or do you see it as something that’s iterative, evolving and open to change?

      In my view, this mindset is not so common. Think about BDD, ATDD and the concept of the 3 amigos. While the goal is communication, it seems to suggest that communication and deciding on what is Done only happen up front. The idea that we can change our mind about Done through testing is less common – at least that’s been my experience. My argument is that upfront conversations are only the start of understanding what is done. The team needs to be open about re-evaluating done as information trickles in from testing.


      1. Hi Anne-Marie,
        let me start by saying that I agree with the point you’re making that testers bring a different mindset to the table that more often than not can lead to conflict. To me that’s the point of bringing a tester to the table.
        I’d like to branch out a bit though and distinguish between agile and more traditional approaches. What you are suggesting
        “Do you view done as a concrete, fixed set of definitions, or do you see it as something that’s iterative, evolving and open to change?”
        is anathema to any traditional (read: waterfall) Project Manager and is likely to get you burned at the stake. There’s conflict right there, because the idea of testing being open-ended does (understandably) not fit with a Project Manager’s worldview. So the clash is not with the creator but with the one overseeing the work being done. I have a high regard for PMs; I’m acknowledging their difficulties here, not generalising.

        [Anne-Marie]: It’s not harder to facilitate change in what’s done, but in waterfall it’s harder to incorporate that feedback quickly. In my experience what ends up happening is these bugs are acknowledged and placed into the next release. So yes, the consequences of being more flexible on done are harder to incorporate quickly, making conversations on the topic perhaps more tense?

        I believe that the more experienced the developers, the more they anticipate a tester’s questions and are open to critique – fallible but true in many cases.

        [Anne-Marie] – Yeah that’s my experience too.

        When working in an agile context I agree with Jean-Paul – the upfront agreement of the definition of done takes a lot of the conflict out.

        [Anne-Marie] – it takes some of the conflict out I agree.

        Where I disagree with Jean-Paul: the definition of done is usually so high level that it often comes down to the tester deciding when they are done. We get into the “when to stop testing” territory here, which is rarely agreed upfront and to me throws the whole definition of done into question. Interestingly enough, the approach seems to mostly work even though it’s fallible if you dig a bit deeper; I see it as a pragmatist’s approach.

        The crux to me is here:
        “Exploring and shining a light³ on these dark corners generates new information. That information helps mould and develop our understanding of Done. The more we test, the better this becomes. Whether this new information turns out to be accepted or rejected is to some extent irrelevant, both contribute to fleshing out our understanding.”
        There is always a cost involved with information. To me the question is “At what point does the gathering of information become too costly for us to continue?” Gaining a better understanding is likely to get paid for without problem at the beginning of a project; it’s a different matter when the shipping date looms.
        Don’t get me wrong, I wholeheartedly agree that only with enough understanding can we make informed decisions and that done shifts with growing understanding. At the moment this truth only lives in some testers’ heads but needs to move into the whole team, especially PMs and/or decision-makers.

        [Anne-Marie] – As time goes on, I’ve become less concerned about hitting release and stuff going live. To me it’s simply a line that gets crossed; I don’t alter my testing around it. But perhaps that’s just me.


  2. Why the mixed use of “Done” and “done”?

    [Anne-Marie] – it’s a sloppy and misapplied appropriation of Agile and agile. I guess I wasn’t done when I hit the release button – doh!

    Is this a rant against the practice of defining “Done” upfront via acceptance criteria or a philosophical point that software, like Zeno’s race between Achilles and the tortoise, is never “done”?

    [Anne-Marie] – I’m flattered! A rant! Usually I’m told I’m too quiet for that type of stuff :), and yes, it’s more a philosophical introspection than anything else. I’m not against acceptance criteria per se, but it’s the absoluteness that irks me to rant…

    You said “testers must not forget that our primary role in a team is to raise uncertainty about Done”, but did you actually mean the opposite? i.e. A tester’s primary role is to raise certainty about being done (often by replacing incorrect preconceptions of done with better ones)?

    [Anne-Marie] – I like your twisted logic Andrew… and if I had thought of that I would have probably added it, because it emphasises the wonderfully paradoxical nature of what we are trying to achieve.


    1. Just to be clear, I didn’t use the word rant in a pejorative sense. IMHO the absoluteness of acceptance criteria is something definitely worth ranting against :)


  3. Hi Anne-Marie,

    I can only agree that any given concept has multiple perspectives, and there are multiple truths in almost anything. Even the concept ‘1 + 1 = 2’ is actually not always true. The same applies to “Done”.
    In essence, developers and testers are coming from opposite approaches:

    Developers look at the problem of what a system needs to do, largely by way of functional requirements; testers look more broadly at whether the system solves a problem, by using test and use cases, which is more embracing than a simple, causal system behaviour. E.g. I need to reduce my lighting energy bills, not reduce the number of lights.

    Developers test and make sure the system behaves as spec’d; testers make sure that it does not do less than what it is meant to do and, more importantly, that it does not do what it is not meant to do. E.g. I need a field to accept values from 1 to 10, not 0 or less, nor 11 or more.
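
    As a purely illustrative sketch of that kind of boundary check (the validate_rating function and its 1–10 range are hypothetical, not from the comment above), a tester might probe the edges of the range like this:

    ```python
    # Minimal sketch, assuming a hypothetical field that accepts whole numbers 1-10.
    def validate_rating(value: int) -> bool:
        """Accept whole numbers from 1 to 10 inclusive, reject everything else."""
        return 1 <= value <= 10

    # Boundary values a tester might probe: the edges of the valid range and the
    # first invalid value on either side of each edge.
    cases = {0: False, 1: True, 10: True, 11: False}

    for value, expected in cases.items():
        assert validate_rating(value) == expected, f"unexpected result for {value}"
    print("all boundary checks passed")
    ```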

    Testers should collaborate more with developers, and testing should be led by the BA who did the *business* (not functional) requirements in the first place.

