The fine art of being precise

Jon Bach wrote a post this morning about how we need to be precise in our thinking. Thank you Jon, it’s a lovely, honest piece with lots of wisdom. But it got me thinking about how sometimes precision can let us down too.

For instance, we can get fooled into thinking that being precise always matters. There are many situations where vagary (what a wonderful word!) is incredibly useful.

When my husband asks me how my day was, I don’t reply with “What do you mean by day?”. Instead I typically respond with ‘fine’ or something equally inane. What’s important here is not the precision of the question or even the precision of the answer. My husband’s not that interested in my day at all; it’s his way of asking “are you ok?”. My answer, though perhaps a little short, is important too, though it’s not really the answer that matters, it’s the tone of my answer that he’s listening out for.

You see this vagary in software teams that work closely together.  Over time, these teams have developed their own language and don’t feel the need to question every definition. Team members pick up cues from body language and follow unwritten rules without much thought. I see this ability to follow such rules without question as a way of building trust. Often teams that work together for a while just ‘know’. They’ve built up a certain amount of tacit knowledge which doesn’t need to be openly discussed.

Unfortunately many of us have, at one point in time, worked in situations where this culture (for want of a better word) is not so healthy. I worked in one such company where open questioning was implicitly discouraged to the point where a developer worked on the wrong story for a whole iteration. I’ve seen many a tester battered and torn from attempting to pull down those unwritten walls of silence and ambiguity.

But what’s important here is that we recognise that in certain situations it’s appropriate for us to be loose in our language. In fact, I often hold off from being precise, especially if I’m new to a team or client. Instead, I sit and listen, waiting for ambiguity to bubble up and emerge. This intentional act of silence allows me to witness, rather than be told, where implicit assumptions may fester.

So while being precise is an important testing skill, another important one is the ability to identify when and where precision is most required, and when and where we can allow ourselves to be a little more accommodating.


Are you serious?

Seriously, how serious are you about testing? Let’s presume you study the craft of testing. Does that make you a serious tester?

If you answer yes to this question, congratulations. You are on your way to becoming a serious tester. Now ask yourself this question:

“Do you take yourself seriously?”

If you want to be taken seriously about testing, you need to take yourself seriously. Note the difference here. Taking yourself seriously is much larger than being a serious tester.

Taking yourself seriously means you avoid behaviour and thinking such as:

“I will put myself down in front of others to make them feel better about themselves”
“I resort to behaving childishly when placed in pressure situations”
“I hand over power in order to avoid conflict”
“I feel like a fake even though my actions demonstrate otherwise”

I know this because it’s only recently that I realised I haven’t been taking myself so seriously.

When you stop speaking to yourself in such a way and start taking yourself seriously, a wonderful thing happens. You start to believe in yourself. In fact, you have to. You owe it to yourself to do so.

I’ve discovered a new strength in this self-belief. It means I have the courage and strength to stand up for what I want. In doing so, I give myself respect and honour for all the hard work I have put in.

So, let me ask the question again: Do you take yourself seriously?

Expression epitomised in Rage comics following a Fox News interview of David Silverman by Bill O’Reilly

Yu Han on what Software Testing means to me

Last year I had the honour of teaching postgraduate students the subject of software testing at the University of Technology Sydney. I asked students if they would like to write a post on testing for my blog. Yu Han did, and here it is.

The subject of Enterprise Software Testing demonstrated the exciting and interesting aspects of testing. The lectures and the project at Suncorp were unforgettable experiences.

During the classes, I felt that, most of the time, I was acting like a child, playing different kinds of games and drawing pictures with colourful pens. But at the end there would always be serious discussions, analyses and reporting that brought me back down to earth.

Those reflections made me realise that my actions and thoughts were more complicated than I had appreciated, which encouraged me to keep reviewing what I did and exploring how I thought. By doing that, I better understood the purpose behind those activities, and I am beginning to be able to sense the essence of what good testing could be.

I am aware that my mind prefers to use memory or imagination to fill the gap between reality and the information I have received. By linking their similarities, I can better understand the information. But as soon as I reach this stage, my thinking can stop going deeper. I may feel satisfied with the simple explanation, or be distracted by other information that draws my attention to something else. Then I may lose the chance to find the deeper meaning of the information, or even misunderstand it.

Now I am interested in paying more attention to my flow of thinking. Questioning ideas as they appear could be a way to slow it down, which may clarify my understanding or identify potential obstacles hiding behind them. It could also make it possible to bring back those ideas or considerations that were once raised, then omitted or transformed at the incredible speed of thinking. Those ideas and considerations can prove questionable as soon as I try to challenge them.

It can turn out that there are missing facts behind an imagined explanation that feels reasonable but is not actually practical. Once I try to gather evidence to support those ideas, I may realise they are incorrect or unproven. I may need to confirm them before I go further, especially when the goal is based on such ideas. Take the assumptions made in the Big Trak exercise: our group needed to test our ideas of what the other buttons could do before we could finally test how the targeted button worked. We questioned our ideas but found it hard to move forward until we knew which ones were correct. We came up with many hypotheses, but failed to prove them all at once in practice. After we had built up some understanding, we found that we should confirm those assumptions before continuing, which led us to come up with a debugging strategy for our tests. It appears that questioning information not only encourages us to seek the truth but also inspires us to develop our ideas. In this sense, critical thinking can help one to seek information wisely and to reorganise that information into knowledge for developing ideas.
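
That debugging strategy, confirming each assumption with a small experiment before building further tests on it, can be sketched in a few lines of Python. This is a minimal illustration only: the simulated device and the button behaviours below are invented for the example, not the real Big Trak.

```python
# A minimal sketch of "confirm assumptions before building on them".
# The simulated device and button behaviours are invented for the example.

def device_press(button):
    """Stand-in for the real system under test: pressing a button
    produces an observable behaviour."""
    behaviours = {"A": "moves forward", "B": "turns left", "C": "fires"}
    return behaviours.get(button, "does nothing")

# Each hypothesis pairs a claim with an experiment that can confirm
# or refute it.
hypotheses = [
    ("button A moves the robot forward", lambda: device_press("A") == "moves forward"),
    ("button B turns the robot left", lambda: device_press("B") == "turns left"),
    ("button C makes the robot fire", lambda: device_press("C") == "fires"),
]

for claim, experiment in hypotheses:
    if experiment():
        print(f"confirmed: {claim}")
    else:
        # Don't build further tests on an unconfirmed assumption:
        # stop, revise the model, and only then continue.
        print(f"refuted: {claim}")
        break
```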

This also reminds me of the concept of Exploratory Testing. In the previous example, we learned the mechanism, built the tests and proved the ideas all by interacting with and exploring the actual system. Exploratory Testing seems a natural way to build and run tests while we still need to learn about the system. By understanding how it works, we can work out how to do the testing and find bugs. However, it is difficult to tell how much time I need to finish the task.

Thanks to the experience at Suncorp, I also saw the efficacy of Scripted Testing. Our team performed the testing itself within just a few hours. We didn’t need to worry about anything else, such as knowing the other parts of the system or even the depth of the targeted section. It did take a couple of weeks for us to learn the background information and to write the strategy and test scripts, but we would save most of that time if we were now to test another section. It makes me feel that with good development and a clear testing goal, the job can easily be done by writing and following test scripts. But it is also true that I find it difficult to understand a system by reading documents, having meetings and even watching demonstrations. It seems easier for me to concentrate on and memorise such information when I can apply it, which means learning the system by actually using it.

I am too inexperienced to conclude what good testing is or which testing method is superior, but running the system to get reliable information, questioning that information to dig out insightful connections, and finding actual evidence to support one’s thinking all seem to be the right habits in testing.

Thanks to this subject, which let me feel the enjoyment of testing and provided a real business environment in which to gain practical experience. I will be happy to get involved in this field and learn more in the future.

I’ll be teaching this subject at UTS again in February next year. The course is open to students and practitioners alike.

Knowing the unknown

As a teenager, I remember being struck by the poignancy of the tomb of the unknown soldier. I’m not sure which tomb it was, in which city. I don’t recall there being one in Ireland, so perhaps it was in London. I remember feeling sad and perhaps a little understanding of the horror of war crept into me. For such a simple monument, the message was powerful.

For me, the tomb itself signified a lot more than missing soldiers. Somehow it symbolises those untold stories. Who was that person, and why did they die? Was it painful? Did they have a wife, children? It makes me think about history too, and how really it’s a story told by the victor. We rarely hear the story of the defeated. So many stories untold.

Roll on many, many years and I realise that in software development, particularly in agile, we have many untold stories that only see the light of day when we find bugs in software. We fail to hear the stories that stakeholders wanted to tell, but that fell aside because of time pressure. We fail to hear stories that stakeholders have not realised exist. We fail to hear stories because some stakeholders weren’t seen as needed. We fail to tell the story because we simply didn’t think it was important enough. It’s a wonder software works at all!

Testing helps us uncover and tell these untold stories. How? Each bug we find has a story behind it. It may be the story of why the bug came into being in the first place. Or the story may turn out to be unimportant. Often we not only uncover stories, we also end up adding meat to the stories that already exist.

I find this particularly true of agile. Take the simplistic approach to stories that start with “As a <insert oversimplified description of user here>, I want <some oversimplified goal> so that I can <insert oversimplified ambiguous reason>”. I do understand that the point of these stories is to initiate conversation, but it’s my experience that often the conversations are skipped and this skeleton-like sentence ends up becoming the story. And as Alister Scott pointed out, these stories would totally fail the bedtime story test performed by any 5 year old. Regardless, these skeleton-like stories then get converted into automated acceptance tests, which significantly influence the decision to release or not. Well, in my experience anyhow.
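
To make that concrete, here is a hypothetical sketch of how such a skeleton story often gets translated, almost word for word, into an automated acceptance test. The story, the fake application and the test names are all invented for illustration; a minimal pytest-style example:

```python
# Story: "As a user, I want to log in so that I can see my dashboard."
# A hypothetical acceptance test that mirrors the skeleton story almost
# word for word, and inherits all of its vagueness.

class FakeApp:
    """Stand-in for the system under test."""

    def login(self, user, password):
        return user == "user" and password == "secret"

    def dashboard_visible(self):
        return True

def test_user_can_log_in_and_see_dashboard():
    app = FakeApp()
    assert app.login("user", "secret")   # "I want to log in"
    assert app.dashboard_visible()       # "so that I can see my dashboard"

# Notice what the story (and so the test) never asks: which user? What
# belongs on the dashboard? What should happen when the login fails?
# The test passes, and the untold stories stay untold.
```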

As I understand it, one of the reasons for these simplistic stories is to achieve the goal of “dealing with the problem at hand”. By focusing only on what needs to be done now, we avoid over-engineering a product. Now believe me, after working on some large-scale telecommunications switches in the 80s and 90s, I totally appreciate this sentiment. However, the drive to simplicity has, I think, led to deficiencies in how we model our systems.

For example, we fail to recognise that, firstly, some systems cannot (and perhaps should not) be modelled from a user perspective. (I’m happy to be proven wrong on this, but my experience in testing R&D products and layer-3 protocols leads me to believe it is the case.) Also, by focusing only on the problem at hand we fail to appreciate the subtleties and impacts of the unknown unknowns.

I’d like to see software development attempting to redress this balance. When I went to Agile Australia lots of people were talking about systems thinking, and my first response was “Brilliant!”, but it seemed that the systems thinking was focused primarily on business systems, as opposed to products. Perhaps it’s me with too narrow a focus, but I’d really like to see more discussion on how we can become better at modelling software in agile. For one thing, let’s move away from telling stories ONLY from a user perspective. There are many ways to model a system, and a greater diversity of models may bring about a deeper appreciation and understanding of the problem we’re trying to solve.

And so to the unknown soldier. I’m glad no-one has tried to simplify his story into a sentence beginning with “As a soldier…”. The monument conjured up more questions than answers, questions about the unknown. Questions that can never be answered. He was the unknown soldier and it was fitting. I guess one question to ask is this: is the unknown story a fit for agile?

The title for this post is inspired by Colin Cherry’s talk at KWST3 about Johari windows. [thanks for the spelling check Srinivas]

Big Trak Robots at KWST3

I had a fantastic time at KWST3 this year. There were a lot of firsts for me. The first time I was in New Zealand, the first time I had been to KWST and the first time I met Brian Osman, plus a whole heap of other testers.

I don’t think I’ve been to such a learning event for a while. It was truly a place of inspiration, introspection and challenge. Brian and Colin have already written about their thoughts and I will add mine in a different post, but first I want to talk about Robots. Oliver asked me to bring over my Big Trak Robots after seeing them in action at CITCON in Sydney.

I split the group into two teams and set them a testing challenge, which involved running experiments to determine the function of one of the buttons. What ensued was an hour of fantastic learning, mostly for me, as I watched a group of highly skilled testers apply their minds to the exercise. The testers did some really interesting work. One team started by performing fairly complicated tests, which turned out to be a real blessing for them. They then made a model of the functionality and formed a hypothesis about what the solution might be.


Team Two took a different tack. They started with simple tests (a common focusing technique, and one I often use) but the tests didn’t offer enough information to the testers. In the debrief that followed, we discussed how, though simple tests are quick and easy, they sometimes fail to offer sufficient and meaningful information. Team Two recovered by defocusing, and both teams offered their solutions.


There are so many lessons to learn from this exercise, and depending on the group, different lessons are learned. This was one of the most enjoyable times I’ve run it, I think because the testers were skilled and could apply themselves with confidence. They were also very self-aware, so debriefing was more about letting the testers volunteer information than asking probing questions. Or maybe I’m getting better at handing over the learning to those really in charge.

Andrew Robbins and Richard Robinson are running the test lab at Tasting Let’s Test, and the Robots are going to have their moment in the spotlight there too! See you all there!

Software Testing at University of Technology Sydney

I’m lecturing at the University of Technology Sydney this year. I’ve revamped the course from last year, placing more of an emphasis on the tester’s skill. This is essential if we are going to improve the quality of testing within organisations, and it’s why large organisations are excited about and want to work with UTS to make this course happen. As far as I’m aware, this is the first course of its kind in Australia.

The course is part online, part tutorial. There is a big emphasis on learning by doing, with lots of opportunity to practise testing. There’s also an opportunity to work with companies, performing testing and reporting to real stakeholders, giving you a real opportunity to experience what software testing is about.

Here’s an overview of the course content:

Software Testing

This course teaches postgraduates the essential skills required in software testing. Learn how to test software in a way that offers stakeholders valuable and insightful information on the quality of the product by asking useful questions of people and the product you are testing.

To do that, you will learn the principles of context-driven testing, critical thinking skills, test strategy, test planning, collaboration, communication and documentation.

Subject Objectives

On completion of this subject, the student will have the potential to:

  • Understand how context drives how software testing is performed
  • Think critically in software testing
  • Create test strategies
  • Know how to model a product for the purposes of software testing
  • Become proficient in bug finding
  • Learn how to test effectively
  • Create test reports that provide relevant information to software testing stakeholders

On completion of this subject, the student will improve:

  • Their understanding of what software testing is and how it relates to other roles in product development
  • Their ability to think critically by asking useful questions.

Teaching and learning strategies

This course is based on the principles of experiential learning, with an emphasis on understanding through doing. Each topic will be taught through a practical exercise. Students will have the opportunity to work individually and in groups to complete assignments throughout the course. There will be a final group assessment where students will work in a company to create a test strategy, test software and develop a test report on the software tested.

This course is aimed at postgraduate students who wish to learn how to test software in an applied and thoughtful way. Some degree of technical understanding is beneficial but not essential.

Content

This subject will cover the following topics:
1) Critical Thinking in Software Testing
2) Test Strategy: Modelling
3) Test Strategy: Coverage
4) Exploratory & Scripted Testing
5) Oracles and Bug Finding
6) Test Reports and Bug Reporting
7) Testability: Tools in Software Testing
8) Advanced Modelling – State Machines
9) Communication with Stakeholders

Though the course is part of the postgraduate program, UTS has agreed to allow people to take the module as a course in its own right. If you’re interested in taking part in this course, go to the UTS website.

How I use modelling in software testing

My testing can look like ‘fly by the seat of my pants’ testing at times. There’s nothing I like more than to grab a new product and jump straight into testing. No namby-pamby reading of requirements for me! No, I want to get straight to the source of truth. I want to find out what the product *actually* does. (This approach may seem rash, but it’s not. It’s a considered decision; read on.)

But this type of testing only takes me so far. I get to the point where my testing starts to be limited. There seems to be nothing new about the information I’m getting, and my rate of learning about the product drops away sharply.

I take this as a sign to stop and take a step back. I’ve obtained as much information as I can from playing around, but now it’s time to get my hands really dirty. It’s time to start studying and researching the product. I start learning more about the product’s structure, such as the database, the architecture and the interfaces. I explore the intent of the product and find out who the users are. I start talking to product owners and developers to gain information about the product.

I then go back and test more, but this time my testing has taken a different turn.  With new information and ‘new eyes’ I’m looking at the product in a different way. I start learning new things again and the curve of learning and finding new information goes up again.

All this time I’ve been modelling and testing. In software testing I model a product to understand it. This might be a little different to the way architects model a building. They model to demonstrate to others what the final product will look like, though I can imagine creating a physical model helps to clarify thinking.

Modelling isn’t always explicit. We all have a mental model, a representation of the world in our heads, and testing makes heavy use of it (go to a Rapid Software Testing class taught by James Bach, Michael Bolton or Paul Holland to find out more on this). Sometimes I find it helpful to make the models explicit though. I do this to help me reason through the information. Ordering information by drawing it or writing it down seems to help me recognise gaps in my thinking.

When I jump in and test, I’m actually creating a model of what the product does. I prefer to model the product first, *before* reading requirements and the like, so that I have a good understanding of what the product really does. My understanding of the product is then unfettered by any biases and assumptions I might gain from speaking to people or reading requirements. Having a solid model of what the product does grounds my testing in the reality of what is. As a tester, I want to bring a different perspective to the table.

Once I’ve modelled what the product does, it’s time to find out more. I model how people perceive the product to be. I read the requirements and any other documentation I can find. I talk to people and create squiggly, messy diagrams on whiteboards that normally only I can read (and sometimes struggle to understand!).

All the time I’m modelling to understand the product better.

I’m still testing though. I don’t perform modelling in isolation from other cognitive activities. In fact, I test and model, model and test. This might appear counter-intuitive. After all, how can you test without knowing what people want? How will you know if there’s a problem without requirements?

That goes back to oracles (you know, those things that help you recognise problems). When I test, I purposefully use a diversity of oracles to help me recognise different problems. When I use the “Plunge In & Quit”* heuristic, I am testing. My oracles of choice are: previous testing experience, knowledge of similar products, world experience. You don’t have to have explicit requirements to recognise problems.
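
To illustrate the idea of oracle diversity, here is a minimal sketch in Python. Everything in it is invented for the example: the hand-rolled sort stands in for the feature under test, and the three oracle functions loosely mirror a comparable-product oracle, an internal-consistency oracle and a world-experience oracle. No single one of them is definitive; each can notice problems the others miss.

```python
# A hypothetical sketch of consulting several oracles about one result.

def product_sort(items):
    """Stand-in for the feature under test: a hand-rolled sort."""
    out = list(items)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out

def oracle_reference(items, result):
    # Comparable-product oracle: an independent implementation agrees.
    return result == sorted(items)

def oracle_consistency(items, result):
    # Internal-consistency oracle: the output is a permutation of the input.
    return sorted(result) == sorted(items)

def oracle_expectation(items, result):
    # World-experience oracle: each element is no bigger than its successor.
    return all(a <= b for a, b in zip(result, result[1:]))

data = [3, 1, 2]
result = product_sort(data)
for oracle in (oracle_reference, oracle_consistency, oracle_expectation):
    verdict = "no problem noticed" if oracle(data, result) else "possible problem"
    print(f"{oracle.__name__}: {verdict}")
```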

So I model and test, test and model. For example, as I’m creating models, I’m testing them. Think of a whiteboard scenario where you are formulating models with a developer. As the model is being created, it’s being tested. That’s how gaps get recognised. When I’m testing, I’m challenging my models to see if there’s a problem.

I’ve consolidated my thinking on models into this short video.

Here’s what I’ve learned so far:

1) Modelling is integral to testing, whether it is performed consciously or unconsciously.

2) You can have mental, formal and physical types of models.

3) Creating formal models can help you reason through a product.

4) Modelling a product by first “playing around” can help bias your testing in a good way.

5) Modelling and testing take place simultaneously.

6) Different models are a source of new questions to ask of the product.

I’m going to wrap up with George Box’s advice:

“all models are wrong, some models are useful”

Creating this model of models has proved useful to me, but it’s not complete. What are your ideas on modelling and software testing?

*The “Plunge In & Quit” heuristic was identified and named by James Bach. It’s an approach many experienced testers use to quickly learn about a product.


Courage in Exploratory Testing

Exploratory Testing takes software testing skill. It also requires the tester be courageous. Let me explain.

Exploratory Testing is an approach, not a technique. Exploratory Testing is simultaneous learning, design and execution. What we learn about a product helps dictate our tests, which in turn provide information that we can share with the people around us.

Exploratory Testing is tester-centric, meaning the tester is central to the testing taking place. The tester has the autonomy and the responsibility to make decisions about what to test, how to test, how much to test and when to stop. This may seem blatantly obvious to some, but it’s surprising how many test teams don’t work this way.

It’s all powerful stuff, generating an environment where the tester must constantly reflect upon the changing project and product environments, enabling the tester to be engaged and mindful as they test.

I honestly can’t think of a better approach to software testing.

But there is a catch. Exploratory Testing also demands great courage.

When you start to take responsibility for your testing, it’s not done in isolation. Testing requires interaction with many different people: developers, project managers, scrum masters, product owners, customer support and business people. We share information that’s not always good news. It comes in the form of bugs found and the possible impact of that information on the business. The consequences of the information we share often lead to delays in releasing, changes in workload, context switching and revised strategies.

In scripted testing, testers have artifacts they can measure and count, giving an illusion of certainty, but really this is smoke-and-mirrors reporting and generally offers little genuine information: “We have reached 78% test coverage, with a DDR of 85%.”

Exploratory Testing doesn’t have ‘Dutch courage’ to rely on. It requires us to have conversations about our information in potentially hostile environments. Sometimes we can feel like the lone fish swimming against the tide of the silent majority. It can be tough, and as testers we need to learn how to speak about our testing, how to tell a story (James Bach and Michael Bolton have both written on this).

Here’s a list of ways that have helped a quaking, knock-kneed tester like myself discover her backbone:

Speak to someone you trust about your concern. Vocalising a fear helps to make it tangible, and sometimes gives strength when you discover it’s a shared concern.

Be coached or mentored on how to speak about testing with confidence

Take small steps. Speak to people sympathetic to your cause, sound out ideas. See if other people can help.

Try not to lose faith, be persistent. Keep your eyes on the goal, even if sometimes you fail to speak out.

Emotions are your toolbox. Anger and frustration can be very useful emotions! Use your emotions to give you the courage to speak out. (I learned that at PSL this year. Thanks to Jerry, Johanna & Esther.)

Sometimes you need help. Be humble enough to know that sometimes change is beyond your capabilities. See if you can find help through the testing community, or see if you can bring someone in to help effect the change.

But mostly, its about practice. Courage breeds courage. Standing up to little things helps give you courage to stand up to greater things in the future. Be brave. Be strong.

What drives me most of all is that I want to be able to walk away from a situation with my head held high in the knowledge that I may not have changed the world, but I’ve had my say.

Now that’s a good day’s work.


Growing your own

So, I’m sitting here in my office on a beautiful Australian spring day. The sun is shining brightly, and the slightly fresh air sends wafts of scent from the spring flowers. It’s a good time to be alive, and it’s a good time to be thinking of growth and change.

Potatoes in Spring

True to form, I trotted down to the local garden centre and brought back a truckload of seeds and ideas on what I can do in the garden. I feel good this year: the potatoes are growing well, and I’ve managed to grow snow peas for the first time. Having invested a good amount of time in the garden, I feel content enough to sit in my office and allow myself to explore ideas on software testing and training.

And I’ve come up with a crazy, wonderful idea. For some time I’ve wanted to invest in online training in software testing. It’s a model that I think will suit me well. I live far, far away from the rest of the world, and as much as I enjoy meeting new testers from around the globe, continuous travel is not for me.

To date, I’ve struggled with the concept of online learning. When I learn, I like to get my hands gritty and experience the stuff I’m wrestling with. With my online coaching, I make sure I include a task of some sort, but that’s one-on-one. Is it possible to offer online experiential learning to many people?

And then I read about these guys at Venture Lab. The courses are highly experiential & require collaboration to succeed. I sniffed a model that I could possibly work with.

But I’m taking it one step further. I want to make the students the designers of the course. It’s the students who will work out what needs to be learned, how that will be achieved, and how they will know they’ve achieved it. There will be some external structure, perhaps in the form of exercises and some philosophies to abide by, but basically it’s the students who will dictate the content and the pace. In fact, a lot of these ideas come from the coaching model James Bach and I have worked on, which holds onto the concept that learning requires real desire from the student, and that to achieve this the student needs to dictate the learning (with the teacher offering space and direction to learn).

I see this course as being a permission-giver. We’re so drilled to think of learning as something we have to sign up for, as if it’s impossible for us to learn outside a course. In this way, I’m helping overcome that little hurdle so we can get into some real meaty learning.

I’m very excited about these ideas and what will come of them. I’m not sure where they will lead and I guess that’s half the fun! If you want to join me on the crazy, wacky journey, feel free to contact me on Skype at charretts, or else add a comment below.

I’ll be adding information on this model as it progresses.

Upcoming Software Testing Courses

I’m heading to CAST 2012 this year (July 16th – 18th), so I’ll be in the San Jose/San Francisco area if anyone is looking for an impromptu workshop on Exploratory Testing or Coaching Software Testers around that time.

I’ll also be near Albuquerque for PSL in August this year (August 24th – August 31st), so if anyone is interested in workshops in those general locales, contact me at annemarie@mavericktester.com or Skype me at charretts.