
Test Leadership is here to stay

In his article “What leaders do”, J.P. Kotter makes the following distinction between management and leadership:

Managers promote stability, leaders press for change

For example:

  • Management involves planning and budgeting. Leadership involves setting direction.
  • Management involves organizing and staffing. Leadership involves aligning people.
  • Management provides control and solves problems. Leadership provides motivation.

The left-hand side sounds a whole lot like what test managers have traditionally done.

But as organisations move to a more agile way of working, the tasks associated with these responsibilities are becoming redundant.

For example:

No more Test Team

Testers frequently sit within an agile team alongside UX people, a product manager, developers and operations, and often report to a senior person within that team. Large independent test teams no longer exist, so from a people-management perspective a test manager is no longer needed.

No more Big Test Strategy

Small agile teams working on small stories reduce the size and complexity of the work involved. Decisions about testing are typically made at team level, as autonomy and decision making are pushed down into the teams. There’s little need for a formal test strategy for work at a team level, and little need for a test manager to develop a large test plan or test strategy outlining the testing and resources required over an extended period of time.

No more Formal Test Process

You have 10 days to think about and complete testing for a story. If you’re lucky, five days of that may be actual testing. Who the hell is going to worry about writing test cases and a formal test report? The nearest you get to a formal test process is a Jira workflow. So bye bye formal test process. Bye bye test manager to enforce the test process.

No more Formal Testing Metrics

Metrics are becoming more team-focused. Teams are using measures such as MTTR (mean time to recover) or lead time to deployment. Interesting ideas, and I’m glad to see a shift in perceptions of quality. Again though, we don’t specifically need a test manager to report on these metrics.
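
To make that concrete, here is a minimal sketch of how a team might calculate MTTR from its own incident records. The incident data and layout below are assumptions for illustration only, not taken from any particular team or tool; lead time to deployment can be computed the same way, measuring from commit to release instead.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (detected_at, recovered_at) per production incident.
incidents = [
    (datetime(2017, 3, 1, 9, 0),    datetime(2017, 3, 1, 9, 45)),
    (datetime(2017, 3, 7, 14, 30),  datetime(2017, 3, 7, 16, 0)),
    (datetime(2017, 3, 19, 22, 15), datetime(2017, 3, 20, 0, 15)),
]

def mttr(incidents):
    """Mean time to recover: average of (recovered_at - detected_at)."""
    durations = [recovered - detected for detected, recovered in incidents]
    return sum(durations, timedelta()) / len(durations)

print(f"MTTR: {mttr(incidents)}")  # 1:25:00 for the sample data above
```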

No more Gatekeeping

Testing has traditionally (and unfairly) been cast as the gatekeeper of quality. The business sees the testing department as a method of control. I suspect the entire testing community is heaving a quiet sigh of relief at the demise of this one! Ensuring quality is a fool’s game, because there is no way you can win. It’s akin to forcing someone to sign a pre-nup to ensure love. Good luck with that.

No more Test Managers then?

Yes and no. The role of test manager as we know it is disappearing. There’s evidence of the demise of these roles in many of the large enterprise organisations I work with.

But while there’s less need for test managers, that doesn’t mean the thorny challenge of testing has disappeared. And while deploying in bite-sized pieces reduces risk, it doesn’t remove it entirely. Often the risk is displaced to somewhere we are not used to investigating.

Something AWS discovered, much to their dismay.


Some of the testing challenges many companies are facing:

  1. How do we test large-scale systems and architectures (think microservices)?
  2. How can testing be an activity that everyone performs?
  3. How can we develop the testing capability within teams?
  4. How can we better diversify our thinking in our testing?
  5. How can we think about testability as part of systems design and architecture?
  6. What part do testers and testing play within our teams?
  7. What skillsets do I need testers to have? Is it just one type of skillset?
  8. How can you test in production?
  9. How can we better understand and provide information on quality?
  10. How can we better respond to change, whether it is in or out of our control?


Let me explain that final point.

Software testing doesn’t have a great reputation for being flexible and adaptive to change. The way we test is often deterministic: think gated processes and scripted test cases. And test automation is no more adaptable.

Like it or not, change happens. Sure, the extent and nature of that change may depend on the industry and technology you work with. What becomes important, though, is how we handle that change. And equipping our teams with the ability to adapt to change becomes the task of a test leader.

We need confident, self-possessed software engineers, equipped with the ability to think critically through whatever challenge crosses their path. We need to help them work together to solve whatever testing problem comes their way. This requires a positive and safe environment where teams become ‘test-infected’ and have a culture of continuously thinking about improving their testing.

Test Leaders provide this breathing space for a testing culture to grow. They achieve this by placing an emphasis on vision, strategy, motivation and alignment. Examples of this are coaching, training, knowledge sharing and making information visible to the rest of the organisation.

A testing culture that provides this space in turn generates test leaders. That’s important, because every tester is to some degree an advocate for quality, and that in itself requires a degree of test leadership.

The skill set required in test leadership is different from that of test management. It’s more aligned to coaching, connecting people and driving a shared vision of quality for your company. It requires revisiting why we test, what software testing actually is, and how we can continue to deliver value in a different context.

But I don’t see uncertainty going away; in fact, I only see it increasing. And so test leadership isn’t going away either.

Yu Han on what Software Testing means to me

Last year I had the honour of teaching postgraduate students software testing at the University of Technology Sydney. I asked the students if they would like to write a post about testing for my blog. Yu Han did, and here it is.

The Enterprise Software Testing subject showed me the exciting and interesting aspects of testing. The lectures and the project at Suncorp were unforgettable experiences.

During the classes I often felt that I was acting like a child, playing different kinds of games and drawing pictures with colorful pens. But at the end there would always be serious discussion, analysis and reporting that snapped me back to reality.

Those reflections made me realise that my actions and thoughts were more complicated than I had realised, which encouraged me to keep reviewing what I did and exploring how I thought. By doing that, I better understood the purpose behind those activities, and I began to sense what good testing could be.

I am aware that my mind prefers to use memory or imagination to fill the gap between reality and the information I have received. By linking their similarities, I can better understand the information. But as soon as I reach this stage, my thinking can stop going deeper. I may feel satisfied with a simple explanation, or be distracted by other information that draws my attention to something else. Then I may lose the chance to find the deeper meaning of the information, or even misunderstand it.

Now I am interested in paying more attention to my flow of thinking. Questioning the ideas that appear can be a way to slow it down, which may clarify my understanding or reveal potential obstacles hiding behind them. It can also bring back ideas or considerations that were once raised and then dropped or changed in the rush of thinking. Those ideas and considerations may prove questionable as soon as I try to challenge them.

It can turn out that there are missing facts behind an idea that feels reasonable but is not actually practical. Once I try to gather evidence to support such ideas, I may realise they are incorrect or unproven. I may need to confirm them before going further, especially when the goal depends on them. In the Big-Track exercise, for example, our group needed to test our ideas about what the other buttons did before we could finally test how the targeted button worked. We questioned our ideas but found it hard to move forward until we knew which ones were correct. We came up with many hypotheses, but failed to prove them all at once in practice. After we had built some understanding, we realised we should confirm those assumptions before continuing, which led us to a debugging strategy for our tests. It appears that questioning information not only encourages us to seek the truth but also inspires us to develop our ideas. In this sense, critical thinking can help us seek information wisely and reorganise it into knowledge for developing ideas.

This also reminds me of the concept of Exploratory Testing. In the previous example, we learned the mechanism, built the tests and proved our ideas all by interacting with and exploring the actual system. Exploratory Testing seems a natural way to build and run tests when we also need to learn about the system: by understanding how it works, we can work out how to test it and find bugs. However, it is difficult to tell how much time I will need to finish the task.

Thanks to the experience at Suncorp, I also saw the efficacy of Scripted Testing. Our team performed the testing itself in only a few hours. We didn’t need to worry about anything else, such as knowing other parts of the system or even the full depth of the targeted section. It did take us a couple of weeks to learn the background information and write the strategy and test scripts, but we would save most of that time if we now went on to test another section. It makes me feel that with good preparation and a clear testing goal, the job can be done simply by writing and following test scripts. But it is also true that I find it difficult to understand a system by reading documents, attending meetings or even watching demonstrations. It is easier for me to concentrate on and memorise such information when I can apply it, which means learning the system by actually using it.

I am too inexperienced to conclude what good testing is, or which testing method is superior, but running the system to get reliable information, questioning that information to dig out insightful connections, and finding actual evidence to support one’s thinking all seem to be the right habits in testing.

Thanks for this subject, which let me feel the enjoyment of testing and provided a real business environment in which to gain practical experience. I will be happy to get involved in this field and learn more in the future.

I’ll be teaching this subject at UTS again in February next year. The course is open to students and practitioners alike.

Courage in Exploratory Testing

Exploratory Testing takes software testing skill. It also requires the tester to be courageous. Let me explain.

Exploratory testing is an approach, not a technique. It is simultaneous learning, test design and test execution. What we learn about a product helps shape our tests, which in turn give us information we can share with the people around us.

Exploratory Testing is tester-centric, meaning the tester is central to the testing taking place. The tester has the autonomy and the responsibility to decide what to test, how to test, how much to test and when to stop. This may seem blatantly obvious to some, but it’s surprising how many test teams don’t work this way.

It’s all powerful stuff, generating an environment where the tester must constantly reflect on the changing project and product environments, enabling the tester to be engaged and mindful as they test.

I honestly can’t think of a better approach to software testing.

But there is a catch. Exploratory Testing also demands great courage.

When you start to take responsibility for your testing, it’s not done in isolation. Testing requires interaction with many different people, such as developers, project managers, scrum masters, product owners, customer support and business people. We share information that isn’t always good news, in the form of bugs found and their possible impact on the business. The consequences of the information we share often include delays in releasing, changes in workload, context switching and revised strategies.

In scripted testing, testers have artifacts they can measure and count, giving an illusion of certainty. But really this is smoke-and-mirrors reporting, and it generally offers little genuine information: “We have reached 78% test coverage, with a DDR of 85%.”
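
For what it’s worth, figures like these are just ratios over counted artifacts. Assuming the quoted coverage means executed scripts over planned scripts, and DDR means defect detection rate (defects found in testing as a share of all known defects), the arithmetic behind such a report is roughly this sketch:

```python
# Hypothetical counts, purely to illustrate how such report figures are produced.
executed_tests, planned_tests = 78, 100
coverage = executed_tests / planned_tests * 100                    # "78% test coverage"

found_in_test, found_after_release = 85, 15
ddr = found_in_test / (found_in_test + found_after_release) * 100  # "DDR of 85%"

print(f"test coverage: {coverage:.0f}%, DDR: {ddr:.0f}%")
```

Which is rather the point: the counts say nothing about what those scripts actually exercised, or about the defects nobody has found yet.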

Exploratory Testing doesn’t have ‘Dutch courage’ to rely on. It requires us to have conversations about our information in potentially hostile environments. Sometimes we can feel like the lone fish swimming against the tide of the silent majority. It can be tough, and as testers we need to learn how to speak about our testing and how to tell a story (James Bach and Michael Bolton have both written on this).

Here’s a list of ways that have helped a quaking, knock-kneed tester like myself discover her backbone:

Speak to someone you trust about your concern. Vocalising a fear helps to make it tangible, and sometimes gives you strength when you discover it’s a shared concern.

Be coached or mentored on how to speak about testing with confidence.

Take small steps. Speak to people sympathetic to your cause, sound out ideas. See if other people can help.

Try not to lose faith, be persistent. Keep your eyes on the goal, even if sometimes you fail to speak out.

Emotions are your toolbox. Anger and frustration can be very useful emotions! Use your emotion to give you courage to speak out. (I learned that at PSL this year… thanks to Jerry, Johanna & Esther.)

Sometimes you need help. Be humble enough to know that sometimes change is beyond your capabilities. See if you can find help through the testing community, or see if you can bring someone in to help effect the change.

But mostly, it’s about practice. Courage breeds courage. Standing up to little things helps give you courage to stand up to greater things in the future. Be brave. Be strong.

What drives me most of all is that I want to be able to walk away from a situation with my head held high in the knowledge that I may not have changed the world, but I’ve had my say.

Now that’s a good day’s work.


Speed Kills

Some of my testers have become embedded in a newly formed agile team. It’s been a roller coaster ride for sure. Lots of fun, thrills and a few scary points in time where I thought for certain we were not going to make it, or we were heading in the wrong direction.

From a testing perspective, we were quietly confident. Our testing is exploratory by nature, and so it lends itself easily to being flexible and adaptable.

We’re testing before the acceptance criteria are developed, by attending the workshop and asking questions about the upcoming feature, learning why it’s needed and what problem it’s trying to solve.

And when it comes to testing the feature, we all jump head first into the deep end, swimming with long powerful strokes through the water of doubt and complexity and only surfacing for air to congratulate and high five each other on another completed testing charter.

It’s been a wet and wild ride, but not without some uncomfortable moments.

While we’re revelling in the warm waters of upfront information and involvement, we’ve also noticed the cooler waters of reduced timeframes. Shorter iterations have placed a lot of pressure on the testers. There’s a feeling that we’re unable to pause and reflect during our testing, and the pressure to be done within the iteration brings the temptation of minimalist testing.

We had to change something to make sure we were testing well enough.

So, we slowed down.

We made sure we spent time to think critically about our testing, coming up with test ideas, understanding what done meant, and most importantly sharing these ideas with each other (including developers).

Now we test like this:

We still jump into the deep end, splash out, learn some stuff about the product but before we continue with some serious swimming we stop and confer. We reflect on our learning so far, we discuss our ideas and how we know we are done. Only then do we continue with our swimming, our strokes confident and strong.

It’s been a revelation for some testers who are new to exploratory testing. They thought the objective was to test as fast as you can without stopping for a breath.

We still have the same time pressure on us, but that stop, that moment of pause and reflection, has been enough to give us confidence in our approach and confidence that we are delivering the best testing that we can.


Crossing the bridge over no-man’s land

Indiana Jones, in the Temple of Doom, has a “moment of indecision” when, halfway across a bridge, he realises he’s been cornered by Mola Ram (the baddy) and his henchmen.

He looks back and there’s a horde of lusty savages baying for his blood; no luck there. He looks ahead, only to find an equal challenge. What is poor Indy to do?

Anyone who has introduced change into a corporate environment will empathize with his situation. You know the decision to break away from the past is the right one, you run eagerly and embrace change, only to find, halfway through the journey, that your path to success is blocked.

I’ve been working with my test team for a while now, moving from a process-driven approach to a more exploratory one.

It’s not been without its challenges. Some concepts I’ve introduced have been welcomed warmly, but the reception to others has been a little icy. In particular, I’ve tried to move the team away from a test case management system. This was met with real concern and quite a lot of resistance.

This troubled me because, while I understood their concerns, I knew the system was limiting the generation of new testing ideas.

But how could I overcome this resistance? And really was it worth it? Perhaps the changes I had already made would be enough? The company was already more than impressed with the changes I had made so far.

I felt like Indy at the foot of the rope bridge: how the hell was I going to solve this one?

So I stood at my crossroads and dithered. Oh God, did I dither. I ummed and ahhed and pondered what to do. Worse, I knew my indecision was making the situation worse, and that the more I dithered, the harder it would be to rid ourselves of the dust bag of tired and well-worn ideas.

Indy, at this point, decides his only move is to cut the bridge, leaving everyone hanging on for their lives.

Fortunately, unlike Indy, I had a reliable and trustworthy sidekick. Together we set up a task force within the team to attack the problem. After some discussion we decided our approach needed four cornerstones. They were:

1) Creativity.

However we tested, our approach needed to foster and encourage creativity. With creativity come new ideas, new models, new tests and, in turn, new bugs discovered.

We’re covering this one with a number of approaches. One is to improve tester skill through practice and coaching. I’ve also created a folder of ideas for people to draw upon to help trigger new ideas.

2) Visibility

We wanted to be able to provide reporting on any testing we do. The reporting has to be simple, yet detailed enough to ensure that our stakeholders understand what we have tested and why.

We have our trusty whiteboard, which mostly hits the spot. We also need to be able to pull up our actual testing, including results, in an easy-to-manage way. We’re looking into BBExplorer to handle that.

We will also track any essential test results on wiki in the form of a test report at the end of each iteration.

3) Coverage

We wanted to have some way of ensuring that key functionality/key features are always tested.

We most likely will rely on our test case management system for this, but we’re cleaning out all the dead wood and making the tests lighter and less authoritative.

4) Knowledge

We wanted to create a knowledge base. Our system is complex and it requires in-depth knowledge to test some areas. We want to store that information and knowledge. We also have a serious amount of test data we want everyone to be able to access, modify and improve.

We’ll use our internal wiki for this.

What I really like about what’s happened here is that the team came up with a solution to solve the problem. It’s a team decision which has got to mean easier implementation.

I think a couple of really powerful things have come out of this. I’m listing them here:

1) Change can be scary. Not changing is worse. Get on with it.

2) Use people around you to help bring about change.

3) Never lose sight of your goal. This reminds me of Scott Barber’s email signature: “If you can see it in your mind…you will find it in your life.”

I feel good. I hope my team does too. We faced a challenge. We examined it, questioned it and overcame it and we’ve all come out sharper, enlightened and positive about the changes ahead.

Now that’s what Exploratory Testing is all about.


On conferences and insomnia

For me, the sign of a good conference is insomnia the night after the conference.

It’s as if my brain is unable to let go of the new ideas and discussions I’ve had with other testers, ideas that haven’t had an opportunity to be digested and reflected upon. Usually around 2am after the close of a good conference, my eyes snap open, my brain alive and alert, ready for action.

Ideas and discussions from the day merge and meld into a boiling cauldron of fizzling synapses and bubbling endorphins and, as much as I try to breathe deeply, relax and let it all go, I know deep down it’s all pointless.

I’m going to have to get up and write down my thoughts and ideas.

STANZ Melbourne is one such conference, and it’s given me a double dose of insomnia, resulting in frenetic writing at 12, 2 and 4 am until finally my exhausted brain became compliant and allowed my poor weary body to sleep.

STANZ is sponsored and hosted by SOFTED. These folks at SOFTED really understand and ‘get’ software testing and it shows.

As well as hosting this conference and bringing in some pretty impressive speakers (I urge anyone who has the opportunity to hear Goranka Bjedov speak to do so), they also sponsor the Sydney Testers Meetup by supplying thirsty and hungry testers with drinks and nibbles at networking events.

What’s more, they host peer workshops. The last one was held in New Zealand and was such a huge success that next year they hope to host one in Sydney.

Watch this space.

For me, STANZ gave me two core learnings.

The first was Goranka’s talk about the future of quality. I found it really insightful and it gave me much food for thought: the concept that quality is dead, and that as testers we need to reflect on how this will impact us. I’m not sure yet what this means for me (I need some more 4 am thinking on that one!), but somewhere deep down it struck a real chord.

The second reinforced to me the power of sharing problems and getting ideas from your network of testers. In 30 minutes, Trish Khoo had given me a plethora of new ideas and suggestions to take away. Many thanks, Trish.

Now off to order a double shot espresso….


An affidavit of sorts

The last three months I’ve worked specifically with the goal of my testers taking responsibility for their work.

I’m a strong believer that each person is responsible for their own life. I try to live by that and I expect others to do the same. It’s one of the reasons why I endorse Exploratory Testing and believe it is so powerful. The tester becomes central to the testing. The tester is the decision maker, responsible for their decisions (good or bad), and must be willing to stand by their choices and defend them where necessary.

It’s a powerful concept, and I think somewhat alien to the way we are brought up, and perhaps to the way we bring up our own children. Instead we are protected, or we try to protect, wanting to prevent harm to those we are close to. Actually, I think it’s impossible to totally protect people; it is much better to teach survival skills.

I often hear people say, “A great test manager removes obstacles so that their team can test”, and it’s true. A good test manager will do that. However, I think a good test manager will also allow their testers to fail: allow them to make their own decisions, learn to stand by those decisions and then defend them.

If we don’t do that, are we really helping testers to learn and grow?  I wonder.

I’ve had the luxury of procrastination over the past two days. Yesterday, I spent a glorious few hours at the Seattle Art Gallery. It was the perfect antidote to CAST 2011, which was exceptional yet mentally exhausting.

I also missed my flight back to Sydney, which meant I had a second day of whiling away hours at Seattle airport.

Our brains are so fascinating, aren’t they? Just as I was about to board the flight, a burst of insight and determination hit me. I guess all that procrastination culminated in one powerful thought.

It’s this.

As we learn and grow as testers and human beings, we constantly need to revisit our beliefs, values and motivations. I realised mine needed a revisit. (Incidentally, my tutorial at CAST was on this topic, another example of “if you want to learn something, teach it!”)

I needed to rework my ideas, goals and what was important to me. I needed to put myself in the centre of my testing career. I’m responsible for what I do and what I learn. Me. No-one else. Not mentors, not other testers, not thought leaders. Little ole me.

A few testers at CAST really inspired me to be like this. Unfortunately, I don’t know their names or else I would cite them here. But they’re not thought leaders or mentors, they’re context driven testers with a mind of their own. I like that.

I’ve always been able to think for myself, but sometimes you just have to up the ante, you know?

I don’t know what this will mean for me. I’m not sure where it will lead. What I do know is that from this point on, I will continue to own my decisions and I will stand by them, just as I encourage my own testers to do.

I guess that’s it. It’s just something I wanted to share with you all.

QED

Keeping the lights on when your battery is running low

Most of the time when I test, I’m thoroughly engaged, involved and enjoying the moment. Then there are those “other” times. Typically for me they happen in the afternoon. My sugar levels get low, I get tired, and I stop testing well.

But darn it, there’s so much testing to do! How can I and my team continue to test effectively?

The biggest trap to fall into is to continue testing without changing something. Testing’s too important to be performed without full use of mental faculties.

It’s up to me to ensure I keep alert.

Here’s my list:

1) Eat!
Sugar levels are low: have you had lunch??? No… Okay, this is basic, but don’t skip lunch. Your brain needs nourishment for all that cognitive work it has to do in the afternoon.

2) Take a break.
It sounds counter-intuitive, but taking a break and doing something completely different gives my testing brain a rest. I like to do something physical, like going for a quick walk.

3) Swap a feature
Ask a tester to swap areas with you for a while. A mental change of scenery if you will.

4) Make it enjoyable
Find some way to make testing a bit fun for the team. Perhaps use the Trish Khoo method below.

5) Use the Trish Khoo method
At the start of a testing iteration, she asks the question “What can I do differently?” or “How can I make testing more fun?” What a great approach!

There are other ways to take breaks too.

6) Talk to a tester

I have to watch out for this one, as I need to take into account that other testers may be “in the zone”, but if someone is free it’s great to have a chat.

7) Talk to your peers

Again, timing is the key here. I find this one very invigorating; it also helps me take a step back from my immediate focus and see the “big picture”.

8) Tweet, Blog, Share

I’m finding this one a bit hard at the moment, as the company I’m at doesn’t allow Skype or Twitter. But when I can, I use my phone to keep in touch with testers outside where I’m working.

For me it’s a mixture of making testing enjoyable, taking breaks and mixing things up.

What do you do to keep alert while testing?



The antipodes are calling

I’m heading to Sydney, Australia on 22nd January 2011. I will be looking for test consulting work, preferably through my Australian consulting company, Testing Times.

What do I offer?

I shed light on testing problems that are often obscured, or caused, by a testing process. I bring a fresh perspective that is often hard to gain from inside an organisation.

I do this by thinking outside the square, looking for solutions outside traditional process orientated ideas.

So, if you have a problem that you haven’t yet been able to solve using traditional testing approaches, or you want a testing approach based on excellence and speed*, why not contact me?

I also deliver one day training workshops on testing. These workshops focus on increasing tester skill.

I offer a context-driven approach to testing.

The context-driven principles are:

1.    The value of any practice depends on its context.

2.    There are good practices in context, but there are no best practices.

3.    People, working together, are the most important part of any project’s context.

4.    Projects unfold over time in ways that are often not predictable.

5.    The product is a solution. If the problem isn’t solved, the product doesn’t work.

6.    Good software testing is a challenging intellectual process.

7.    Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

What this means for you is that the advice I offer is designed to ensure that you, the customer, get the best value out of your testing.

If you like that idea then contact me at amcharrett @ testingtimes.com.au

An interesting fact about the word antipodes: “The antipodes of any place on Earth is the point on the Earth’s surface which is diametrically opposite to it.” – Wikipedia. So, technically, that would mean somewhere in the Pacific Ocean. Though I suspect not a lot of testing is done there!

* I use the Rapid Software Testing approach developed and taught by James Bach & Michael Bolton.