Why waterfall kicks ass

I read a blog post about why waterfall is NEVER the right approach, and I feel compelled to respond to what’s touted as the waterfall mindset. Here’s a copy of the paragraph, but you can read the whole post at the link above to get a better sense of the context.

I actually don’t believe adopting waterfall as an approach is ever a good choice.  Waterfall comes with the following mindset:

  • we don’t need feedback between the steps of requirements, analysis, design, code, test
  • we can hand work off
  • big batches are ok since they enable us to be more efficient
  • specialized skills working only on their specialty is good
  • we can understand the work to be done before we do it
  • written requirements can specify what we need

Putting aside, for now, the use of absolutes, let’s address this waterfall mindset:

1) we don’t need feedback between the steps of requirements, analysis, design, code, test

I’ve worked in both waterfall and agile over the years. In those ‘bad old days’, when supposedly no one appreciated collaboration, we used to review requirements extensively. This meant that testers offered valuable input into requirements before any code was developed. Since the invention of ‘agile’, almost everyone has discovered the three amigos, but honestly, this is not an agile concept; it existed long before agile was even thought of.

2) we can hand work off

Honestly, I don’t understand this sentence. I’m serious. I can think of many reasons why handing work to others is a good thing. For instance, if I’ve got too much work and I’m in danger of becoming a bottleneck, I hand some of it over to someone else. If that’s a waterfall mindset, it’s one I like to have.

3) big batches are ok since they enable us to be more efficient

What do you mean by efficient here? Does it mean quicker, better quality, less waste? Efficient in what: design, writing code, testing, support? If I wish to carry 20 oranges from point A to point B, is it more efficient to carry them one at a time, or to get a bag and carry them all together? Try delivering a quarter of an integrated circuit to a customer and asking for feedback. Sometimes delivering something in one batch IS the more efficient way to proceed.

In waterfall, we did have teams, and work was allocated into smaller, isolated tasks. It was rare that one person developed the whole product. In fact, when I worked on telecommunication systems in the nineties, the concept of frameworks was being introduced, segregating data from its transportation method and allowing people to work on separate parts of a system in parallel with each other.

4) specialized skills working only on their specialty is good

When I started working at Nortel Networks in 1994 it was all waterfall, except we didn’t call it waterfall then; we called it software development. Nortel Networks had a policy that encouraged developers and testers to spend time working in each other’s ‘specialisation’ area. For six months I became a developer working to deliver software. I was taught C++ and object-oriented design principles, so I’m uncertain why you think this is a waterfall mindset.

5) we can understand the work to be done before we do it

Why is being able to understand work before you begin considered a ‘bad’ idea? I think this phrase is too ambiguous to discuss with any merit.

6) written requirements can specify what we need

Is it the word ‘requirement’, or is it the fact that it’s written, that makes this a ‘bad’ mindset? Yes, we do have written requirements in waterfall. We also have written user stories in agile, so what is the point? Just because stories are written in Confluence doesn’t make them any less written.

In the bad old waterfall days, I facilitated workshops with the business and IT teams to determine and understand risk. Lots of verbal collaboration, lots of whiteboard discussions. I’ve worked in ‘agile’ teams where little communication takes place: no stand-ups, nothing. In fact, one developer spent a week working on the wrong story without realising it.

I’ve worked on some fantastic waterfall projects that blew the socks off some really crappy agile teams I’ve worked with recently. In those waterfall projects, I worked in harmony with great developers and testers, working closely together. We were afforded sufficient time to examine the system as a whole instead of just its parts, something that these days appears to be a bit of a luxury. I’ve worked in environments developing hardware, firmware and software where upfront design and ‘big batch’ systems thinking helped us understand that we were not merely developing code; we were attempting to solve a problem.

It’s easy to conflate poor software development practices with waterfall, just as it’s easy to conflate an agile approach with ‘good’ practice. To me the biggest changes since those days are technological ones, which allow us to develop, integrate and compile quickly and relatively cheaply. It’s the technology that has allowed us to work in the small tasks we are familiar with today. In the early nineties the concept of layering and isolating according to purpose was coming into play, but we simply didn’t have the sophisticated systems that would let us develop the way we wanted to.

A second important change was the conception of the agile manifesto. To me the agile manifesto is a stroke of genius: people who had the courage to espouse an ideology that placed people over process, tools or artefacts. For many, this has changed how we think about developing software.

But let’s not forget that the agile manifesto was developed by people who worked in what you call a waterfall environment. It seems to me these people had quite a different mindset from the one suggested above: they developed the manifesto with ideas of collaboration and an emphasis on the ‘humanness’ of developing software. Did those ideas come about as a result of what was done badly in waterfall, or because of what they saw was working well?

I suspect it was a little of both.

Don’t get me wrong. Lots of mistakes were made in the pre-agile days. There was an idea of segregation between developer and tester in an attempt to avoid bias from developers. I’m glad we’ve gotten over that idea. But many of the mistakes we made in those days we’re still making now. Most software teams fail to understand testing and attempt to measure it using means that appear to have no direct correlation to quality. Many teams use measurement to lock down estimates rather than as a source of information for change. Go to an agile conference and count the number of talks on process and methodologies and frameworks. Look at today’s obsession with continuous delivery as a process. What happened to the people, folks?

It’s easy to understand why. The agile manifesto is bloody hard to implement. It’s much easier to point at a tool or a process and say “that’s what we do”, because it’s explicit and easy to see. Programming and testing are human activities and are much harder to identify and talk about. It’s hard to describe and transfer these skills. The best way I know of is to actually perform the tasks.

We need a little more humility in acknowledging the great shoulders that agile stands on. It’s simplistic to identify, in hindsight, a ‘waterfall’ mindset. Such a thing did not exist. Instead, let’s view those who came before us as the agile manifesto encourages us to: as people attempting to deliver quality software, just as we are today.

That coverage problem

Thanks mostly to the BBST classes the AST runs, I’ve come to understand and appreciate the impossibility of complete testing and the challenges surrounding ‘complete test coverage’.

I find “100% test coverage” a pretty meaningless phrase. Imagine trying to calculate the amount of water in a swimming pool by multiplying only the length and width of the pool. Ha! We’d snort and sniff at the stupidity of the idea. But when test managers, project managers and testers ask for “complete test coverage”, that is in essence what they are asking testers to do.

In fact, getting to grips with complete testing is even harder than the pool problem, because at least with a pool, once you know the width, length and depth, the volume can be calculated.

In testing, though, the number of models we use to design our tests is limited only by our imagination. Trying to measure test coverage is a bit like trying to cover a black hole with a tea towel and saying you’ve measured it.

Trying to test like this in a short space of time is really stressful and it seems the agile solution is to automate regression tests. But really, is this the best solution we can come up with?

It seems to me this desire to cover as much functionality as possible ends up in a game of Whac-A-Mole, with testers frantically hitting features as fast as possible in order to be able to say it’s done.

Well, I’m trying a thought experiment at the company I’m consulting with. I’m challenging everyone to drop the idea of coverage and instead focus on some meaningful and valuable testing.

I’m doing this because it seems to me we testers are far too hung up on this coverage problem, and we bias our testing trying to solve it. Yes, this may mean that we fail to test all the functionality – quelle horreur! But get this: when do we *ever* test all the functionality?

Dropping the desire to “test everything” is very liberating. We no longer have to fret about trying to test all the functionality in a week (we do weekly releases with multiple hot-fix releases). Instead, it’s freed our minds up to reflect and ponder on what is the most beneficial testing to perform, given we only have a few days to test. It’s also freed up our toolsmith, allowing them to spend time solving some testing problems through testability instead of frantically creating regression tests.
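
To give a flavour of what I mean by solving problems through testability, here’s a rough, purely illustrative sketch in Python (not our actual product – the names and mechanism are invented): a state-seeding seam that is only switched on in test environments, so a tester can jump straight to an interesting state instead of leaning on a pile of regression scripts.

```python
# Illustrative sketch only: a "testability seam" that lets testers seed
# application state directly, gated behind an explicit flag so it can
# never be used in a real production environment.
import os


class OrderService:
    def __init__(self):
        self.orders = {}

    def place_order(self, order_id, items):
        # Normal production path: validation, persistence, etc. (elided).
        self.orders[order_id] = {"items": items, "status": "placed"}

    def seed_for_testing(self, fixture):
        # Only available when the environment explicitly allows it.
        if os.environ.get("ENABLE_TEST_SEAMS") != "1":
            raise RuntimeError("test seams are disabled in this environment")
        self.orders.update(fixture)


if __name__ == "__main__":
    os.environ["ENABLE_TEST_SEAMS"] = "1"
    service = OrderService()
    # Jump straight to an interesting state: a partially shipped order.
    service.seed_for_testing(
        {"A-42": {"items": ["widget"], "status": "partially_shipped"}}
    )
    print(service.orders)
```

The point isn’t the code itself; it’s that an hour spent building a seam like this can save a tester days of repetitive setup.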

I’m fully expecting some bugs will get released to production, but you know what? That’s nothing new! We’re finding bugs in production all the time; they’re called customer support issues.

Time will tell if this approach proves to be beneficial or not. It’s a little scary, dropping the regression-test safety net, but when a safety net starts obstructing your work, it’s time to question its validity.

Speed Kills

Some of my testers have become embedded in a newly formed agile team. It’s been a roller coaster ride, for sure. Lots of fun, thrills and a few scary points in time where I thought for certain we were not going to make it, or were heading in the wrong direction.

From a testing perspective, we were quietly confident. Our testing is exploratory by nature, and so it lends itself easily to being flexible and adaptable.

We’re testing before the acceptance criteria are developed, by attending the workshop and asking questions about the upcoming feature, learning why it’s needed and what problem it’s trying to solve.

And when it comes to testing the feature, we all jump head first into the deep end, swimming with long, powerful strokes through the water of doubt and complexity, only surfacing for air to congratulate and high-five each other on another completed testing charter.

It’s been a wet, wild ride, but not without some uncomfortable moments.

While we’re revelling in the warm waters of upfront information and involvement, we’ve also noticed the cooler waters of reduced timeframes. Shorter iterations have placed a lot of pressure on the testers. There’s a feeling that we’re unable to pause and reflect during our testing; the pressure to be done within the iteration leads to the temptation of minimalist testing.

We had to change something to make sure we were testing well enough.

So, we slowed down.

We made sure we spent time to think critically about our testing, coming up with test ideas, understanding what done meant, and most importantly sharing these ideas with each other (including developers).

Now we test like this:

We still jump into the deep end, splash out, learn some stuff about the product but before we continue with some serious swimming we stop and confer. We reflect on our learning so far, we discuss our ideas and how we know we are done. Only then do we continue with our swimming, our strokes confident and strong.

It’s been a revelation for some testers who are new to exploratory testing. They thought the objective was to test as fast as you can without stopping for breath.

We still have the same time pressure on us, but that stop, that moment of pause and reflection, has been sufficient to give us confidence in our approach and confidence that we are delivering the best testing we can.


Is automation the new documentation?

Have automation tools become the new templates in software testing? The cornerstone on which all our testing hangs? It’s starting to look that way.

Think about it.

The traditional method of testing was to use a software testing process identifiable by test templates. These templates were (and in many cases still are) the focal point of all testing.

Then some new thinkers introduced concepts such as Exploratory Testing and Context-Driven Testing, placing the tester central to the testing exercise. It rocked the testing world.

Then along came agile with its focus on automation. This has led to a heavy emphasis on automation instead of the individual behind the tool.

Traditionally, the majority of testing time was spent writing heavy, onerous documents; now we write automated test scripts instead. What has gone wrong?

I was surprised at the emphasis on tools at an agile testing talk I went to, just as I am surprised at the interest in automation tools among testers generally.

My feelings about automation are best described by this post by Kevin Pang where he writes:

The difference between a good developer and a bad developer isn’t whether they use duct tape, it’s how well they can recognize whether a situation calls for it.

There is a similar problem in test automation. Automation in itself is not a good or bad thing; it’s knowing when to use it that matters. That’s what’s been overlooked in all this agile testing excitement.

The skills required to be a good tester are being overlooked in favour of good automation skills. Instead of “are you a good tester?”, you are being asked “what automation tools do you know?”

I can understand why: it’s easier to quantify automation tool experience than to quantify what makes a good tester.

We need to take more care about what we discuss in agile testing. The emphasis on tools is too heavy, and a rebalance is overdue.

More emphasis needs to be put on how a tester creates a strategy for automation, not just the type of tool they use.

It’s time to reset the balance and put the tester at the centre of the testing effort.

Softtest Talk on Automated Testing in an Agile Environment

Softtest Ireland does a great job of holding free talks for software testers in Ireland. They held a talk yesterday on the following:

Automated Testing & Development in an Agile Environment

The two speakers were:

Sebastien Lambla from CaffeineIT, a “developer passionate about all things Agile”, who spoke on “In the life of a lean feature”, and

Ken Brennock from Sogeti Ireland, who spoke on “Individuals and Interactions over processes and tools”.

The attendance was great, the room was packed and there was a real interest in what these speakers had to say. It was a mixed bunch of testers and test managers. Some were considering moving to Agile, others were already in the process, some like me were there to listen and perhaps pick up a few tips.

In the life of a lean feature

I found Sebastien Lambla’s talk a real challenge, and I consider this to be a good thing. I like it when I hear something that I totally disagree with. In this case, it was the concept of a “Cross Competency Team”, where you “trust your team to be good enough to do everything”. The example given was a developer going to assist with release management if work is backing up.

I did not like the sound of that! So I asked the question: “Does that mean that, in times of need, a release manager helps out in development?” The answer was that, in theory, yes, if they had the skills.

And this is where I have a problem with the idea. Because in my view, a tester has special skills too, which set them apart from the rest of the team. They are testers because they think differently, have a different perspective and bring something special to the team that most other members don’t have.

But when the chips are down and the feature is late, does the whole team help out in testing? Or are only those who have the necessary testing skills allowed to test? I suspect not!

I’m guessing (or I’m hoping) that I am missing the point about Cross Competency Teams, mostly due to ignorance.

Probably the intention is to promote the concept of “the whole team getting the feature over the line”, and in reality the developers would perhaps help out by using their strengths to supplement the tester rather than substitute for the tester.

So, for example, the tester would hand over a batch of tests that needed to be automated, leaving them free to focus on, say, exploratory testing.

Anyhow, onto the next talk.

Individuals and Interactions over processes and tools

The title of Ken Brennock’s talk was “Individuals and Interactions over processes and tools”, which I thought was ironic considering he talked mostly about tools and said little about how a testing individual can contribute in an agile environment.

Still, his talk was very useful.

He used the waterfall process to demonstrate the types of tools to be used in Agile testing. This perhaps was not a talk for the real agile devotees, but it provided some very useful and practical tips on moving from Waterfall to Agile.

I liked how he focused on test data and the test environment. He said that, when moving to Agile, the priority for testers is to focus on automating the test environment and test data. I think this is so true. One agile team I know of made sure the developers created the install and configuration scripts before any other code was written. This way the test team could start creating the test environment and nutting out these issues, which most testers know can be an area of considerable pain.
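
To make that concrete, here is a rough sketch of the simplest form this can take (in Python, with invented paths, schema and data – not anything from the talk): one script that rebuilds a throwaway database, seeds repeatable test data and writes the configuration the application under test will read.

```python
# Illustrative sketch: automate the test environment and test data with
# one repeatable script, so any tester can recreate the same environment
# on demand instead of hand-crafting it each testing cycle.
import json
import sqlite3
from pathlib import Path

ENV_DIR = Path("test_env")


def build_environment():
    ENV_DIR.mkdir(exist_ok=True)

    # 1. Start from a fresh database with a known schema.
    db_path = ENV_DIR / "app.db"
    db_path.unlink(missing_ok=True)
    conn = sqlite3.connect(str(db_path))
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

    # 2. Seed repeatable test data.
    conn.executemany(
        "INSERT INTO customers (name) VALUES (?)",
        [("Ada",), ("Grace",), ("Edsger",)],
    )
    conn.commit()
    conn.close()

    # 3. Write the configuration the application under test will read.
    config = {"database": str(db_path), "log_level": "DEBUG"}
    (ENV_DIR / "config.json").write_text(json.dumps(config, indent=2))


if __name__ == "__main__":
    build_environment()
    print(f"Test environment ready in {ENV_DIR}/")
```

Even something this small pays for itself quickly, because environment and data setup is exactly the work that otherwise eats the start of every iteration.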

Ken is giving this talk again in webinar format on Wednesday 7th October.

For both talks, I thought it was a shame that, for an audience of testers, so much of the focus was on automation. I guess that’s what a lot of people want to hear, but I still think there is room to discuss the value testers can provide through exploratory testing in an agile environment.

All in all, I feel I have gained much from these talks, many thanks to Softtest Ireland for organising such a good event.

Incidentally, membership of Softtest Ireland is free to all software testers.

A scrum in Croke Park

I’m attending the SQS conference on Software Testing in Croke Park, Dublin. I thought it was appropriate to go to an Agile Testing session involving Scrum, amongst other techniques, on the same hallowed ground where, not too long ago, a game of rugby was played out between England and Ireland.

As our trainer Mike Scott was English, we tried not to gloat too much.

I won’t bore you with lots of analogies about how Agile is similar to rugby; besides, after a day of Agile I can’t think up too many. I’m sure someone out there can…

But here is what I enjoyed about Agile and its techniques:

I liked the concept of the balloon pattern and testing so early that no code has yet been written, only your installation packages. I think that’s really smart. You can iron out all your installation and configuration issues up front.

I like the concept that we as testers need to ask lots of questions and not make assumptions, though I think this is not unique to Agile. A course on Rapid Software Testing by James Bach also stresses this point. However, Agile demands intelligence in testing, where perhaps more traditional methods are less exacting?

There seemed to be a heavy dependency on Test Driven Development (TDD), which I am a big supporter of, though I do question the push for 100% acceptance test automation. I think in every software testing exercise there is room for both manual and automated testing. It’s a question of intelligently planning out what ratio works best for that particular project or environment.
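
For anyone unfamiliar with the rhythm of TDD, here’s a deliberately tiny, invented example (the shipping calculation is made up purely for illustration): the tests are written first, they fail, and only then is just enough code written to make them pass.

```python
# Illustrative TDD sketch: the tests below were conceptually written first,
# and the shipping_cost function was then written just to satisfy them.
import unittest


def shipping_cost(weight_kg):
    # Written *after* the tests below: flat rate up to 2 kg, then per-kg.
    if weight_kg <= 2:
        return 5.00
    return 5.00 + (weight_kg - 2) * 1.50


class ShippingCostTest(unittest.TestCase):
    def test_light_parcel_uses_flat_rate(self):
        self.assertEqual(shipping_cost(1.5), 5.00)

    def test_heavy_parcel_adds_per_kilo_charge(self):
        self.assertEqual(shipping_cost(4), 8.00)


if __name__ == "__main__":
    unittest.main()
```

Useful as that cycle is, it checks what we already thought to specify, which is exactly why I still want room for manual, exploratory testing alongside it.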

Is Agile faster and cheaper, as it’s sometimes portrayed? I suspect not, but it does offer a customer greater flexibility and visibility, and I like the sound of that!