Back To Basics: Automated Testing

People have been automating parts of testing for as long as they have been testing software. In my early days of testing, the concept of separating testing into manual and automated testing never really existed. Well, not where I worked, anyhow. We tested, and sometimes we used tools to help us test.

In conformance testing we used protocol analysers and executable test scripts to verify conformance to ETSI protocols. When testing Intelligent Networks we used call generators and wrote SQL scripts to load data into databases. There was never a big divide between automated and manual testers.

I think this chasm only appeared with the introduction of capture-and-replay tools, which promised to let less technical users reduce the time required for regression testing.

When I talk about automated testing, I'm referring to the use of any tool that helps me test better. If, in the process, I end up testing more efficiently, then that's a bonus. Lots of testers think this way too.

Most of the time, though, when the term 'automated testing' is used, it refers to coded scripts that speed up regression testing. It is seen as a superior substitute for manual testing, offering the added benefits of faster testing and, consequently, greater coverage.
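To make that concrete, here is a minimal sketch of what such a coded regression check might look like. I'm using Python with pytest, and the `calculate_discount` function and `shop.pricing` module are purely hypothetical names for illustration, not from any real project:

```python
# A minimal, hypothetical regression check written with pytest.
# calculate_discount() stands in for whatever business rule you need
# to keep verifying after every code change.

import pytest

from shop.pricing import calculate_discount  # hypothetical module


@pytest.mark.parametrize(
    "order_total, expected_discount",
    [
        (0, 0),        # no order, no discount
        (100, 0),      # below the discount threshold
        (500, 50),     # 10% discount once the threshold is reached
    ],
)
def test_discount_rules_still_hold(order_total, expected_discount):
    # Once written, this runs in seconds on every build,
    # which is where the regression-testing savings come from.
    assert calculate_discount(order_total) == expected_discount
```

Once a suite of checks like this exists, rerunning it costs almost nothing, which is exactly the appeal being sold.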

There is no doubt that automating your testing can deliver great benefits: improved testing, greater efficiency (have you ever tried to perform load testing manually?) and, in some cases, cost and time savings.
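Load testing is a good illustration of why tooling matters: a human simply cannot generate hundreds of concurrent requests by hand. Here's a rough sketch of the idea using only Python's standard library; the URL and request counts are placeholders, and a real load test would normally use a dedicated tool such as JMeter or Locust:

```python
# Rough sketch: fire many concurrent requests at an endpoint and time them.
# TARGET and REQUESTS are placeholder values for illustration only.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://localhost:8080/health"  # placeholder endpoint
REQUESTS = 200

def hit(_):
    start = time.time()
    with urlopen(TARGET) as response:
        response.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(hit, range(REQUESTS)))

print(f"{REQUESTS} requests, slowest: {max(timings):.3f}s, "
      f"average: {sum(timings) / len(timings):.3f}s")
```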

But automated testing isn't always the elixir or panacea that we think it is, and if anyone approaches it with a cost-cutting goal in mind, I'd encourage them to perform a cost-benefit analysis before heading down this path.

I've discussed the benefits already; here are some typical costs that you might face:

1) The development time for automating the tests.
The time involved in automating tests can be quite extensive, especially if you've never automated tests before. If you're automating tests inside a project, can the project afford the trade-off between delayed results (initially) and the longer-term gains of repeatable and maintainable tests?

2) Maintaining Test Scripts
Factor in the cost of keeping automated test scripts up to date and relevant. If the software you're testing sees plenty of code changes, it's safe to expect a similar level of effort to keep the test scripts current.

3) Do you have the necessary skillset?
Do the testers have the skills to build an automated test framework? Do they know what tools to use and how much those tools will cost? If programming will be used (as opposed to capture-and-replay tools), are the testers competent in the languages needed? If the testers need to learn new languages, are projects aware that ramp-up time will be required and that this may affect timelines?

4) How do you decide what to automate?
Knowing the best areas to automate is an art in itself. What is the best strategy for test automation? Which areas are best left to manual testing? The cost of getting this wrong is starting again from scratch.

Automation can provide real savings and free testers to go off and do more interesting stuff (while their tests are running), but it needs to be implemented intelligently, keeping the above factors in mind.

Many thanks to Evan Phelan for his input into this post.

Holding the cat by the tail

I thought Jack Margo’s interview by UTest was very interesting. What caught my eye was the following statement:

The days of specialists are mostly killed from the recession…you have to be flexible and know multiple disciplines to exist in today’s dev environment.  In web development alone, you need to be proficient with XML, DHTML, JS, a DB flavor, an OS flavor, a programming language and some semblance of UI Design to even handle front-end.  I have friends who knew only HTML or only PERL.  They are struggling to say the least

It made me think that the same applies to us as software testers.

Have specialists in software testing been killed by the recession? Is it necessary for software testers to be 'flexible' and know 'multiple disciplines'?

Personally, I think so. It's not good enough these days to be a 'manual tester' or an 'automated tester'. Instead you need to be able to do both. I don't think that means you have to be an expert in both, but it does mean you need a working knowledge of both and a good knowledge of at least one.

That’s why I’m excited about Nathan Bain and the free automated testing sessions he’s starting up.  As he puts it:

Come to meet fellow testers, share stories and experiences about tools and techniques which may, or may not, have solved testing problems on other Agile projects.

This is also a place of learning, where live demonstrations of tools will be given for FREE – no more expensive training courses for simple (and free) open-source testing tools.

What a fantastic opportunity to learn about automated testing!

To complement this, Rob Lambert has set up some free Exploratory Testing Sessions.

Both organisers have mentioned that these sessions could also be performed online.

I am not going to miss out on either opportunity. I would encourage those interested to sign up to both, either to contribute so others can learn, or to learn from someone else.

BTW: two quotes were in contention to head this post. The first was by Mahatma Gandhi:

“Live as if you were to die tomorrow. Learn as if you were to live forever.”

The other was:

“If you hold a cat by the tail you learn things you cannot learn any other way.”

Mark Twain

I love both for different reasons, but the second one appealed to me more as a tester, hence the title :)