Back To Basics: Automated Testing

People have automated parts of their testing for as long as software has been tested. In my early days of testing, the concept of separating testing into manual and automated testing never really existed; well, not where I worked, anyhow. We tested, and sometimes we used tools to help us test.

In conformance testing we used protocol analysers and executable test scripts to verify conformance to ETSI protocols. When testing Intelligent Networks we used call generators and created SQL scripts to load data into databases. There was never a big divide between automated and manual testers.

I think this chasm only opened up with the introduction of capture-and-replay tools, which offered less technical users the chance to reduce the amount of time required for regression testing.

When I talk about automated testing, I’m referring to the use of any tool that will help me test better. If, in the process, I end up testing more efficiently, then that’s a bonus. Lots of testers think this way too.

Most of the time, though, when the term ‘automated testing’ is used, it refers to coded scripts that help speed up regression testing. It is seen as a superior substitute for manual testing, offering the added benefits of faster testing and consequently greater coverage.

There is no doubt that automating your testing can deliver great benefits in improved testing, greater efficiencies (have you ever tried to manually perform load testing?) and in some cases cost and time savings.
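
To give a sense of why the load-testing aside rings true, here is a minimal sketch, using only Python’s standard library, of the kind of concurrent traffic a load tool generates for you; the target URL, concurrency and request count are invented placeholders.

```python
# Minimal load-test sketch: fire concurrent GET requests and time them.
# TARGET_URL, CONCURRENCY and TOTAL_REQUESTS are hypothetical values.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # assumed endpoint
CONCURRENCY = 20
TOTAL_REQUESTS = 200

def timed_get(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(timed_get, range(TOTAL_REQUESTS)))
    failures = sum(1 for ok, _ in results if not ok)
    latencies = sorted(t for _, t in results)
    print(f"requests: {len(results)}, failures: {failures}")
    print(f"median latency: {latencies[len(latencies) // 2]:.3f}s")
```

Even this toy version needs timing, error handling and concurrency plumbing; doing the equivalent by hand with twenty browsers open is simply not feasible.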

But automated testing isn’t always the elixir or panacea that we think it is, and if anyone approaches automated testing with a cost-cutting goal in mind, I’d encourage them to perform a cost-benefit analysis before heading down this path.

I’ve discussed the benefits already; here are some typical costs that you might face:

1) The development time for automating the tests.
The time involved in automating tests can be quite extensive, especially if you’ve never automated tests before. If you’re automating tests inside a project, can the project afford the trade-off between delayed results (initially) and the longer-term gains of repeatable and maintainable tests?

2) Maintaining test scripts
Factor in the cost of keeping automated test scripts up to date and relevant. If the software you’re testing changes frequently, it’s safe to expect a similar level of effort to keep the test scripts current. One common way to contain this cost, the page-object pattern, is sketched after this list.

3) Do you have the necessary skill set?
Do the testers have the skills to build an automated test framework? Do they know what tools to use and how much these tools will cost? If programming will be used (as opposed to capture-and-replay tools), do the testers have competence in the languages needed? If the testers need to learn new languages, are projects aware that ramp-up time will be required and that this may affect timelines?

4) How do you decide what to automate?
Knowing the best areas to automate is an art in itself. What is the best strategy for test automation? Which areas are best left to manual testing? The cost of getting this wrong is starting again from scratch.
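
On the maintenance cost in point 2, one widely used way to contain script upkeep is the page-object pattern: tests talk to a thin class that holds the UI’s locators, so a UI change means editing one class rather than every script. A minimal sketch, assuming Selenium WebDriver; the URL, element IDs and login page are hypothetical:

```python
# Page-object sketch: locator knowledge lives in one class, so a UI
# change means one edit here rather than edits across every test.
# Assumes Selenium WebDriver; URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "http://localhost:8080/login"  # assumed application URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_login_shows_dashboard():
    driver = webdriver.Chrome()  # assumes a local chromedriver install
    try:
        LoginPage(driver).open().log_in("tester", "secret")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```

When the login screen changes, only LoginPage needs updating; the tests that use it stay untouched.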

Automation can provide real savings and free testers to go off and do more interesting stuff (while their tests are running), but it needs to be implemented intelligently, with the above factors in mind.

Many thanks to Evan Phelan for his input into this post.

27 Comments

  1. Great post. I agree with everything you have said here.

    I think the danger comes when people see automation as a necessity for every project rather than a technique to be considered and adapted. It’s become a popular buzzword and I’ve had both managers and clients request automated testing without really understanding what it is or how it is useful. Sometimes they have just been sold an expensive automation tool and they want to make use of it. Sometimes they just feel reassured by seeing a nice green graph of passing tests every day generated by the automation.

    The truth is, automation can be a powerful tool that really helps make testing very efficient, and it can also be a huge waste of time, money and effort. It completely depends on the context. A while ago I wrote this post, “One Automation Please”, which talks about this a little bit more: http://ubertest.hogfish.net/?p=35

    1. Thanks Trish, I know you do a lot of automated testing, so I value your comment :). I read your post and found your approach to smoke testing interesting. How successful did this approach turn out to be? Have you evolved it?

  2. Thanks Anne-Marie. I’ve found automated smoke tests to be very successful. I believe they are the best value – a small suite that’s quick to run every day, easy to maintain, with a high chance of picking up obvious bugs fast. It not only speeds up what would usually be a manual process, but it also picks up any major unseen UI changes that would affect an automated regression suite (large automated regression suites tend to be run overnight).

    One thing I would add to that approach now is that it’s a very good idea to watch the smoke test as it is running, instead of just waiting for the errors at the end. This adds the tester’s eyes – a powerful tool! – to the automated tests. The cause of failures will be detected much faster this way. Watching automation is a luxury we only have with very small automated test suites, so it’s good to take advantage of it.
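
    A smoke suite of this kind can be very small. Here is a sketch, assuming the application under test exposes a few key pages over HTTP; the URLs are placeholders:

    ```python
    # Smoke-test sketch: quick pass/fail check of a few critical pages.
    # The URLs are hypothetical placeholders for an application under test.
    import urllib.request

    SMOKE_PAGES = [
        "http://localhost:8080/",
        "http://localhost:8080/login",
        "http://localhost:8080/search?q=test",
    ]

    def smoke_check(url):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except Exception:
            return False

    for url in SMOKE_PAGES:
        print(("OK  " if smoke_check(url) else "FAIL"), url)
    ```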

  3. Anne-Marie,

    Most of what you talk about was realized by a lot of us early adopters of the GUI/regression tools in the early 90’s (and some before that, with DOS tools). But what we always had to deal with was the smoke-and-mirrors salesmen of the tool vendors and the damage they would do by talking to our management/C-level.

    Thankfully, to some small degree, management/C-level are a little more educated now and are listening to the people who are doing the implementation work. But it has been a long row to hoe.

    Once management/C-level get the clue that automation can be, and is, a development project unto itself, you have a bit of a better chance. It all boils down to managing expectations. It means being realistic about what can be built, when it will be delivered, what it will take to maintain and who is the best resource to do the work.

    1. Hi Jim,

      I remember those days in the early 90’s, using XRunner (the Unix version of WinRunner) and trying to figure out exactly how it was meant to benefit me! I know those tools did little to win me over to capture-and-replay-style tools.

      You raise an interesting point about management-level expectations; however, I wonder whether the advent of Agile, with its emphasis on automation, has inadvertently set back testing somewhat. Certainly, some startups I work with use quasi-agile approaches and are very keen on automating the whole testing exercise in an effort to reduce cost.

  4. All salient points. The first question I ask myself now is: do we need test automation in the first place?

    I’ve seen full-blown (capture-and-replay) test automation applied to an application that was updated once every few years. Was it worth the effort to automate those tests?

    In this case, no: the people testing the re-release did not want to use the automated tests, citing a lack of knowledge of the tool and the effort required to get the automated test suites running again.

    Infrequent releases and a separate test automation group were the contributing factors in this scenario.

    1. Well, really it’s a whole different ballgame. I suppose it would first depend on what type of protocols you wanted tested. I specialised in testing telco protocols: ISDN, ISUP, SS7 and so on. There are many standards out there that you can test against. We used specialised machines (PT500s) to create automated test suites in a language called FORTH (I don’t know if it still exists!). I found the work very interesting and very technical, but I imagine it’s not for everyone.

  5. I must mention two test tools which negate most of the concerns mentioned in the article. These tools are both from Original Software. The first tool is TestDrive Assist, which helps during manual testing. This addresses the first point: test better, and test more efficiently. What TestDrive Assist does is automatically document your testing, capturing screenshots, key clicks and data entry. This allows you to concentrate on the testing without having to break focus to manually capture screenshots. For agile testing this is great, as you don’t need to write the tests first; just test, and the test is fully documented for you! It also provides a full audit trail of the executed test, and provides everything a developer needs to duplicate the test, which helps to fix defects found during testing. I think this tool is indispensable.
    Another tool from the same vendor is TestDrive Gold. This tool can take the manual test and easily automate it once the application is stable. No coding necessary! Once the app changes, the automated tests can be re-run, and TestDrive Gold will flag up any changes (which may or may not be valid); you then have the chance to accept the change, whereupon TestDrive Gold automatically updates the test, or to raise a defect if the change is invalid.
    So, time to develop automated tests is reduced, script maintenance is much less onerous, and the necessary skill set does not rely upon developer-like skills in knowing a scripting language. The TestDrive Gold interface can apply all changes without coding. There are some things that you will need to learn in order to make your automated tests robust and re-usable, but once you have done the short training course, these are easy to apply. This is a new generation of test tool, and one that I can recommend as a customer.

  6. Hmm, that wasn’t meant to sound like an ad, but I was just knocked out by the capabilities of these tools, having used many others over the years. I have been a tester and test manager for over 15 years, and a developer for 12 years before that, and these tools are the nicest I have used, honest! I don’t work for the vendor; I work for a global company as a test manager in our testing centre of excellence. I just thought people would like to know about tools that aren’t destined to become expensive shelfware because of a resource-hungry maintenance overhead. Check them out, Anne-Marie, I’m sure you will be pleasantly surprised too.

  7. Based on my experience of working with automation (using QTP), we reached a point where developing and maintaining scripts was not much of an issue, since we had developed the necessary framework. What was blocking us was the amount of time required to get the test data and expected results ready.

  8. Hi,

    I would suggest that there is an advantage in thinking of automation as applying to tasks rather than to testing. This leads to the question “what tasks do I do that could be better served by automation?” From there we can ask “what are the appropriate tools to automate those tasks?” We can then really get some traction. I encourage testers to think about what they do in this context, and they have built, and use, lots of tools to make tasks easier.

    Imagine a tool whereby you can enter information about the validation of a UI or a data file (including business rules), and the tool determines the equivalence classes and boundary conditions best suited to testing that element, listing them in a test condition/test case matrix. You can then either create and submit files automatically from the matrix or use the matrix as parameters for a test tool. You can extend this further to the point where, starting with the creation of the test matrix, you go from creation through execution and validation to flagging defects, creating a report and sending it to your boss! All of this has been done, starting from the premise “what tasks do I do that could be better served by automation?”
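
    As a rough illustration of the first step in that idea, here is a sketch that takes a numeric field’s valid range and derives boundary values plus equivalence-class representatives; the “age” field and its 18-65 range are invented for illustration.

    ```python
    # Sketch: derive boundary values and equivalence-class representatives
    # for a numeric input field with a closed valid range [lo, hi].
    # The "age" field and its 18..65 range are invented examples.

    def derive_test_values(lo, hi):
        """Return (value, expected_valid) pairs for the range [lo, hi]."""
        cases = [
            (lo - 1, False),         # just below the lower boundary
            (lo, True),              # lower boundary
            (lo + 1, True),          # just inside
            ((lo + hi) // 2, True),  # representative of the valid class
            (hi - 1, True),          # just inside the upper boundary
            (hi, True),              # upper boundary
            (hi + 1, False),         # just above the upper boundary
        ]
        seen, result = set(), []
        for value, valid in cases:  # de-duplicate narrow ranges
            if value not in seen:
                seen.add(value)
                result.append((value, valid))
        return result

    for value, valid in derive_test_values(18, 65):
        print(f"age={value:>3}  expect {'accept' if valid else 'reject'}")
    ```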

  9. Hi Anne-Marie,

    I also did an analysis of automation problems some time ago, so I would just like to add some points from my posts.

    * Shallow coverage and false positives

    Basically, there must be a clear understanding of when and which testing activities are substituted or powered by automation. And who will be overseeing it?

    * Robustness

    I used to be hired as an automation consultant specifically to revive an “accomplished” automation project that nevertheless wasn’t usable in field conditions. Testing is NOT data entry or checking a happy path!

    * Scalability

    Some checking patterns (e.g. “input data – check response”) are very easy to automate. However, the “straight” approach produces a huge, redundant amount of code and demands a lot of [duplicated] test data. That is, the same approach that succeeds in automating a dozen test cases won’t work at a larger scale of coverage.
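
    One common answer to that code-bloat problem is to separate the checking pattern from the data, so a single parametrised check covers many cases. A minimal sketch, assuming pytest; the compute_discount function and its rules are invented examples:

    ```python
    # Data-driven sketch: one parametrised check replaces dozens of
    # near-identical "input data - check response" scripts.
    # Assumes pytest; compute_discount and its rules are invented.
    import pytest

    def compute_discount(order_total):
        """Invented rule: 10% off orders of 100 or more, else nothing."""
        return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

    @pytest.mark.parametrize("order_total, expected", [
        (0, 0.0),
        (99.99, 0.0),  # just below the threshold
        (100, 10.0),   # boundary
        (250, 25.0),
    ])
    def test_compute_discount(order_total, expected):
        assert compute_discount(order_total) == expected
    ```

    Adding a new case is one line in the table, not a new script.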

    Thank you,
    Albert Gareev

    Automation Problems:
    http://automation-beyond.com/category/automation/automation-methodology/problems/

    Automation Requirements:
    http://automation-beyond.com/category/automation/automation-methodology/requirements-automation/

  10. Hi Anne,

    Your post is really thought-provoking and tells a story that is all too often repeated in almost every organisation, big or small, every time a new project is initiated and the question is asked: ‘to automate or not to automate?’. In many cases the decision to purchase an automated tool is made by managers without consultation with the people on the ground. I have been down this road a number of times; we spent more time evaluating the tool than using it, and it became shelfware after the release. Script writers and support guys were nowhere to be seen once the project was completed. Effective tool implementation requires a companywide policy with management commitment, which in my opinion is a hard nut to crack. Expectations are always high but are not always met, no matter what tool you deploy. The real challenge is how, and with what tool, to deploy regression, performance and load testing for Agile development.

    Thanks

    Mohinder

  11. I find this article really interesting, and one I totally agree with.

    My only experience of test automation came about when I was asked to work on a project where RUP was the development model. Someone (the project lead, I think) thought it would be good to implement various elements of the Rational toolset to assist in following this methodology, which in my inexperienced opinion at the time seemed crazy.

    Anyway, this experience was so bad that it has almost put me off using automated tools for life. To summarise the main issues:

    – .NET development was relatively new, and the tool couldn’t recognise many of the objects used in the applications, in particular customised objects.

    – The application was rapidly changing. Each build looked almost completely different from the last, meaning that tests needed to be re-written. More time was spent re-writing broken tests than trying to find defects.

    – The automation tool was buggy. More often than not, I had to find workarounds in order to validate a test.

    – No training on the tool despite requests.

    – No cost benefit for using the tool.

    – Poor planning.

    Having said all this, I do feel that automation can have a part to play, particularly in simple ‘smoke tests’ to ensure an application is test-ready, and also for simple user interface tests.

    The positive thing to come out of this is that I now know how NOT to implement test automation!

  12. With respect to automated testing in the Agile world, most of what I have seen is unit-level testing (e.g. JUnit). Although such tests are of value, many issues don’t show up until the various system components are aggregated, usually during integration and system testing. The latter types of tests require a different approach and may include not just functional testing but also load and soak testing, etc. (especially at the system level).

  13. Hi, very useful article. I am very interested in knowing how you maintained the automation scripts, since maintaining these scripts in a world where the code base changes every day is one challenge that I have seen and faced all my life.

    1. Personally, I think the challenge is to make those who hold the purse strings aware of the need to maintain scripts; the problem comes in when you or management are not aware of the necessity of continued maintenance.
      I also think it’s wrong to view test scripts as though they will never change. It’s good to review the validity and worthiness of scripts as the software matures.

  14. I had a group doing development in a couple of relevant areas: for a medical eye-surgery company, and on a data-processing system for military aircraft in-flight testing. There were many revisions and releases of the products, owing to difficult development and multiple code releases driven by piecewise requirements definition. In both cases a test environment based on a National Instruments tool, TestStand, saved our tails and our customers’. We could never have been efficient in regression testing, test reporting and delivery of updates without this level of automation. In these cases the cost of developing and maintaining the test scripts was cheap compared to the alternatives. In other custom automation projects we rarely found a need or benefit for this tool. One of the big reasons, though, is that those projects are relatively short (<1 year) and have a small team of people, so getting their combined minds around the project, the code and the integration is already pretty efficient.

  15. Thanks for taking the time to create the site; there are some nice aspects to it, and the advice is much appreciated. But to be honest I cannot see any benefit in just another bland, general article on automation; any Google search on test automation would give more detail. This is what the test community suffers most from: people talking about a subject without actually describing how to do it. Many, many careers are built on this kind of waffle. Well… if you can get away with it… Not meant to be offensive; it’s just that I think you could serve readers better by delving into actual case studies, how-to demos, etc.

    1. Thanks Kevin, I appreciate that and it’s a good idea. I suppose one reason why you see fewer case studies is that they’re more work. When you want a case study, is it from a tester’s perspective or from a startup/small-business perspective?
