In modern software development, automated testing is an important part of many projects. It is, however, a very wide area, encompassing everything from lower-level unit testing through to high-level UI testing with browser automation tools, not to mention areas such as load and performance testing. With so many areas to consider and a plethora of tools at our disposal, it can be useful to step back and think about what we are seeking to achieve with our tests, to ensure we are using automation in the most effective manner.
So why do we want to automate our tests? Test automation can have a significant impact on your development effort in a variety of ways; however, there are also some limitations to be aware of.
An automated test is almost always quicker to execute than the equivalent manual test case. Whilst not all test cases lend themselves to automation, automating even a relatively small proportion of your high-value manual test cases can yield significant time savings for the team. These savings are not entirely free: any automation comes with an initial cost of implementation and ongoing maintenance. But given the ease of re-running an automated test once written, automation should begin to pay significant dividends quite quickly. A manual regression suite can take a team of testers hours to run even just the high-value test cases; once automated, this can reduce to minutes.
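To make this concrete, here is a sketch of what such a re-runnable automated check might look like. The discount rule below is purely hypothetical, a stand-in for your own business logic, but the shape is typical: once written, these assertions re-run in milliseconds on every change.

```python
# A minimal automated regression check. apply_discount is a
# hypothetical stand-in for a piece of real business logic.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage, rounded to 2 d.p."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.00, 10) == 90.00   # straightforward case
    assert apply_discount(19.99, 0) == 19.99     # boundary: no discount
    assert apply_discount(50.00, 100) == 0.00    # boundary: full discount

test_apply_discount()  # a runner such as pytest would discover this automatically
```

Running the whole suite becomes a single command, which is what makes the time savings compound with every execution.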
Time savings translate into cost savings, as automated tests allow your test cases to run in less time and free up team members to focus on other activities. Automation also tends to surface potential issues earlier, which reduces costs further: a bug discovered very early in the development process costs far less to fix than one found in a later round of testing, or in production. By enabling a higher degree of coverage earlier in the development process, automated testing contributes to cost savings in this way too.
Whilst time savings are a worthwhile benefit in their own right, they also enable faster feedback loops, which deserve their own mention. Without automated testing, a developer must at best run their code and discover an issue manually, or at worst deploy it and wait for another team member to stumble across the bug. With automated testing, this feedback loop shrinks dramatically, enabling issues to be discovered and fixed very quickly. This feeds back into the aforementioned cost and time savings, but it also reduces context switching for team members and improves the overall development experience.
Some aspects of testing are very difficult to implement without automation. Having test automation expertise on the team will support efforts to introduce further types of testing, such as load and performance testing. These types of tests are normally heavily dependent on automation techniques, so investment in functional automation will pay further dividends in these areas.
As many teams move towards Continuous Integration and Continuous Delivery practices, automated testing becomes an essential component. A CI/CD pipeline automates the software delivery process, and relies heavily on automation, including test automation, as an integral part of the stages code passes through on its way to being merged and deployed. If your team is considering moving towards a CI/CD pipeline of some sort, early investment in test automation will be an important part of the process.
If you achieve a good level of coverage with automated testing, this can free up your test team to use their testing skills to discover more nuanced and complex issues, instead of conducting numerous relatively shallow checks to confirm the product is behaving as expected at a high level.
It is generally inadvisable to set a goal of automating everything. Automated testing is well suited to particular types of test cases, and you can achieve a very high degree of coverage, but there will always be test cases esoteric enough not to merit the initial and ongoing investment of automation. Similarly, some types of tests cannot easily be verified with automation. Automation requires a clearly defined pass state; some tests lack this quality and instead require a human to evaluate the experience or product in a less binary manner - for example usability testing, or anything relying on an assessment of aesthetics.
In a similar vein, many articles over the years have detailed how organisations replaced their test team with automated tests, and a lot of column inches have been devoted to the inevitable demise of testers. However, these claims often overlook the real transformation that is occurring. When testers are omitted from the equation, testing activities, including manual testing, are often devolved to the rest of the team. Whilst this may result in increased levels of automation as developers take ownership of testing, it also tends to mean the rest of the team take ownership of the necessary manual testing activities that remain. This isn't necessarily a negative transition, as team ownership of quality is a desirable outcome, but ultimately automation will not make manual testing obsolete. Ideally any manual testing will complement your automation, enabling you to further increase confidence in your deliverables by exercising paths and scenarios that are less well suited to automation. For example, you may use automation to smoke test a deployment to a live environment, but it is still beneficial to conduct some manual exploratory testing: results from live environments are often less predictable, and manual checks may uncover more subtle issues that your automation is not looking for.
Whilst testing in pre-production environments is understandably valuable for ensuring that your application code functions correctly, it cannot give you full confidence that your production configuration is correct. Automated smoke testing or similar in production environments can help to increase confidence here. Teams are often hesitant to apply automation in production; however, careful application (especially with regard to the visibility of test data and/or accounts) can pay off significantly, particularly when conducting frequent deployments in a continuous delivery process.
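As an illustrative sketch (the endpoint, the response shape, and the `fetch` helper are assumptions, not a prescribed design), a production smoke check might poll a health endpoint and reduce the result to a clearly defined pass state:

```python
# Sketch of a post-deployment smoke check. In production the fetch
# function would wrap urllib.request.urlopen or similar; injecting it
# here lets the check be demonstrated offline. The URL and response
# body are hypothetical.

def smoke_check(fetch, url: str) -> bool:
    """Return True if the endpoint responds 200 with a healthy body."""
    try:
        status, body = fetch(url)
    except OSError:
        return False  # network failure counts as a failed check
    return status == 200 and "ok" in body

# Offline demonstration with stubbed fetch functions:
def healthy_fetch(url):
    return 200, '{"health": "ok"}'

def broken_fetch(url):
    return 500, "internal server error"

assert smoke_check(healthy_fetch, "https://example.com/health")
assert not smoke_check(broken_fetch, "https://example.com/health")
```

A check like this can run against production immediately after each deployment, without touching real user data or accounts.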
Often testing teams are tasked with writing test automation in isolation from the development team. This can lead to duplicated effort, and to one of the test pyramid antipatterns, where high-level tests are numerous and attempt to test aspects that could be validated at lower levels - for example, exercising key pieces of business logic exclusively through the UI. A better approach is to ensure those implementing test automation are engaged from the very start, when acceptance criteria are determined, and remain in constant dialogue with the development team throughout implementation. This ensures that your various layers of automated tests complement each other, with the end goal of a high degree of coverage within a relatively short execution time.
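To illustrate the pyramid point (the VAT rule below is hypothetical), business logic can be verified directly as fast unit tests, leaving the UI layer to confirm only that the result is presented correctly:

```python
# Pushing a check down the pyramid: verifying this calculation directly
# takes milliseconds per case, whereas driving the same values through
# the UI with a browser-automation tool would take seconds per case.

def price_with_vat(net: float, rate: float = 0.20) -> float:
    """Hypothetical business rule: add VAT at the given rate."""
    return round(net * (1 + rate), 2)

# Many cheap low-level checks can cover the logic thoroughly...
assert price_with_vat(100.00) == 120.00
assert price_with_vat(0.00) == 0.00
assert price_with_vat(9.99, rate=0.10) == 10.99
# ...leaving a handful of high-level UI tests to confirm the value
# is actually displayed to the user.
```

The split keeps the suite fast while still exercising the end-to-end path where it matters.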
Too often automated testing is not considered early enough in the development process. This can result in an architecture that is problematic from a testing perspective, and can hinder the addition of automated testing capabilities later in the project. Resist the temptation to wait, and consider automated testing as early as possible, as articulated by Dan Edwards here.
Ultimately automated testing can deliver a significant return on investment, and for many modern software development teams it will already be an important part of the development process. For those just getting started with automation, or those seeking to refocus their test strategy, it can be helpful to review the high-level benefits and potential pitfalls of test automation in order to get the most out of it.
View this article on our Medium Publication.