
What Is Test Automation? A Simple, Clear Introduction


There are two kinds of testing in the world of software—manual and automated. Some types of manual testing, such as discovery testing and usability testing, are invaluable. You can do other kinds of testing—like regression testing and functional testing—manually, but it’s a fairly wasteful practice for humans to keep doing the same thing over and over again. It’s these kinds of repetitive tests that lend themselves to test automation.

Test automation is the practice of running tests automatically, managing test data, and utilizing results to improve software quality. It’s primarily a quality assurance measure, but its activities involve the commitment of the entire software production team. From business analysts to developers and DevOps engineers, getting the most out of test automation takes the inclusion of everyone.

This post will give you a high-level understanding of what test automation is all about. There are all kinds of tests, but not all should be automated; therefore, let’s start with general criteria for test automation.

Criteria for Automation

A test needs to meet some criteria in order to be automated—otherwise, it might end up costing more than it saves. After all, one major goal of automation is to save time, effort, and money. Here are some general criteria for test automation. These are starting points, mind you. Your criteria may differ depending on your circumstances.

Repeatable

The test must be repeatable. There’s no sense in automating a test that can only be run once. A repeatable test has the following three steps:

  1. Set up the test, including data and environment.
  2. Execute the function and measure the result.
  3. Clean up the data and environment.

In the first step, we want to be able to put the environment into a consistent state. In other words, if we have a test for attempting to add an existing user, we need to make sure the user exists before performing the test. Once the test is complete, the environment should be returned to the base state.
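Here is a minimal sketch of those three steps using Python's built-in unittest framework. The UserStore class and its methods are hypothetical stand-ins for your own application code; the point is the setup/execute/cleanup shape of a repeatable test.

```python
import unittest

# Hypothetical in-memory stand-in for the application's user store.
class UserStore:
    def __init__(self):
        self.users = set()

    def add_user(self, name):
        if name in self.users:
            raise ValueError("user already exists")
        self.users.add(name)


class TestAddExistingUser(unittest.TestCase):
    def setUp(self):
        # Step 1: set up the test data and environment in a known state.
        self.store = UserStore()
        self.store.add_user("alice")

    def test_adding_existing_user_fails(self):
        # Step 2: execute the function and measure the result.
        with self.assertRaises(ValueError):
            self.store.add_user("alice")

    def tearDown(self):
        # Step 3: clean up so the environment returns to its base state.
        self.store.users.clear()


if __name__ == "__main__":
    unittest.main()
```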


Deterministic

When a function is deterministic, it means that the outcome is the same every time it’s run with the same input. The same must be true of tests that can be automated. For example, say we want to test an addition function. We know that 1 + 1 = 2 and that 394.19 + 5.81 = 400.00. Addition is a deterministic function.
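A quick sketch of such a test, assuming a trivial add function and a pytest-style runner. Note that floating-point results are compared with a tolerance rather than exact equality:

```python
import math

def add(a, b):
    return a + b

def test_add_is_deterministic():
    # Same inputs always produce the same output.
    assert add(1, 1) == 2
    # Compare floats with a tolerance rather than exact equality.
    assert math.isclose(add(394.19, 5.81), 400.00)
```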

Software, on the other hand, may use such a high number of variable inputs that it’s difficult to have the same result over time. Some variables may even be random, which may make it difficult to determine the specific outcome. Software design can compensate for this by allowing for test inputs through a test harness.

Other features of an application may be additive; for example, creating a new user adds to the number of users. At least when we add a user, we know the user count should grow by exactly one. However, running tests in parallel may cause unexpected results. Isolating each test’s data can prevent this kind of false positive.
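The sketch below illustrates both ideas under stated assumptions: pick_discount is a hypothetical function that depends on a random source, the “harness” is simply an injected, seeded generator, and unique test data keeps parallel runs from colliding.

```python
import random
import uuid

# Hypothetical function whose outcome depends on a random source.
def pick_discount(customer_id, rng=random):
    return rng.choice([0, 5, 10])

def test_pick_discount_is_repeatable_with_a_seeded_rng():
    # The test harness injects a seeded generator, so the outcome is the same on every run.
    first = pick_discount("c-1", rng=random.Random(42))
    second = pick_discount("c-1", rng=random.Random(42))
    assert first == second

def test_each_test_creates_its_own_user():
    # Isolation: unique test data means parallel runs can't collide on the same user.
    username = f"user-{uuid.uuid4()}"
    assert username.startswith("user-")
```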

Unopinionated

You cannot automate matters of opinion. This is where usability testing, beta testing, and so forth really shine. User feedback is important, but it just can’t be automated … sorry!

Types of Automated Tests

There are so many types of tests, many of which can be automated, that we can’t really get them all into one post. But here are enough to give you a good starting point.

Code Analysis

There are actually many different types of code analysis tools, including static analysis and dynamic analysis. Some of these tests look for security flaws, others check for style and form. These tests run when a developer checks in code. Other than configuring rules and keeping the tools up to date, there isn’t much test writing to do with these automated tests.

Unit Tests

You can also automate a unit test suite. Unit tests are designed to test a single function, or unit, of operation in isolation. They typically run on a build server. These tests don’t depend on databases, external APIs, or file storage. They need to be fast and are designed to test the code only, not the external dependencies.
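For example, here is a small sketch of a unit test for a single, hypothetical pure function, assuming pytest as the runner. Nothing touches a database, an API, or the file system:

```python
import pytest

# Hypothetical pure function: no database, API, or file access involved.
def shipping_cost(weight_kg, rate_per_kg=4.50):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return round(weight_kg * rate_per_kg, 2)

def test_shipping_cost_for_two_kilograms():
    assert shipping_cost(2) == 9.00

def test_shipping_cost_rejects_zero_weight():
    with pytest.raises(ValueError):
        shipping_cost(0)
```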

Integration Tests

Integration tests are a different kind of animal when it comes to automation. Since an integration test (sometimes called an end-to-end test) needs to interact with external dependencies, it’s more complicated to set up. Often, it’s best to create fake external resources, especially when dealing with resources beyond your control.

If you, for example, have a logistics app that depends on a web service from a vendor, your test may fail unexpectedly if the vendor’s service is down. Does this mean your app is broken? It might, but you should have enough control over the entire test environment to create each scenario explicitly. Never depend on an external factor to determine the outcome of your test scenario.
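One common way to do this in Python is to stand in for the vendor’s service with a fake, as in the sketch below. The get_quote function and the fetch_quote call are hypothetical; the pattern uses the standard library’s unittest.mock and assumes pytest as the runner.

```python
import pytest
from unittest.mock import Mock

# Hypothetical client code: fetches a shipping quote from a vendor's web service.
def get_quote(vendor_client, order_id):
    response = vendor_client.fetch_quote(order_id)
    if response["status"] != "ok":
        raise RuntimeError("vendor service unavailable")
    return response["price"]

def test_get_quote_with_a_fake_vendor_service():
    # The fake vendor keeps the scenario under our control, even if the real service is down.
    fake_vendor = Mock()
    fake_vendor.fetch_quote.return_value = {"status": "ok", "price": 42.50}
    assert get_quote(fake_vendor, order_id="A-100") == 42.50

def test_get_quote_when_the_vendor_is_down():
    # We create the "vendor is down" scenario explicitly instead of waiting for an outage.
    fake_vendor = Mock()
    fake_vendor.fetch_quote.return_value = {"status": "error"}
    with pytest.raises(RuntimeError):
        get_quote(fake_vendor, order_id="A-100")
```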

Automated Acceptance Tests

There are several practices today that use automated acceptance tests (AAT), but they’re basically doing the same thing. Behavior-driven development (BDD) and automated acceptance test-driven development (AATDD) are similar. They both follow the same practice of creating the acceptance test before the feature is developed.

In the end, the automated acceptance test runs to determine if the feature delivers what’s been agreed upon. Therefore, it’s critical for developers, the business, and QA to write these tests together. They serve as regression tests in the future, and they ensure that the feature holds up to what’s expected.
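As a rough illustration, an acceptance test can be written in a given/when/then style even in plain Python (Gherkin-based tools such as Cucumber or pytest-bdd express the same idea in feature files). The UserStore class below is a hypothetical stand-in for the application, reusing the “existing user” scenario from earlier:

```python
# Acceptance criterion (agreed on by business, developers, and QA):
# "Given a registered user, when someone registers the same name again,
#  then the registration is rejected and the user count is unchanged."

class UserStore:  # hypothetical stand-in for the application
    def __init__(self):
        self.users = set()

    def register(self, name):
        if name in self.users:
            return False
        self.users.add(name)
        return True

def test_duplicate_registration_is_rejected():
    # Given a registered user
    store = UserStore()
    store.register("alice")
    # When someone registers the same name again
    accepted = store.register("alice")
    # Then the registration is rejected and the user count is unchanged
    assert accepted is False
    assert len(store.users) == 1
```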

Regression Tests

Without AATs in place, you have to write regression tests after the fact. While both are forms of functional tests, how they’re written, when they’re written, and who writes them are vastly different. Like AATs, regression tests can be driven through code against an API or through the UI. Tools exist to write these tests using a GUI.
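An API-driven regression test might look something like the sketch below, which uses the third-party requests library against a hypothetical endpoint. The idea is to pin down behavior that already works so future changes that break it are caught automatically:

```python
import requests

BASE_URL = "https://example.test/api"  # hypothetical service under test

def test_weekly_report_endpoint_still_returns_expected_fields():
    # Pin down existing behavior so a future change that breaks it fails this test.
    response = requests.get(f"{BASE_URL}/reports/weekly", timeout=10)
    assert response.status_code == 200
    body = response.json()
    assert {"id", "generated_at", "totals"} <= set(body.keys())
```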

Performance Tests

Many kinds of performance tests exist, but they all test some aspect of an application’s performance. Will it hold up under extreme pressure? Are we testing how the system behaves under sustained heavy use? Is it simple response time under load we’re after? How about scalability?

Sometimes these tests require emulating a massive number of users. In this case, it’s important to have an environment that’s capable of performing such a feat. Cloud resources are available to help with this kind of testing, but it’s possible to use on-premises resources as well.
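As a very rough sketch of the idea, the standard library alone can simulate a handful of concurrent users and check response times; the URL and the 2-second threshold here are purely illustrative, and real load testing at scale typically relies on dedicated tools and infrastructure.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.test/health"  # hypothetical endpoint under test
CONCURRENT_USERS = 50

def timed_request(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def test_response_time_under_load():
    # Simulate concurrent users and check the slowest observed response time.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    assert max(durations) < 2.0  # threshold chosen for illustration only
```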

Smoke Tests

What’s a smoke test? It’s a basic test that’s usually performed after a deployment or maintenance window. The purpose of a smoke test is to ensure that all services and dependencies are up and running. A smoke test isn’t meant to be an all-out functional test. It can be run as part of an automated deployment or triggered through a manual step.
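A minimal smoke test might simply hit each service’s health-check endpoint after a deployment, as in the sketch below. The URLs are hypothetical, and the test uses the third-party requests library:

```python
import requests

# Hypothetical health-check endpoints for the deployed services.
SERVICES = {
    "web": "https://example.test/health",
    "api": "https://api.example.test/health",
    "auth": "https://auth.example.test/health",
}

def test_all_services_respond_after_deployment():
    # A smoke test only confirms everything is up, not that every feature works.
    for name, url in SERVICES.items():
        response = requests.get(url, timeout=5)
        assert response.status_code == 200, f"{name} is not healthy"
```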

General Test Automation Process

Now that we’ve seen criteria for automation and enough types of automated tests to have a feel for things, here’s the general process of test automation. There are three major steps to test automation: prepare, take action, report results.

Prepare

First, we need to prepare the state, the test data, and the environment where tests take place. As we’ve seen, most tests require the environment to be in a certain state before an action takes place. In a typical scenario, this requires some setup. Either the data will need to be manipulated or the application will need to be put into a specific state or both!

Take Action

Once the data and environment are in the predefined state, it’s time to take action! The test driver will run the test, either by calling an application’s API or user interface or by running the code directly. The test driver is responsible for “driving” the tests, but the test management system takes on the responsibility of coordinating everything, including reporting results.

Report Results

A test automation system will record and report results. These results may come in a number of different formats and may even create problem tickets or bugs in a work tracking system. The basic result, however, is a pass or fail. Usually, there is a green or red indicator for each test scenario to indicate pass or fail.

Sometimes, tests are inconclusive or don’t run for some reason. When this happens, the automation system will have a full log of the output for developers to review. This log helps them track down the issue. Ideally, they’ll be able to replay the scenario once they’ve put a fix in place.
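To make the three steps concrete, here is a toy sketch of a test driver that prepares state, takes action, and reports a pass/fail result for each scenario. It assumes hypothetical test objects with setup, run, and cleanup methods; in practice, a test framework and test management system handle this coordination for you.

```python
import traceback

def run_suite(tests):
    """Minimal test driver: prepare, take action, report results."""
    results = []
    for test in tests:
        try:
            test.setup()        # prepare data and environment
            test.run()          # take action and assert on the outcome
            results.append((test.name, "PASS"))
        except Exception:
            # Keep the full output so developers can track down the issue.
            results.append((test.name, "FAIL\n" + traceback.format_exc()))
        finally:
            test.cleanup()      # return the environment to its base state
    for name, outcome in results:
        print(f"{name}: {outcome}")  # one pass/fail indicator per scenario
    return results
```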

The Bottom Line

The bottom line is this: test automation helps improve quality with speed. But not all testing can be automated. There’s definitely an investment involved. With so many types of tests, it’s important that you get the right mix. The test pyramid is a simple rule of thumb to help get this right. It says most tests should be unit tests, followed by service tests, then UI tests.

A test automation system coordinates testing concerns, including managing test data, running tests, and tracking results. Test automation is the next step for teams that are becoming overwhelmed by the burden of running the same manual tests over and over.

Author bio: This post was written by Phil Vuollet. Phil uses software to automate processes to improve efficiency and repeatability. He writes about topics relevant to technology and business, occasionally gives talks on the same topics, and is a family man who enjoys playing soccer and board games with his children.