Without exception, all applications and software products should undergo thorough testing before they hit the market, making the testing process an integral part of the software development lifecycle. While there are several ways to approach software testing effectively, data-driven test automation is one of the most valuable and powerful methods to adopt in today's engineering environment.
In a nutshell, data-driven test automation is a testing strategy involving test scripts that don't require any manual adjustments to work with different datasets.
For testers, it eliminates the tedious task of adjusting scripts after datasets are modified, helping them become more efficient when running tests on large amounts of data.
Data-driven testing operates in a loop. First, it retrieves testing data from a data source such as an Excel, CSV, or XML file. That data is then fed into the test, where the actual results are compared with the expected results. The whole test is then repeated with the next set of input data.
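The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production framework: the discount calculator is a hypothetical system under test, and the inline CSV string stands in for an external data file.

```python
import csv
import io

# Hypothetical system under test: a simple discount calculator.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# In a real project this data would live in an Excel, CSV, or XML file;
# an inline CSV string stands in for the external source here.
TEST_DATA = """price,percent,expected
100,10,90.0
59.99,0,59.99
200,25,150.0
"""

def run_data_driven_loop(data: str) -> list:
    """Retrieve each row, run the test, compare actual vs. expected."""
    results = []
    for row in csv.DictReader(io.StringIO(data)):
        actual = apply_discount(float(row["price"]), float(row["percent"]))
        results.append((row, actual == float(row["expected"])))
    return results

for row, passed in run_data_driven_loop(TEST_DATA):
    print(row["price"], row["percent"], "PASS" if passed else "FAIL")
```

Adding a new input row to the data source adds a new test iteration without touching the script, which is the essence of the approach.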
The information this process extracts can quickly and efficiently show developers what is working with an application and what isn't, guiding them toward the next logical steps in the project or test case and better aligning them with overall business objectives.
The main selling point of data-driven test automation, as mentioned above, is that testers can test applications using different data values and parameters without having to change test scripts or test cases.
With data-driven testing, changes to data sets, such as records being added or deleted, require no changes to the test scripts themselves. The framework is also reusable, enabling automation test engineers to run a test script many thousands of times with a different data set each time.
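To illustrate that reusability, here is a small sketch using Python's standard `unittest` module. The `add` function and the two named datasets are assumptions for the example; the point is that the test logic runs unchanged over any dataset you plug in.

```python
import unittest

# Hypothetical function under test.
def add(a, b):
    return a + b

# Swapping, extending, or shrinking these datasets requires
# no edits to the test method below.
DATASETS = {
    "smoke": [(2, 3, 5), (0, 0, 0)],
    "regression": [(2, 3, 5), (-1, 1, 0), (10, -4, 6)],
}

class TestAdd(unittest.TestCase):
    def test_add_data_driven(self):
        for name, rows in DATASETS.items():
            for a, b, expected in rows:
                # subTest reports each data row as its own result.
                with self.subTest(dataset=name, a=a, b=b):
                    self.assertEqual(add(a, b), expected)
```

Frameworks such as pytest offer similar parameterization, but the principle is the same: one script, many data sets.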
It offers realistic insights that reflect continuous changes in the test data, so every execution of the test script can potentially uncover defects that would not have been found in previous tests.
By generating test scripts with less code, data-driven tests can greatly improve efficiency, reducing manual tasks such as maintenance, monitoring, and results tracking. Testers can also work with larger volumes of data, resulting in better coverage and improved regression testing. Furthermore, data-driven test automation reduces the risk of duplicates and redundancy, but only if the person who designs the test cases has enough business experience and knowledge to avoid duplicate or redundant tests. Without that experience, this advantage has the potential to cause problems, or even become a disadvantage.
Despite the many advantages of data-driven automation testing, there are still some drawbacks to consider.
First and foremost, testers and engineers using this method of testing require high-level skills and a deep understanding of the process. Without the right training and technical background, the quality of the test results will be much lower than with an expert. In many cases, they will need to learn a new scripting language.
With the need for adequate training and knowledge comes additional costs. Testers need time to code the scripts and learn the intricacies of the framework before being able to generate value from it. Data validation is also a time-consuming task in itself, especially when dealing with large data sets.
Coding for data-driven automation testing is extensive, so there is a lot of initial work required in maintenance and documentation, particularly surrounding test infrastructure, script management, and results analysis.
It's important to first train people on the more technical side of data-driven testing, such as coding test flow navigation into the testing scripts. Have them meet with QA and functional teams to agree on specific formats to design the test cases, then start with a virtual API testing approach using meaningful data to generate benchmark results.
We recommend using real-world data during the testing process, as well as testing both positive and negative outcomes, as this will better prepare testers for every eventuality and allow them to uncover issues in the test scripts more quickly.
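Testing both outcomes simply means pairing each input with its expected result, valid or not, in the same data table. A minimal sketch, assuming a hypothetical age validator as the system under test:

```python
# Hypothetical validator under test: accepts ages 0-120.
def is_valid_age(value: str) -> bool:
    try:
        age = int(value)
    except ValueError:
        return False
    return 0 <= age <= 120

# Each row pairs an input with its expected outcome, so one
# script exercises positive and negative cases alike.
CASES = [
    ("30", True),    # positive: typical value
    ("0", True),     # positive: lower boundary
    ("120", True),   # positive: upper boundary
    ("121", False),  # negative: out of range
    ("-1", False),   # negative: out of range
    ("abc", False),  # negative: malformed input
]

for value, expected in CASES:
    assert is_valid_age(value) == expected, value
print("all positive and negative cases passed")
```

Negative rows like the malformed input above are often where script defects surface first.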
Wherever possible, keeping a consistent format within data sources is another good practice as it ensures the stability of the script's design. It also helps to rework any functional tests to ensure security and high performance.
Another way of looking at data-driven testing is through ETL (Extract-Transform-Load) testing, which ensures that data is accurate after loading it from its source to its destination. This process also includes several data verification steps that take place at the different stages between source and destination.
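The verification steps between source and destination can be as simple as asserting on row counts, keys, and transformed values after the load. The following is a minimal sketch with made-up data and an assumed transform (uppercasing names), not a real ETL pipeline:

```python
# Source data as it might arrive from an upstream system.
source = [{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]

# Assumed transform step: normalize names to uppercase.
def transform(rows):
    return [{"id": r["id"], "name": r["name"].upper()} for r in rows]

# "Load" the transformed rows into the destination store.
destination = []
destination.extend(transform(source))

# Verification steps between source and destination:
assert len(destination) == len(source)                 # no rows lost
assert {r["id"] for r in destination} == {1, 2}        # keys preserved
assert all(r["name"].isupper() for r in destination)   # transform applied
print("ETL checks passed")
```

Real ETL testing adds checks at each intermediate stage, but each check follows this same compare-against-expectation pattern.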
Sometimes, data-driven automation testing is a mix of both ETL testing and the testing of the process itself. We receive source data in a specific format before transforming it into data that makes sense for the client. For one application, there are usually many calculations required, so this dual-pronged approach is a good way to deal with the multiple inputs and outputs that come from each piece of software.
In automation tests, start by choosing the most important scenarios from a business perspective, as they don't commonly have too many inputs or outputs. Then, define functional criteria for the expected outcomes, before testing them within the application and validating the outputs. The results inform a design format that the whole team can understand, enabling them to build the scripts required for fully automated testing.
In one recent project, we started with API testing, created the data manually, and used it to automate the test case, which took two weeks. The results at the end of the two-week sprint indicated that we only had two or three scenarios automated. With data-driven testing, we were able to cover 50 scenarios in the same two weeks.
Aside from the clear benefits we've outlined here, the main advantage we have found in this approach is that we don't have to waste too much time on maintaining the design of the automated tests, as both the designs and the tests are updated simultaneously.
If your organization is looking to introduce data-driven test automation as a way to boost efficiency and time-to-market, we'd be happy to discuss it with you.
About the Author: Holmes Giovanny Salazar Osorio is a Software Development Engineer in Test at PSL. He has over 7 years' experience in the design and development of testing strategies for international projects, using agile methodologies and best practices. Currently at PSL, Holmes works as part of the Quality Engineering team on a project creating a highly scalable SaaS platform focused on sales performance management and wealth management functions.