Test automation


Test automation is the use of software, separate from the software being tested, to control the execution of tests and to compare actual outcomes with predicted outcomes.[1] Test automation supports testing the system under test (SUT) without manual interaction, which can lead to faster test execution and more frequent testing. Test automation is a key aspect of continuous testing and often of continuous integration and continuous delivery (CI/CD).[2]

Compared to manual testing

Automation provides many benefits over manual testing.

API testing

For API testing, tests drive the SUT via its application programming interface (API). Compared to manual testing, automated API testing can often execute a relatively large number of test cases in a relatively short time.
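As an illustration, an automated API test is ordinary test code written against the interface. The sketch below is a minimal Python example using the standard unittest module; the UserAPI class is an invented in-process stand-in for a real service API, not an API from any real system:

```python
import unittest

class UserAPI:
    """Invented stand-in for a real service API, tested in-process."""
    def __init__(self):
        self._users = {}
        self._next_id = 1

    def create_user(self, name):
        user = {"id": self._next_id, "name": name}
        self._users[self._next_id] = user
        self._next_id += 1
        return user

    def get_user(self, user_id):
        return self._users.get(user_id)

class TestUserAPI(unittest.TestCase):
    def setUp(self):
        self.api = UserAPI()

    def test_create_then_fetch(self):
        created = self.api.create_user("ada")
        fetched = self.api.get_user(created["id"])
        self.assertEqual(fetched, created)  # compare actual with predicted

    def test_missing_user_returns_nothing(self):
        self.assertIsNone(self.api.get_user(999))
```

Run with, for example, `python -m unittest`; because no human interaction is needed, many such cases can execute in seconds.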

GUI testing

For GUI testing, tests drive the SUT via its graphical user interface (GUI) by generating events such as keystrokes and mouse clicks. Automated GUI testing can be challenging to develop, but can run much faster than a human could perform the same testing. Specializations include:

  • Record & playback testing – Some GUI testing tools provide a feature that allows for interactively recording user actions and replaying them later as a test, comparing actual results to expected results. An advantage of this approach is that it requires little or no coding. However, some claim that such tests suffer from reliability, maintainability, and accuracy issues. For example, changing the label of a button or moving it to another part of the view may require tests to be re-recorded, and such tests are often inefficient and incorrectly record unimportant activities.[citation needed]
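The record-and-playback idea can be shown with a toy sketch: recorded user actions become a list of event tuples that are later replayed against the user interface, and the resulting state is compared to an expected value. All names below are invented for illustration and do not correspond to any real tool:

```python
class FakeTextBox:
    """Invented stand-in for a GUI widget under test."""
    def __init__(self):
        self.text = ""

    def type(self, value):
        self.text += value

def record():
    # A real tool would capture these tuples from live user interaction.
    return [("type", "name_box", "hello"), ("type", "name_box", " world")]

def playback(events, widgets):
    """Replay recorded (event, target, value) tuples against the UI."""
    for event, target, value in events:
        if event == "type":
            widgets[target].type(value)

widgets = {"name_box": FakeTextBox()}
playback(record(), widgets)
assert widgets["name_box"].text == "hello world"  # actual vs. expected
```

The fragility noted above is visible even here: if the widget were renamed from "name_box", every recorded tuple referring to it would need re-recording.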

Regression testing

When automated testing is in place, regression testing can be a relatively quick and easy operation. Instead of a significant outlay of human time and effort, a regression test run could require nothing more than a push of a button, and even starting the run can be automated.
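A "push of a button" regression run can be as simple as a script that loads and executes the whole suite. The sketch below uses Python's standard unittest module with an invented stand-in suite:

```python
import unittest

class TestArithmetic(unittest.TestCase):
    """Invented stand-in for a real regression suite."""
    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_mul(self):
        self.assertEqual(2 * 3, 6)

def run_regression():
    """One-command regression run: load the suite, execute, report pass/fail."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

Calling run_regression() could itself be triggered by a CI job or a scheduler, so even starting the run is automated.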

Automated techniques

The following are notable testing techniques categorized as test automation.

Continuous testing

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to assess the business risk of releasing the SUT.[6][7] The scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[8]

Model-based testing

For model-based testing, the SUT is modeled, and test cases can be generated from the model to support no-code test development. Some tools support encoding test cases in plain English so that they can be used across multiple operating systems, browsers, and smart devices.[9]

Test-driven development

Test-driven development (TDD) inherently includes the generation of automation test code. Unit test code is written while the SUT code is written. When the code is complete, the tests are complete as well.[10]
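A minimal TDD-style illustration in Python: the unittest cases below are (notionally) written first and fail, and the implementation exists only to make them pass. The fizzbuzz function is purely illustrative, not from the sources cited here:

```python
import unittest

def fizzbuzz(n):
    """Implementation written to satisfy the tests below (TDD order)."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class TestFizzBuzz(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")

    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")
```

When the implementation is complete, the automated tests are complete as well, exactly as described above.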

Considerations

A review of 52 practitioner and 26 academic sources found five main factors to consider in the decision to automate testing: the system under test (SUT), the scope of testing, the test toolset, human and organizational topics, and cross-cutting factors. The factors most frequently identified were the need for regression testing, economic factors, and the maturity of the SUT.[11][12]

While the reusability of automated tests is valued by software development companies, this property can also be viewed as a disadvantage, as it leads to a plateau effect in which repeatedly executing the same tests stops revealing new defects.

Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (such as parsing or polling agents equipped with test oracles), and defect logging, without necessarily automating tests in an end-to-end fashion.

Considerations when developing automated tests include the following.

Roles

To support coded automated testing, the test engineer or software quality assurance person must be able to write code. Some testing techniques, such as table-driven and no-code testing, lessen or eliminate the need for programming skill.

Framework

A test automation framework provides a programming environment that integrates test logic, test data, and other resources. The framework provides the basis of test automation and simplifies the automation effort. Using a framework can lower the cost of test development and maintenance: if a test case changes, only the test case file needs to be updated, while the driver script and startup script remain the same.

A framework is responsible for defining the format in which to express expectations, providing a mechanism to hook into or drive the SUT, executing the tests, and reporting results.[13]
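Those responsibilities can be sketched in a few lines. The toy Python example below is an illustration only, with all names invented: expectations are expressed as (name, input, expected) rows, the "hook" into the SUT is any callable, execution iterates the rows, and reporting returns a pass/fail/error summary:

```python
def run_tests(sut, cases):
    """Minimal framework core: execute each expectation row and report.

    sut   -- any callable driving the system under test
    cases -- iterable of (name, input, expected) rows
    """
    results = {}
    for name, arg, expected in cases:
        try:
            results[name] = "pass" if sut(arg) == expected else "fail"
        except Exception:
            results[name] = "error"  # SUT raised instead of returning
    return results

cases = [
    ("upper_basic", "abc", "ABC"),
    ("upper_empty", "", ""),
]
report = run_tests(str.upper, cases)
assert report == {"upper_basic": "pass", "upper_empty": "pass"}
```

Real frameworks add discovery, fixtures, and richer reporting on top of this skeleton.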

Various types of frameworks are available:

  • Linear – procedural code, possibly generated by tools like those that use record and playback
  • Structured – uses control structures such as if-else, switch, for, and while statements
  • Data-driven – data is persisted outside of tests in a database, spreadsheet, or other mechanism
  • Keyword-driven
  • Hybrid – multiple types are used
  • Agile automation framework
  • Unit testing – some frameworks are intended primarily for unit testing such as xUnit, JUnit and NUnit

Test automation interface

A test automation interface is a platform that provides a workspace for incorporating multiple testing tools and frameworks for system/integration testing. A test automation interface may simplify the process of mapping tests to business criteria without coding. A test automation interface may improve the efficiency and flexibility of maintaining tests.[14]

 
Test Automation Interface Model

A test automation interface consists of the following aspects:

Interface engine
Consists of a parser and a test runner. The parser translates object files from the object repository into the test-specific scripting language; the test runner executes the test scripts using a test harness.[14]
Object repository
Collection of UI/Application object data recorded by the testing tool while exploring the SUT.[14]

Defining boundaries between automation framework and a testing tool

Tools are designed to target a particular test environment, such as Windows desktop or web applications, and serve as the driving agent for an automation process. An automation framework, by contrast, is not a tool for performing a specific task but infrastructure that provides a unified solution in which different tools can do their jobs, giving the automation engineer a common platform.

Frameworks are categorized on the basis of the automation component they leverage:

  1. Data-driven testing
  2. Modularity-driven testing
  3. Keyword-driven testing
  4. Hybrid testing
  5. Model-based testing
  6. Code-driven testing
  7. Behavior driven development

Data-driven testing

Data-driven testing (DDT), also known as table-driven testing or parameterized testing, is a software testing technique that uses a table of data to direct test execution by encoding input, expected output, and test-environment settings.[15][16] One advantage of DDT over other testing techniques is that an additional test case for the system under test can be covered by adding a row to the table rather than modifying test source code.
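A minimal data-driven sketch in Python: the table, not the test code, encodes inputs and expected outputs, so covering a new case means adding a row. The leap-year logic is an invented example, not from the cited sources:

```python
import unittest

def is_leap(year):
    """SUT for illustration: Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

CASES = [  # (input, expected) - one row per test case
    (2000, True),
    (1900, False),
    (2024, True),
    (2023, False),
]

class TestLeapYear(unittest.TestCase):
    def test_table(self):
        # The same test code runs once per table row.
        for year, expected in CASES:
            with self.subTest(year=year):
                self.assertEqual(is_leap(year), expected)
```

Adding, say, `(2100, False)` to CASES covers a new case without touching the test logic.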

Modularity-driven testing

Modularity-driven testing requires the creation of small, independent scripts that represent modules, sections, and functions of the application under test. These small scripts are then combined in a hierarchical fashion to construct larger tests, realizing a particular test case.[17]

Keyword-driven testing

Keyword-driven testing, also known as action-word-based testing (not to be confused with action-driven testing), is a software testing methodology suitable for both manual and automated testing. This method separates the documentation of test cases, including both the data and the functionality to use, from the prescription of how the test cases are executed. As a result, it separates the test creation process into two distinct stages: a design and development stage, and an execution stage. The design stage covers requirement analysis and assessment, as well as data analysis, definition, and population.
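The two stages can be illustrated with a toy Python sketch, with all keywords and actions invented: the test case is a table of action words and data kept separate from the code that executes them, and an interpreter maps each keyword to its implementation:

```python
def do_open(state, item):
    state["open"] = item                           # would open a real view

def do_type(state, text):
    state["text"] = state.get("text", "") + text   # would type into a widget

KEYWORDS = {"open": do_open, "type": do_type}      # keyword-to-action mapping

TEST_CASE = [          # design stage: action words plus data, no code
    ("open", "editor"),
    ("type", "hello"),
]

def execute(test_case):
    """Execution stage: interpret each (keyword, argument) row in order."""
    state = {}
    for keyword, arg in test_case:
        KEYWORDS[keyword](state, arg)
    return state

assert execute(TEST_CASE) == {"open": "editor", "text": "hello"}
```

The same TEST_CASE table could, in principle, be executed manually by a tester reading the action words, which is why the method suits both manual and automated testing.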

Hybrid testing

Hybrid testing is what most frameworks evolve into over time and across multiple projects. A hybrid framework combines two or more of the approaches described in this section (for example, keyword-driven and data-driven testing) so that the strengths of each approach can be exploited and its weaknesses offset.

Model-based testing

 
General model-based testing setting
In computing, model-based testing is an approach to testing that leverages model-based design for designing and possibly executing tests. A model can represent the desired behavior of the system under test (SUT), or it can represent testing strategies and test environments.
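A minimal model-based sketch in Python: a small state machine (an invented turnstile model, not from the article) represents desired SUT behavior, and test sequences with expected final states are generated from the model rather than written by hand:

```python
MODEL = {  # (state, event) -> next state: the model of desired behavior
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def generate_tests(model, start, depth):
    """Enumerate every event sequence up to `depth`, with its expected end state."""
    tests = []
    frontier = [(start, [])]
    for _ in range(depth):
        nxt = []
        for state, path in frontier:
            for (src, event), target in model.items():
                if src == state:
                    nxt.append((target, path + [event]))
        tests.extend(nxt)
        frontier = nxt
    # Each generated test: (event sequence, expected final state).
    return [(path, state) for state, path in tests]

tests = generate_tests(MODEL, "locked", 2)
assert (["coin", "push"], "locked") in tests
```

Each generated pair can then be replayed against the real SUT, comparing its final state to the model's prediction.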

Behavior driven development

Behavior-driven development (BDD) involves naming software tests using ___domain language to describe the behavior of the code.
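A brief illustrative sketch using Python's unittest, with test names phrased in ___domain language rather than implementation terms; the shopping-cart logic is invented for the example:

```python
import unittest

class Cart:
    """Invented ___domain object for the example."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total_items(self):
        return len(self.items)

class DescribeShoppingCart(unittest.TestCase):
    # Test names read as behavior descriptions in the ___domain language.
    def test_a_new_cart_is_empty(self):
        self.assertEqual(Cart().total_items(), 0)

    def test_adding_an_item_increases_the_item_count(self):
        cart = Cart()       # given an empty cart
        cart.add("book")    # when an item is added
        self.assertEqual(cart.total_items(), 1)  # then the count is 1
```

Dedicated BDD tools go further, expressing the given/when/then steps in plain text, but the naming discipline alone already documents behavior.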

References

  1. ^ Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 74. ISBN 978-0-470-04212-0.
  2. ^ O’Connor, Rory V.; Akkaya, Mariye Umay; Kemaneci, Kerem; Yilmaz, Murat; Poth, Alexander; Messnarz, Richard (2015-10-15). Systems, Software and Services Process Improvement: 22nd European Conference, EuroSPI 2015, Ankara, Turkey, September 30 -- October 2, 2015. Proceedings. Springer. ISBN 978-3-319-24647-5.
  3. ^ Headless Testing with Browsers; https://docs.travis-ci.com/user/gui-and-headless-browsers/
  4. ^ Headless Testing with PhantomJS; http://phantomjs.org/headless-testing.html
  5. ^ Automated User Interface Testing; https://www.devbridge.com/articles/automated-user-interface-testing/
  6. ^ Part of the Pipeline: Why Continuous Testing Is Essential, by Adam Auerbach, TechWell Insights August 2015
  7. ^ The Relationship between Risk and Continuous Testing: An Interview with Wayne Ariola, by Cameron Philipp-Edmonds, Stickyminds December 2015
  8. ^ DevOps: Are You Pushing Bugs to Clients Faster, by Wayne Ariola and Cynthia Dunlop, PNSQC October 2015
  9. ^ Proceedings from the 5th International Conference on Software Testing and Validation (ICST). Software Competence Center Hagenberg. "Test Design: Lessons Learned and Practical Implications". doi:10.1109/IEEESTD.2008.4578383. ISBN 978-0-7381-5746-7.
  10. ^ Vodde, Bas; Koskela, Lasse (2007). "Learning Test-Driven Development by Counting Lines". IEEE Software. 24 (3): 74–79. doi:10.1109/ms.2007.80. S2CID 30671391.
  11. ^ Garousi, Vahid; Mäntylä, Mika V. (2016-08-01). "When and what to automate in software testing? A multi-vocal literature review". Information and Software Technology. 76: 92–117. doi:10.1016/j.infsof.2016.04.015.
  12. ^ Brian Marick. "When Should a Test Be Automated?". StickyMinds.com. Retrieved 2009-08-20.
  13. ^ "Selenium Meet-Up 4/20/2010 Elisabeth Hendrickson on Robot Framework 1of2". YouTube. 28 April 2010. Retrieved 2010-09-26.
  14. ^ a b c "Conquest: Interface for Test Automation Design" (PDF). Archived from the original (PDF) on 2012-04-26. Retrieved 2011-12-11.
  15. ^ "golang/go TableDrivenTests". GitHub.
  16. ^ "JUnit 5 User Guide". junit.org.
  17. ^ DESAI, SANDEEP; SRIVASTAVA, ABHISHEK (2016-01-30). SOFTWARE TESTING : A Practical Approach (in Arabic). PHI Learning Pvt. Ltd. ISBN 978-81-203-5226-1.

General references