Test automation
{{Short description|Use of purpose-built software to control test execution and analysis}}
{{Redirect|Automated QA|the company|AutomatedQA}}
{{Software development process}}
{{More footnotes|date=February 2009}}
In [[software testing]], '''test automation''' is the use of [[software]] separate from the software being tested to control the execution of tests and to compare actual outcomes with predicted outcomes.<ref>{{cite book | last = Kolawa | first = Adam |author2=Huizinga, Dorota | title = Automated Defect Prevention: Best Practices in Software Management | year = 2007 | publisher = Wiley-IEEE Computer Society Press | page=74| isbn = 978-0-470-04212-0 }}</ref> Test automation supports testing the [[system under test]] (SUT) without [[Manual testing|manual interaction]], which can lead to faster test execution and more frequent testing. Test automation is a key aspect of [[continuous testing]] and is often used in [[continuous integration]] and [[continuous delivery]] (CI/CD).<ref>{{Cite book|last1=O’Connor|first1=Rory V.|url=https://books.google.com/books?id=2xOcCgAAQBAJ&q=Systems%2C+Software+and+Services+Process+Improvement%3A+27th+European+Conference&pg=PA71|title=Systems, Software and Services Process Improvement: 22nd European Conference, EuroSPI 2015, Ankara, Turkey, September 30 -- October 2, 2015. Proceedings|last2=Akkaya|first2=Mariye Umay|last3=Kemaneci|first3=Kerem|last4=Yilmaz|first4=Murat|last5=Poth|first5=Alexander|last6=Messnarz|first6=Richard|date=2015-10-15|publisher=Springer|isbn=978-3-319-24647-5|language=en}}</ref>
 
==Compared to manual testing==
Automation provides many benefits over manual testing.
 
===API testing===
For [[API testing]], tests drive the SUT through its [[application programming interface]] (API). Compared to manual testing, automated API testing can execute a large number of test cases in a short time.
* '''[[API testing|API driven testing]]'''. A testing framework that uses a programming interface to the application to validate the behaviour under test. Typically API driven testing bypasses application user interface altogether. It can also be testing [[public interface|public (usually) interfaces]] to classes, modules or libraries are tested with a variety of input arguments to validate that the results that are returned are correct.
 
===GUI testing===
For [[GUI testing]], tests drive the SUT via its [[graphical user interface]] (GUI) by generating events such as keystrokes and mouse clicks. Automated GUI testing can be challenging to develop, but can run much faster than a human could perform the same testing. Specializations include:
 
* Record & playback testing {{endash}} Some GUI testing tools provide a feature for interactively recording user actions and replaying them later as a test, comparing actual results to expected results. An advantage of this approach is that it requires little or no coding. However, some claim that such tests suffer from reliability, maintainability and accuracy issues. For example, changing the label of a button or moving it to another part of the view may require the test to be re-recorded, and recorded tests are often inefficient and capture unimportant activities.{{Citation needed|date=March 2013}}
* For testing a web site, the GUI is the browser and interaction is via [[DOM events]] and [[HTML]]. A [[headless browser]] or solutions based on [[Selenium (Software)#Selenium WebDriver |Selenium Web Driver]] are normally used for this purpose.<ref>Headless Testing with Browsers; https://docs.travis-ci.com/user/gui-and-headless-browsers/</ref><ref name="Headless Testing with Browsers">Headless Testing with PhantomJS;http://phantomjs.org/headless-testing.html</ref><ref>Automated User Interface Testing; https://www.devbridge.com/articles/automated-user-interface-testing/</ref>
 
===Regression testing===
When automated testing is in place, [[regression testing]] can be a relatively quick and easy operation. Instead of a significant outlay of human time and effort, a regression test run could require nothing more than a push of a button and even starting the run can be automated.
 
==Automated techniques==
The following are notable testing techniques categorized as test automation.
 
===Continuous testing===
[[Continuous testing]] is the process of executing automated tests as part of the software delivery pipeline to assess the business risk of releasing the SUT.<ref name="essential">[https://www.techwell.com/techwell-insights/2015/08/part-pipeline-why-continuous-testing-essential Part of the Pipeline: Why Continuous Testing Is Essential], by Adam Auerbach, TechWell Insights August 2015</ref><ref name="stickym">[http://www.stickyminds.com/interview/relationship-between-risk-and-continuous-testing-interview-wayne-ariola The Relationship between Risk and Continuous Testing: An Interview with Wayne Ariola], by Cameron Philipp-Edmonds, Stickyminds December 2015</ref> The scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.<ref name="pnsqc">[http://uploads.pnsqc.org/2015/papers/t-007_Ariola_paper.pdf DevOps: Are You Pushing Bugs to Clients Faster], by Wayne Ariola and Cynthia Dunlop, PNSQC October 2015</ref>
 
===Model-based testing===
For [[model-based testing]], the SUT is modeled and test cases can be generated from it to support [[No-code development platform |no code]] test development. Some tools support the encoding of test cases as plain English that can be used on multiple [[operating system]]s, [[browser]]s, and [[smart device]]s.<ref>{{cite book|title=Proceedings from the 5th International Conference on Software Testing and Validation (ICST). Software Competence Center Hagenberg. "Test Design: Lessons Learned and Practical Implications.|isbn=978-0-7381-5746-7|doi=10.1109/IEEESTD.2008.4578383}}</ref>
 
===Test-driven development===
[[Test-driven development]] (TDD) inherently includes the generation of automation test code. [[Unit test]] code is written while the SUT code is written. When the code is complete, the tests are complete as well.<ref name="Learning TDD">{{cite journal|doi=10.1109/ms.2007.80|title=Learning Test-Driven Development by Counting Lines|year=2007|last1=Vodde|first1=Bas|last2=Koskela|first2=Lasse|journal=IEEE Software|volume=24|issue=3|pages=74–79|s2cid=30671391}}</ref>
 
===Other===
Other test automation techniques include:
* [[Data-driven testing]]
* [[Modularity-driven testing]]
* [[Keyword-driven testing]]
* [[Hybrid testing]]
* [[Behavior driven development]]
 
==Considerations==
What to automate, when to automate, or even whether automation is needed at all are crucial decisions for the testing (or development) team. A review of 52 practitioner and 26 academic sources found five main factors to consider in the test automation decision: the system under test (SUT), the scope of testing, the test toolset, human and organizational topics, and cross-cutting factors. The factors most frequently identified were the need for regression testing, economic factors, and the maturity of the SUT.<ref>{{Cite journal|last1=Garousi|first1=Vahid|last2=Mäntylä|first2=Mika V.|date=2016-08-01|title=When and what to automate in software testing? A multi-vocal literature review|journal=Information and Software Technology|volume=76|pages=92–117|doi=10.1016/j.infsof.2016.04.015}}</ref><ref>{{cite web|url=http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=2010|title=When Should a Test Be Automated?|author=Brian Marick|publisher=StickyMinds.com|access-date=2009-08-20}}</ref>
 
While the reusability of automated tests is valued by software development companies, this property can also be viewed as a disadvantage as it leads to a [[plateau effect]], where repeatedly executing the same tests stops detecting errors.
 
===What to test===
Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (for example, parsing or polling agents equipped with [[test oracle]]s), and defect logging, without necessarily automating tests in an end-to-end fashion.
 
Considerations when developing automated tests include:
* [[Computing platform|Platform]] and [[operating system|OS]] independence
* [[Data-driven testing]]
* Reporting ([[database]] access, [[Crystal Reports]])
* Ease of [[debugging]]
* [[Logging (computing)|Logging]]
* [[Version control]]
* Extension and customization; [[API]]s for integrating with other tools
* Integration with developer tools; for example, using [[Apache Ant|Ant]] or [[Apache Maven|Maven]] for [[Java]] development
* Unattended test runs for integration with build processes and batch runs
* Email notifications, e.g. [[bounce message]]s
* Distributed test execution
 
==Roles==
To support coded automated testing, the [[test engineer]] or [[software quality assurance]] person must have software coding ability. Some testing techniques, such as table-driven and no-code test authoring, lessen or remove the need for programming skill.
 
==Framework==
A test automation [[Software framework|framework]] provides a programming environment that integrates test logic, test data, and other resources. The framework provides the basis of test automation and simplifies the automation effort, which can lower the cost of test development and [[Software maintenance|maintenance]]. If a [[Test case (software)|test case]] changes, only the test case file needs to be updated; the driver script and startup script remain the same.
 
A framework is responsible for defining the format in which to express expectations, providing a mechanism to hook into or drive the SUT, executing the tests, and reporting results.<ref>{{cite web
| url = https://www.youtube.com/watch?v=qf2i-xQ3LoY
| title = Selenium Meet-Up 4/20/2010 Elisabeth Hendrickson on Robot Framework 1of2
| website = [[YouTube]]
| date = 28 April 2010
| access-date = 2010-09-26
}}</ref>

Various types of frameworks are available:
* Linear {{endash}} procedural code, possibly generated by tools like those that use record and playback
* Structured {{endash}} uses control structures, typically if-else, switch, for, and while statements
* [[Data-driven testing|Data-driven]] {{endash}} data is persisted outside of tests in a database, spreadsheet, or other mechanism
* [[Keyword-driven testing|Keyword-driven]]
* Hybrid {{endash}} multiple types are used
* Agile automation framework
* Unit testing {{endash}} some frameworks are intended primarily for [[unit testing]], such as [[xUnit]], [[JUnit]] and [[NUnit]]
== The role of AI in Test Automation ==
The integration of artificial intelligence (AI) has significantly changed test automation practices, making software testing more efficient and effective. AI-powered solutions offer advantages such as intelligent test generation and self-healing tests, enabling automated generation of test cases, optimization of testing processes, and faster detection of potential issues. AI has also expanded the scope of automated testing, facilitating greater test coverage in web user interfaces, APIs, mobile applications, and performance testing.
 
However, integrating AI into test automation brings challenges. AI models rely on high-quality training data for accurate predictions and test case generation; inadequate or biased training data may lead to false positives or false negatives, diminishing the effectiveness of UI test automation. AI may also have limitations in dealing with complex protocols, such as SOAP or GraphQL, requiring manual intervention in specific cases.<ref>{{Cite web |date=2023-06-05 |title=AI In Test Automation: 8 Undeniables Benefits {{!}} MuukTest |url=https://muuktest.com/blog/ai-in-test-automation/ |access-date=2023-09-18 |language=en-US}}</ref>
 
== Testing at different levels ==
One strategy for deciding how many tests to automate at each level is the test automation pyramid. This strategy suggests writing three types of tests with different granularity; the higher the level, the fewer tests to write.<ref name=":0" />
 
=== Unit, service, and user interface levels ===
[[File:The test automation pyramid.png|thumb|The test automation pyramid proposed by Mike Cohn<ref name=":0" />]]
* As a solid foundation, [[unit testing]] provides robustness to the software products. Testing individual parts of the code makes it easy to write and run the tests. Developers write unit tests as a part of each story and integrate them with CI.<ref>{{Cite web |title=Full Stack Testing by Gayathri Mohan |url=https://www.thoughtworks.com/en-us/insights/books/full-stack-testing |access-date=2022-09-13 |website=www.thoughtworks.com |language=en-US}}</ref>
* The service layer refers to testing the services of an application separately from its user interface; these services are anything the application does in response to an input or set of inputs.
* At the top level, [[UI Testing|UI testing]] has the fewest tests because of attributes that make it more complex to run; for example, the tests are fragile, so a small change in the user interface can break many tests and add maintenance effort.<ref name=":0">{{cite book | author = Mike Cohn | title = Succeeding with Agile | year = 2010 | publisher = Raina Chrobak | isbn = 978-0-321-57936-2}}</ref><ref>[https://martinfowler.com/articles/practical-test-pyramid.html The Practical Test Pyramid], by Ham Vocke</ref>
 
=== Unit, integration, and end-to-end levels ===
[[File:Testing Pyramid.png|alt=A triangular diagram depicting Google's "testing pyramid". Progresses from the smallest section "E2E" at the top, to "Integration" in the middle, to the largest section "Unit" at the bottom.|thumb|Google's testing pyramid<ref name=":1">{{Cite web |title=Just Say No to More End-to-End Tests |url=https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html |access-date=2023-02-11 |website=Google Testing Blog |language=en}}</ref>]]
One conception of the testing pyramid contains unit, integration, and end-to-end tests. According to [[Google]]'s testing blog, unit tests should make up the majority of the testing strategy, with fewer integration tests and only a small number of end-to-end tests.<ref name=":1" />
 
* Unit tests: These are tests that test individual components or units of code in isolation. They are fast, reliable, and isolate failures to small units of code.
* Integration tests: These tests check how different units of code work together. Although individual units may function properly on their own, integration tests ensure that they operate together coherently.
* End-to-end tests: These test the system as a whole, simulating real-world usage scenarios. They are the slowest and most complex tests.
 
===Test automation interface===
A test automation interface is a platform that provides a single [[workspace]] for incorporating multiple testing tools and frameworks for [[system testing|system/integration testing]]. A test automation interface may simplify the process of mapping tests to business criteria without coding, and may improve the efficiency and flexibility of maintaining tests.<ref name="Interface">{{cite web
| url = http://www.qualitycow.com/Docs/ConquestInterface.pdf
| title = Conquest: Interface for Test Automation Design
}}</ref>
[[File:Test Automation Interface.png|thumb|Test Automation Interface Model]]
 
A test automation interface consists of the following aspects:
 
; Interface engine: Consists of a [[parser]] and a test runner. The parser is present to parse the object files coming from the object repository into the test specific scripting language. The test runner executes the test scripts using a [[test harness]].<ref name="Interface" />
; Object repository: Collection of UI/Application object data recorded by the testing tool while exploring the SUT.<ref name="Interface" />
 
== See also ==
* {{Annotated link |Comparison of GUI testing tools}}
* {{Annotated link |List of web testing tools}}
* {{Annotated link |Fuzzing}}
* [[Software testing]]
* [[System testing]]
* [[Unit test]]
 
==References==