{{Short description|Use of software to automate software testing}}
{{Redirect|Automated QA|the company|AutomatedQA}}
{{Software development process}}
{{More footnotes|date=February 2009}}
==Compared to manual testing==
Automation provides many benefits over manual testing.
===API testing===
For [[API testing]], tests drive the SUT via its [[application programming interface]] (API). Compared to manual testing, automated API testing can execute a large number of test cases in a relatively short time.
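As a minimal sketch, an automated API test written in Python with the [[pytest]] framework might drive a hypothetical <code>shopping_cart</code> module directly through its public functions and compare actual results against expected results:

<syntaxhighlight lang="python">
# Minimal sketch of an automated API test (pytest style).
# "shopping_cart" is a hypothetical module of the system under test;
# the test drives it through its programming interface rather than a GUI.
from shopping_cart import Cart


def test_total_reflects_added_items():
    cart = Cart()
    cart.add(item="apple", price=0.50, quantity=4)
    assert cart.total() == 2.00


def test_empty_cart_has_zero_total():
    assert Cart().total() == 0
</syntaxhighlight>

Because no GUI is involved, tests of this kind are typically fast and stable enough to run on every code change.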
===Graphical user interface (GUI) testing===
For [[GUI testing]], tests drive the SUT via its [[graphical user interface]] (GUI) by generating events such as keystrokes and mouse clicks. Automated GUI testing can be challenging to develop, but can run much faster than a human could perform the same testing. Specializations include:
* Record & playback testing {{endash}} Some GUI testing tools can interactively record user actions and replay them later as a test, comparing actual results against expected results. An advantage of this approach is that it requires little or no coding. However, some claim that such tests suffer from reliability, maintainability, and accuracy issues. For example, changing the label of a button or moving it to another part of the view may require tests to be re-recorded, and recorded tests are often inefficient and may capture unimportant activities.{{Citation needed|date=March 2013}}
* For testing a web site, the GUI is the browser and interaction is via [[DOM events]] and [[HTML]]. A [[headless browser]] or a solution based on [[Selenium (software)#Selenium WebDriver|Selenium WebDriver]] is commonly used for this purpose, as in the sketch below.<ref>{{cite web |title=Headless Testing with Browsers |url=https://docs.travis-ci.com/user/gui-and-headless-browsers/ |website=Travis CI Docs}}</ref><ref name="Headless Testing with Browsers">{{cite web |title=Headless Testing with PhantomJS |url=http://phantomjs.org/headless-testing.html |website=PhantomJS}}</ref><ref>{{cite web |title=Automated User Interface Testing |url=https://www.devbridge.com/articles/automated-user-interface-testing/ |website=Devbridge}}</ref>
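The following sketch shows such a browser-based test in Python using Selenium WebDriver with a headless [[Google Chrome|Chrome]] browser; the URL, element names, and expected title are placeholders rather than a real application:

<syntaxhighlight lang="python">
# Sketch of an automated GUI test for a web page using Selenium WebDriver.
# The URL, field names and expected title are placeholders for a real SUT.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")  # run without a visible browser window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.org/login")
    driver.find_element(By.NAME, "username").send_keys("test-user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The assertion encodes the expected result of the interaction.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
</syntaxhighlight>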
===Regression testing===
When automated testing is in place, [[regression testing]] can be a relatively quick and easy operation. Instead of a significant outlay of human time and effort, a regression test run could require nothing more than a push of a button and even starting the run can be automated.
==Automated techniques==
The following are notable testing techniques categorized as test automation.
===Continuous testing===
[[Continuous testing]] is the process of executing automated tests as part of the software delivery pipeline to assess the business risk of releasing the SUT.<ref name="essential">[https://www.techwell.com/techwell-insights/2015/08/part-pipeline-why-continuous-testing-essential Part of the Pipeline: Why Continuous Testing Is Essential], by Adam Auerbach, TechWell Insights August 2015</ref><ref name="stickym">[http://www.stickyminds.com/interview/relationship-between-risk-and-continuous-testing-interview-wayne-ariola The Relationship between Risk and Continuous Testing: An Interview with Wayne Ariola], by Cameron Philipp-Edmonds, Stickyminds December 2015</ref> The scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.<ref name="pnsqc">[http://uploads.pnsqc.org/2015/papers/t-007_Ariola_paper.pdf DevOps: Are You Pushing Bugs to Clients Faster], by Wayne Ariola and Cynthia Dunlop, PNSQC October 2015</ref>
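As an illustrative sketch only, a pipeline stage might invoke the automated suite and propagate its result so that a failing run blocks promotion of the release candidate; the example below assumes a pytest-based suite located in a <code>tests/</code> directory:

<syntaxhighlight lang="python">
# Sketch of a delivery-pipeline gate: run the automated tests and propagate
# the result, so a failing suite stops the pipeline before the release
# candidate is promoted. Assumes a pytest-based suite under tests/.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
sys.exit(result.returncode)  # a nonzero exit code fails the pipeline stage
</syntaxhighlight>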
===Model-based testing===
For [[model-based testing]], the SUT is modeled and test cases are generated from the model, which can support [[No-code development platform|no-code]] test development. Some tools support encoding test cases in plain English so that they can be used across multiple [[operating system]]s, [[browser]]s, and [[smart device]]s.<ref>{{cite book |chapter=Test Design: Lessons Learned and Practical Implications |title=Proceedings of the 5th International Conference on Software Testing and Validation (ICST) |publisher=Software Competence Center Hagenberg |isbn=978-0-7381-5746-7 |doi=10.1109/IEEESTD.2008.4578383}}</ref>
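The following sketch illustrates the idea in Python: a small state-transition model of a turnstile is used to generate event sequences, which are replayed against a hypothetical <code>turnstile</code> module of the SUT and checked against the states the model predicts:

<syntaxhighlight lang="python">
# Sketch of model-based testing: a state-transition model of a turnstile is
# used to generate test steps, which are replayed against a hypothetical SUT
# module ("turnstile") to check that its behaviour matches the model.
import itertools

from turnstile import Turnstile  # hypothetical system under test

# The model: (state, event) -> next state
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}


def generated_cases(length=3):
    """Generate every event sequence of the given length from the model."""
    events = ["coin", "push"]
    return itertools.product(events, repeat=length)


def test_sut_matches_model():
    for sequence in generated_cases():
        state, sut = "locked", Turnstile()
        for event in sequence:
            state = MODEL[(state, event)]  # state the model expects next
            sut.send(event)
            assert sut.state == state
</syntaxhighlight>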
===Test-driven development===
[[Test-driven development]] (TDD) inherently includes the generation of automation test code. [[Unit test]] code is written while the SUT code is written. When the code is complete, the tests are complete as well.<ref name="Learning TDD">{{cite journal|doi=10.1109/ms.2007.80|title=Learning Test-Driven Development by Counting Lines|year=2007|last1=Vodde|first1=Bas|last2=Koskela|first2=Lasse|journal=IEEE Software|volume=24|issue=3|pages=74–79|s2cid=30671391}}</ref>
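As a minimal Python sketch of the resulting artifacts, the unit test below is written before (or alongside) the production code, and the implementation is extended only until the test passes:

<syntaxhighlight lang="python">
# TDD sketch: the unit test is written first, then the production function
# is grown until the test passes; both live alongside each other.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"
</syntaxhighlight>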
===Other===
Other test automation techniques include:
* [[Data-driven testing]]
* [[Modularity-driven testing]]
* [[Keyword-driven testing]]
* [[Hybrid testing]]
* [[Behavior driven development]]
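Of the techniques listed above, data-driven testing is straightforward to illustrate: one test body is executed against a table of inputs and expected outputs, here sketched with pytest's <code>parametrize</code> decorator. In a larger suite the table would typically be loaded from an external file or database:

<syntaxhighlight lang="python">
# Data-driven testing sketch: one test body, many rows of input/expected data.
import pytest

CASES = [
    ("", 0),
    ("hello", 5),
    ("hello world", 11),
]


@pytest.mark.parametrize("text,expected_length", CASES)
def test_length(text, expected_length):
    assert len(text) == expected_length
</syntaxhighlight>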
==Considerations==
A review of 52 practitioner sources and 26 academic sources found five main factors to consider in the decision to automate testing: the system under test (SUT), the scope of testing, the test toolset, human and organizational topics, and cross-cutting factors. The factors most frequently identified were the need for regression testing, economic factors, and the maturity of the SUT.<ref>{{Cite journal|last1=Garousi|first1=Vahid|last2=Mäntylä|first2=Mika V.|date=2016-08-01|title=When and what to automate in software testing? A multi-vocal literature review|journal=Information and Software Technology|volume=76|pages=92–117|doi=10.1016/j.infsof.2016.04.015}}</ref><ref>
{{cite web|url=http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=2010|title=When Should a Test Be Automated?|author=Brian Marick|publisher=StickyMinds.com|access-date=2009-08-20}}</ref>
While software development companies value the reusability of automated tests, this property can also be a disadvantage: it leads to a [[plateau effect]], in which repeatedly executing the same tests eventually stops detecting new errors.
Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (for example, parsing or polling agents equipped with [[test oracle]]s), and defect logging, without necessarily automating tests in an end-to-end fashion.
Considerations when developing automated tests include:
* [[Computing platform|Platform]] and [[operating system]] independence
* [[Data-driven testing|Data-driven]] capability
* Reporting
* [[Debugging]]
* [[Logging (computing)|Logging]]
* [[Version control]]
* Extension and customization; [[API]]s for integrating with other tools
* Integration with developer tools; for example, using [[Apache Ant|Ant]] or [[Apache Maven|Maven]] for [[Java (programming language)|Java]] development
* Unattended test runs for integration with build processes and batch runs
* Email notifications; e.g., [[bounce message]]s
* Distributed test execution; distributed application support
==Roles==
To support coded automated testing, the [[test engineer]] or [[software quality assurance]] person must have software coding ability. Some testing techniques, such as table-driven and no-code approaches, can lessen or eliminate the need for programming skill.
==Frameworks==
A test automation [[software framework|framework]] provides a programming environment that integrates test logic, test data, and other resources. The framework provides the basis of test automation and simplifies the automation effort, which can lower the cost of test development and [[software maintenance|maintenance]]. If a [[test case (software)|test case]] changes, only the test case file needs to be updated; the driver script and startup script remain the same.
A framework is responsible for defining the format in which to express expectations, providing a mechanism to hook into or drive the SUT, executing the tests, and reporting results.<ref>{{cite web
| url = https://www.youtube.com/watch?v=qf2i-xQ3LoY
| title = Selenium Meet-Up 4/20/2010 Elisabeth Hendrickson on Robot Framework 1of2
}}</ref>
Various types of frameworks are available:
* Linear {{endash}} procedural code, possibly generated by tools like those that use record and playback
* Structured {{endash}} uses control structures, typically <code>if-else</code>, <code>switch</code>, <code>for</code>, and <code>while</code> statements
* [[Data-driven testing|Data-driven]] {{endash}} data is persisted outside of tests in a database, spreadsheet, or other mechanism
* [[Keyword-driven testing|Keyword-driven]] {{endash}} tests are expressed as tables of keywords that the framework maps to executable code (see the sketch below)
* Hybrid {{endash}} multiple types are used
* Agile automation framework
* Unit testing {{endash}} some frameworks, such as the [[xUnit]] family ([[JUnit]], [[NUnit]]), are intended primarily for [[unit testing]]
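As a sketch of how a keyword-driven framework separates the driver from the test definition, the example below maps each keyword in a tabular test case to a Python function; the keyword implementations are illustrative placeholders rather than real SUT interactions:

<syntaxhighlight lang="python">
# Minimal keyword-driven sketch: the test case is a data table of keywords,
# and the driver maps each keyword to an implementation. The "actions" shown
# are placeholders; a real framework would drive the SUT instead.
def open_application(ctx, name):
    ctx["app"] = name


def enter_value(ctx, field, value):
    ctx.setdefault("fields", {})[field] = value


def verify_value(ctx, field, expected):
    assert ctx["fields"][field] == expected


KEYWORDS = {
    "Open Application": open_application,
    "Enter Value": enter_value,
    "Verify Value": verify_value,
}

# The test case itself is pure data and can be edited without programming.
TEST_CASE = [
    ("Open Application", "calculator"),
    ("Enter Value", "display", "42"),
    ("Verify Value", "display", "42"),
]


def run(test_case):
    context = {}
    for keyword, *args in test_case:
        KEYWORDS[keyword](context, *args)


run(TEST_CASE)
</syntaxhighlight>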
===Test automation interface===
[[File:Test Automation Interface.png|thumb|Test Automation Interface Model]]
Components of a test automation interface include:
; Interface engine: Consists of a [[parser]] and a test runner. The parser translates object files from the object repository into the test-specific scripting language, and the test runner executes the test scripts using a [[test harness]].<ref name="Interface">{{cite web |url=http://www.qualitycow.com/Docs/ConquestInterface.pdf |title=Conquest: Interface for Test Automation Design}}</ref>
; Object repository: A collection of UI/application object data recorded by the testing tool while exploring the SUT.<ref name="Interface" />
== See also ==
* {{Annotated link |Fuzzing}}
==References==
{{Reflist|30em}}
===General references===
{{refbegin}}