Software testing
Software testing can provide objective, independent information about the [[Quality (business)|quality]] of software and the [[risk]] of its failure to a [[User (computing)|user]] or sponsor.<ref name="Kaner 1">{{Cite conference |last=Kaner |first=Cem |author-link=Cem Kaner |date=November 17, 2006 |title=Exploratory Testing |url=https://kaner.com/pdfs/ETatQAI.pdf |conference=Quality Assurance Institute Worldwide Annual Software Testing Conference |___location=Orlando, FL |access-date=November 22, 2014}}</ref>
 
Software testing can determine the [[Correctness (computer science)|correctness]] of software for specific [[Scenario (computing)|scenarios]] but cannot determine correctness for all scenarios.<ref name="pan">{{Cite web |last=Pan |first=Jiantao |date=Spring 1999 |title=Software Testing |url=https://www.ece.cmu.edu/~koopman/des_s99/sw_testing/ |access-date=November 21, 2017 |publisher=Carnegie Mellon University |type=coursework}}</ref><ref name="Kaner2">{{Cite book |last1=Kaner |first1=Cem |title=Testing Computer Software |last2=Falk |first2=Jack |last3=Nguyen |first3=Hung Quoc |publisher=John Wiley and Sons |year=1999 |isbn=978-0-471-35846-6 |edition=2nd |___location=New York |author-link=Cem Kaner}}</ref> It cannot find all [[software bug|bugs]].
 
Based on the criteria for measuring correctness from an [[test oracle|oracle]], software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include [[specification]]s, [[Design by Contract|contracts]],<ref>{{Cite conference |last1=Leitner |first1=Andreas |last2=Ciupa |first2=Ilinca |last3=Oriol |first3=Manuel |last4=Meyer |first4=Bertrand |author-link4=Bertrand Meyer |last5=Fiva |first5=Arno |date=September 2007 |title=Contract Driven Development = Test Driven Development – Writing Test Cases |url=https://se.inf.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf |conference=ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007 |___location=Dubrovnik, Croatia |access-date=December 8, 2017}}</ref> comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
 
Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing [[source code|code]] and its associated [[documentation]].
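As a minimal sketch of the dynamic case (the function and values here are hypothetical, not from any cited source), a test runs the code and compares actual output against expected output:

```python
# Dynamic testing: execute the unit under test,
# then compare actual output with expected output.
def add(a: int, b: int) -> int:
    return a + b

expected = 5
actual = add(2, 3)         # run the software
assert actual == expected  # the test passes when actual matches expected
```

Static testing of the same function would instead involve reading the code and its documentation without running it.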
 
[[Glenford J. Myers]] initially introduced the separation of [[debugging]] from testing in 1979.<ref name="Myers 1979">{{Cite book |last=Myers |first=Glenford J. |url=https://archive.org/details/artofsoftwaretes00myer |title=The Art of Software Testing |publisher=John Wiley and Sons |year=1979 |isbn=978-0-471-04328-7 |author-link=Glenford Myers}}</ref> Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."<ref name="Myers 1979" />{{rp|16}}), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Software testing typically includes handling software bugs {{endash}} a defect in the [[source code|code]] that causes an undesirable result.<ref name="IEEEglossary">{{Citation |date=1990 |publisher=IEEE |doi=10.1109/IEEESTD.1990.101064 |isbn=978-1-55937-067-7 |title=IEEE Standard Glossary of Software Engineering Terminology }}</ref>{{rp|31}} Bugs generally slow testing progress and require [[programmer]] assistance to [[debug]] and fix.
 
Not all defects cause a failure. For example, a defect in [[dead code]] will not be considered a failure.
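For illustration, a minimal Python sketch (the function is hypothetical) of a defect hiding in dead code: the buggy lines are unreachable, so no execution can turn the defect into a failure.

```python
def shipping_cost(weight_kg: float) -> float:
    """Hypothetical example: contains a defect, but only in dead code."""
    if weight_kg < 0:
        raise ValueError("weight must be non-negative")
    return 5.0 + 1.2 * weight_kg
    # Everything below the return above is dead code. The defect here
    # (the discount factor is 0.09 instead of 0.9) can never cause a
    # failure, because no execution path reaches it.
    if weight_kg >= 10:
        return (5.0 + 1.2 * weight_kg) * 0.09  # buggy, but unreachable
```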
 
A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new [[computer hardware]], changes in [[source data|data]], and interacting with different software.<ref>{{Cite web |date=March 31, 2011 |title=Certified Tester Foundation Level Syllabus |url=https://www.istqb.org/downloads/send/2-foundation-level-documents/3-foundation-level-syllabus-2011.html |access-date=December 15, 2017 |publisher=[[International Software Testing Qualifications Board]] |at=Section 1.1.2 |format=pdf |archive-date=October 28, 2017 |archive-url=https://web.archive.org/web/20171028051659/http://www.istqb.org/downloads/send/2-foundation-level-documents/3-foundation-level-syllabus-2011.html |url-status=dead }}</ref>
 
== Goals ==
 
A single defect may result in multiple failure symptoms.
Although testing for every possible input is not feasible, testing can use [[combinatorics]] to maximize coverage while minimizing tests.<ref>{{Cite conference |last1=Ramler |first1=Rudolf |last2=Kopetzky |first2=Theodorich |last3=Platz |first3=Wolfgang |date=April 17, 2012 |title=Combinatorial Test Design in the TOSCA Testsuite: Lessons Learned and Practical Implications |conference=IEEE Fifth International Conference on Software Testing and Validation (ICST) |___location=Montreal, QC, Canada |doi=10.1109/ICST.2012.142}}</ref>
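The idea can be sketched in Python (the parameter values and the hand-picked suite are illustrative): exhaustive testing requires every combination of parameter values, while an all-pairs suite only requires every pair of values to co-occur in some test.

```python
from itertools import combinations, product

browsers = ["Chrome", "Firefox"]
oses = ["Windows", "Linux", "macOS"]
locales = ["en", "de"]

# Exhaustive testing: every combination of values (2 * 3 * 2 = 12 cases).
exhaustive = list(product(browsers, oses, locales))

# All-pairs testing: a smaller suite in which every pair of values
# still co-occurs in at least one test case (6 cases).
pairwise_suite = [
    ("Chrome", "Windows", "en"),
    ("Chrome", "Linux", "de"),
    ("Chrome", "macOS", "en"),
    ("Firefox", "Windows", "de"),
    ("Firefox", "Linux", "en"),
    ("Firefox", "macOS", "de"),
]

def covered_pairs(suite):
    """Every (parameter, value) pair combination that co-occurs in some test."""
    pairs = set()
    for case in suite:
        for (i, a), (j, b) in combinations(enumerate(case), 2):
            pairs.add(((i, a), (j, b)))
    return pairs

# The 6-case suite covers exactly the same value pairs as all 12 cases.
assert covered_pairs(pairwise_suite) == covered_pairs(exhaustive)
```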
 
== Categories ==
{{Anchor|Testing types}}
{{Main|Software testing tactics}}
=== Static, dynamic, and passive testing ===
 
There are many approaches to software testing. [[Code review|Reviews]], [[software walkthrough|walkthrough]]s, or [[Software inspection|inspections]] are referred to as static testing, whereas executing programmed code with a given set of [[Test case (software)|test case]]s is referred to as [[dynamic testing]].<ref name="GrahamFoundations08">{{Cite book |last1=Graham, D. |url=https://books.google.com/books?id=Ss62LSqCa1MC&pg=PA57 |title=Foundations of Software Testing |last2=Van Veenendaal, E. |last3=Evans, I. |publisher=Cengage Learning |year=2008 |isbn=978-1-84480-989-9 |pages=57–58}}</ref><ref name="OberkampfVerif10">{{Cite book |last1=Oberkampf, W.L. |url=https://books.google.com/books?id=7d26zLEJ1FUC&pg=PA155 |title=Verification and Validation in Scientific Computing |last2=Roy, C.J. |publisher=Cambridge University Press |year=2010 |isbn=978-1-139-49176-1 |pages=154–5}}</ref>
 
Static testing is often implicit, like proofreading. It also occurs when programming tools or text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow, as in [[static program analysis]]. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; such tests are applied to discrete [[Function (computer science)|functions]] or modules.<ref name="GrahamFoundations08" /><ref name="OberkampfVerif10" /> Typical techniques for this are either using [[Method stub|stubs]]/drivers or execution from a [[debugger]] environment.<ref name="OberkampfVerif10" />
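A sketch of the stub/driver technique in Python (all names and values are hypothetical): a driver exercises a function in isolation, with a stub standing in for a dependency that is incomplete or external.

```python
def fetch_exchange_rate_stub(currency: str) -> float:
    """Stub: stands in for a not-yet-implemented or external rate service."""
    rates = {"EUR": 1.10, "GBP": 1.27}
    return rates[currency]

def convert_to_usd(amount: float, currency: str, fetch_rate) -> float:
    """Unit under test: converts an amount to USD via an injected rate fetcher."""
    return round(amount * fetch_rate(currency), 2)

# Driver: exercises the unit with the stub before the real service exists.
assert convert_to_usd(100, "EUR", fetch_exchange_rate_stub) == 110.0
```

Injecting the fetcher as a parameter is one common way to make the module testable in isolation; a debugger environment is the other typical option named above.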
 
=== Preset testing vs adaptive testing ===
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) are decided before the testing plan begins to be executed (preset testing<ref>{{Cite journal |last1=Lee |first1=D. | last2=Yannakakis |first2=M. |date=1996 |title=Principles and methods of testing finite state machines-a survey |url=https://doi.org/10.1109/5.533956 |journal=Proceedings of the IEEE |volume=84 |issue=8 |pages=1090–1123|doi=10.1109/5.533956 |url-access=subscription }}</ref>) or whether each input applied to the IUT may depend dynamically on the outputs observed during the previous tests (adaptive testing<ref>{{Cite book |last1=Petrenko|first1=A. |last2=Yevtushenko |first2=N. |title=In Testing Software and Systems: 23rd IFIP WG 6.1 International Conference, ICTSS 2011, Paris, France, November 7-10 |chapter= Adaptive testing of deterministic implementations specified by nondeterministic FSMs |series=Lecture Notes in Computer Science | chapter-url=https://doi.org/10.1007/978-3-642-24580-0_12 |year=2011 |volume=7019 |publisher=Springer Berlin Heidelberg |pages=162–178 |doi=10.1007/978-3-642-24580-0_12 |isbn=978-3-642-24579-4 }}</ref><ref>{{Cite book |last1=Petrenko|first1=A. |last2=Yevtushenko |first2=N. |title=In 2014 IEEE 15th International Symposium on High-Assurance Systems Engineering |chapter= Adaptive testing of nondeterministic systems with FSM | url=https://doi.org/10.1109/HASE.2014.39 |year=2014 |publisher=IEEE |pages=224–228 |doi=10.1109/HASE.2014.39 |isbn=978-1-4799-3466-9 }}</ref>).
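The distinction can be sketched with a toy finite-state machine as the IUT (the turnstile example is illustrative, not taken from the cited papers): a preset test fixes its whole input sequence up front, while an adaptive test chooses each next input from the output just observed.

```python
class Turnstile:
    """Toy implementation under test (IUT): a two-state machine."""
    def __init__(self):
        self.state = "locked"

    def step(self, inp: str) -> str:
        if self.state == "locked" and inp == "coin":
            self.state = "unlocked"
            return "unlock"
        if self.state == "unlocked" and inp == "push":
            self.state = "locked"
            return "lock"
        return "none"

def preset_test(iut):
    # Preset: the entire input sequence is decided before execution.
    return [iut.step(i) for i in ["coin", "push", "push"]]

def adaptive_test(iut):
    # Adaptive: each next input depends on the previously observed output.
    outputs, inp = [], "coin"
    for _ in range(3):
        out = iut.step(inp)
        outputs.append(out)
        inp = "push" if out == "unlock" else "coin"
    return outputs

assert preset_test(Turnstile()) == ["unlock", "lock", "none"]
assert adaptive_test(Turnstile()) == ["unlock", "lock", "unlock"]
```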
 
=== Black/white box ===
* [[Static testing]] methods
 
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important [[function points]] have been tested.<ref name="CornettCode96">{{Cite web |last=Cornett |first=Steve |date=c. 1996 |title=Code Coverage Analysis |url=https://www.bullseye.com/coverage.html#intro |access-date=November 21, 2017 |publisher=Bullseye Testing Technology |at=Introduction}}</ref> Code coverage as a [[software metric]] can be reported as a percentage for:<ref name="LimayeSoftware09" /><ref name="CornettCode96" /><ref name="BlackPragmatic11">{{Cite book |last=Black, R. |url=https://books.google.com/books?id=n-bTHNW97kYC&pg=PA44 |title=Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional |publisher=John Wiley & Sons |year=2011 |isbn=978-1-118-07938-6 |pages=44–6}}</ref>
 
:* ''Function coverage'', which reports on functions executed
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.<ref name="Patton">{{Cite book |last=Patton |first=Ron |url=https://archive.org/details/softwaretesting0000patt |title=Software Testing |publisher=Sams Publishing |year=2005 |isbn=978-0-672-32798-8 |edition=2nd |___location=Indianapolis}}</ref> Black-box testing methods include: [[equivalence partitioning]], [[boundary value analysis]], [[all-pairs testing]], [[state transition table]]s, [[decision table]] testing, [[fuzz testing]], [[model-based testing]], [[use case]] testing, [[exploratory testing]], and specification-based testing.<ref name="LimayeSoftware09" /><ref name="SalehSoftware09" /><ref name="BlackPragmatic11" />
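Two of these techniques can be sketched in Python against a hypothetical specification ("accept ages 18 through 65"): equivalence partitioning picks one representative per partition, and boundary value analysis tests at and adjacent to each boundary, all without reading the implementation.

```python
def is_eligible(age: int) -> bool:
    """Black box under test; spec: accept ages 18 through 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition
# (below range, in range, above range).
partitions = {10: False, 40: True, 70: False}

# Boundary value analysis: values at and adjacent to each boundary.
boundaries = {17: False, 18: True, 65: True, 66: False}

for age, expected in {**partitions, **boundaries}.items():
    assert is_eligible(age) == expected
```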
 
Specification-based testing aims to test the functionality of software according to the applicable requirements.<ref>{{Cite thesis |last=Laycock |first=Gilbert T. |title=The Theory and Practice of Specification Based Software Testing |degree=dissertation |publisher=Department of Computer Science, [[University of Sheffield]] |url=https://www.cs.le.ac.uk/people/glaycock/thesis.pdf |year=1993 |access-date=January 2, 2018}}</ref> This level of testing usually requires thorough [[Test case (software)|test case]]s to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be [[functional testing|functional]] or [[non-functional testing|non-functional]], though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.<ref>{{Cite journal |last=Bach |first=James |author-link=James Bach |date=June 1999 |title=Risk and Requirements-Based Testing |url=https://www.satisfice.com/articles/requirements_based_testing.pdf |journal=Computer |volume=32 |issue=6 |pages=113–114 |access-date=August 19, 2008}}</ref>
 
Black-box testing can be used at any level of testing, although it is usually not used at the unit level.<ref name="AmmannIntro16" />
===== Visual testing =====
 
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information they require, and the information is expressed clearly.<ref>{{Cite thesis |last=Lönnberg |first=Jan |title=Visual testing of software |date=October 7, 2003 |degree=MSc |publisher=Helsinki University of Technology |url=https://www.cs.hut.fi/~jlonnber/VisualTesting.pdf |access-date=January 13, 2012}}</ref><ref>{{Cite magazine |last=Chima |first=Raspal |title=Visual testing |url=http://www.testmagazine.co.uk/2011/04/visual-testing |magazine=TEST Magazine |archive-url=https://web.archive.org/web/20120724162657/http://www.testmagazine.co.uk/2011/04/visual-testing/ |archive-date=July 24, 2012 |access-date=January 13, 2012}}</ref>
 
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-picture webcam and audio commentary from microphones.
{{See also|Software release life cycle#Beta}}
 
Beta testing comes after alpha testing and can be considered a form of external [[user acceptance testing]]. Versions of the software, known as [[beta version]]s, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or [[computer bug|bug]]s. Beta versions can be made available to the open public to increase the [[Feedback#In organizations|feedback]] field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time ([[perpetual beta]]).<ref>{{Cite web |last=O'Reilly |first=Tim |date=September 30, 2005 |title=What is Web 2.0 |url=https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html?page=4 |access-date=January 11, 2018 |publisher=O'Reilly Media |at=Section 4. End of the Software Release Cycle}}</ref>
 
=== Functional vs non-functional testing ===
=== Usability testing ===
 
[[Usability testing]] checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This kind of testing cannot be automated; actual human users are needed, monitored by skilled [[User experience design#Interaction designers|UI designers]]. Usability testing can use structured models to evaluate how well an interface works. For example, the Stanton, Theofanos, and Joshi (2015) model examines user experience, while the Al-Sharafat and Qadoumi (2016) model supports expert evaluation, helping to assess usability in digital applications.<ref>{{Cite journal |last1=Taqi |first1=Farwa |last2=Batool |first2=Syeda Hina |last3=Arshad |first3=Alia |date=2024-05-23 |title=Development and Validation of Cloud Applications Usability Development Scale |url=https://www.tandfonline.com/doi/full/10.1080/10447318.2024.2351715 |journal=International Journal of Human–Computer Interaction |language=en |pages=1–16 |doi=10.1080/10447318.2024.2351715 |issn=1044-7318|url-access=subscription }}</ref>
 
=== Accessibility testing ===
* [[Requirements analysis]]: testing should begin in the requirements phase of the [[software development life cycle]]. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
* Test planning: [[test strategy]], [[test plan]], [[testbed]] creation. Since many activities will be carried out during testing, a plan is needed.
* Test development: test procedures, [[Scenario test|test scenarios]], [[Test case (software)|test case]]s, test datasets, test scripts to use in testing software.
* Test execution: testers execute the software based on the plans and test documents, then report any errors found to the development team. This step can be difficult for testers who lack programming knowledge.
* Test reporting: once testing is completed, testers generate metrics and make final reports on their [[test effort]] and whether or not the software tested is ready for release.
{{main|Verification and validation (software)|Software quality control}}
 
Software testing is used in association with [[Verification and validation (software)|verification and validation]]:<ref name="tran">{{Cite web |last=Tran |first=Eushiuan |year=1999 |title=Verification/Validation/Certification |url=https://www.ece.cmu.edu/~koopman/des_s99/verification/index.html |access-date=August 13, 2008 |publisher=Carnegie Mellon University |type=coursework}}</ref>
 
* Verification: Have we built the software right? (i.e., does it implement the requirements).
=== Software quality assurance ===
 
In some organizations, software testing is part of a [[software quality assurance]] (SQA) process.<ref name="Kaner2" />{{rp|347}} In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the [[software engineering]] process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.{{citation needed|date=December 2017}}
 
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA ([[quality assurance]]) is the implementation of policies and procedures intended to prevent defects from reaching customers.
 
====Test case====
{{Main|Test case (software)}}
 
A [[Test case (software)|test case]] normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.<ref>{{Cite book |last=IEEE |title=IEEE standard for software test documentation |title-link=IEEE 829 |publisher=IEEE |year=1998 |isbn=978-0-7381-1443-9 |___location=New York}}</ref> This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, past test results may also be recorded, along with who generated them and the system configuration used to generate those results. These past results would usually be stored in a separate table.
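A minimal sketch of these fields as a Python data structure (the field names are illustrative, not taken from IEEE 829):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Illustrative subset of the fields a test case commonly records."""
    case_id: str           # unique identifier
    requirement: str       # reference into the design specification
    preconditions: list    # state required before execution
    steps: list            # actions to follow
    expected: object       # expected result, fixed at design time
    actual: object = None  # actual result, filled in during execution

    def passed(self) -> bool:
        return self.actual == self.expected

tc = TestCase(
    case_id="TC-042",
    requirement="REQ-7: login rejects bad passwords",
    preconditions=["user 'alice' exists"],
    steps=["open login page", "submit wrong password"],
    expected="access denied",
)
tc.actual = "access denied"  # recorded when the test is run
assert tc.passed()
```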
 
====Test script====
* {{Annotated link|Data validation}}
* {{annotated link|Cross-browser testing}}
* [[Database testing]], testing of databases
* {{annotated link|Domain testing}}
* {{Annotated link|Dynamic program analysis}}
* {{Annotated link|Trace table}}
* {{Annotated link|Web testing}}
* [[SDET]] – Software Development Engineer in Test
{{div col end}}
 
== Further reading ==
 
* {{Cite magazine |last=Meyer |first=Bertrand |date=August 2008 |title=Seven Principles of Software Testing |url=https://se.ethz.ch/~meyer/publications/testing/principles.pdf |magazine=Computer |volume=41 |pages=99–101 |doi=10.1109/MC.2008.306 |access-date=November 21, 2017 |number=8}}
 
== External links ==