[[File:TestingCup-Polish-Championship-in-Software-Testing-Katowice-2016.jpg|thumb|TestingCup {{endash}} Polish Championship in Software Testing, [[Katowice]], May 2016]]
 
'''Software testing''' is the act of checking whether [[software]] satisfies expectations.
 
The software testing life cycle (STLC) is commonly described as a sequence of phases:

# Requirement analysis
# Test planning
# Test case design
# Test case execution
# Bug reporting
# Final reporting
 
Software testing can provide objective, independent information about the [[Quality (business)|quality]] of software and the [[risk]] of its failure to a [[User (computing)|user]] or sponsor.<ref name="Kaner 1">{{Cite conference |last=Kaner |first=Cem |author-link=Cem Kaner |date=November 17, 2006 |title=Exploratory Testing |url=https://kaner.com/pdfs/ETatQAI.pdf |conference=Quality Assurance Institute Worldwide Annual Software Testing Conference |___location=Orlando, FL |access-date=November 22, 2014}}</ref>
 
Software testing can determine the [[Correctness (computer science)|correctness]] of software for specific [[Scenario (computing)|scenarios]], but cannot determine correctness for all scenarios.<ref name="pan">{{Cite web |last=Pan |first=Jiantao |date=Spring 1999 |title=Software Testing |url=https://www.ece.cmu.edu/~koopman/des_s99/sw_testing/ |access-date=November 21, 2017 |publisher=Carnegie Mellon University |type=coursework}}</ref><ref name="Kaner2">{{Cite book |last1=Kaner |first1=Cem |title=Testing Computer Software |last2=Falk |first2=Jack |last3=Nguyen |first3=Hung Quoc |publisher=John Wiley and Sons |year=1999 |isbn=978-0-471-35846-6 |edition=2nd |___location=New York |author-link=Cem Kaner}}</ref> It cannot find all [[software bug|bugs]].
 
Based on the criteria for measuring correctness from an [[test oracle|oracle]], software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include: [[specification]]s, [[Design by Contract|contracts]],<ref>{{Cite conference |last1=Leitner |first1=Andreas |last2=Ciupa |first2=Ilinca |last3=Oriol |first3=Manuel |last4=Meyer |first4=Bertrand |author-link4=Bertrand Meyer |last5=Fiva |first5=Arno |date=September 2007 |title=Contract Driven Development = Test Driven Development – Writing Test Cases |url=https://se.inf.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf |conference=ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007 |___location=Dubrovnik, Croatia |access-date=December 8, 2017}}</ref> comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
 
Software testing is often dynamic in nature: running the software to verify that the actual output matches the expected output. It can also be static in nature: reviewing [[source code|code]] and its associated [[documentation]].

Information learned from software testing may be used to improve the process by which software is developed.<ref name="kolawa">{{Cite book |last1=Kolawa |first1=Adam |url=http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html |title=Automated Defect Prevention: Best Practices in Software Management |last2=Huizinga |first2=Dorota |publisher=Wiley-IEEE Computer Society Press |year=2007 |isbn=978-0-470-04212-0}}</ref>{{rp|41–43}}
 
Software testing should follow a "pyramid" approach wherein most tests should be [[unit tests]], followed by [[Integration testing|integration tests]], with [[End-to-end testing|end-to-end (e2e) tests]] having the lowest proportion.<ref>{{Cite book |last=Cohn |first=Mike |title=Succeeding with Agile: Software Development Using Scrum |publisher=Addison-Wesley Professional |year=2009 |isbn=978-0321579362}}</ref><ref>{{Cite book |last=Molina |first=Alessandro |title=Crafting Test-Driven Software with Python: Write test suites that scale with your applications' needs and complexity using Python and PyTest |publisher=Packt Publishing |year=2021 |isbn=978-1838642655}}</ref><ref>{{Cite book |last=Fernandes da Costa |first=Lucas |title=Testing JavaScript Applications |publisher=Manning |year=2021 |isbn=978-1617297915}}</ref>
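
For illustration, a test at the base of the pyramid might look like the following minimal sketch, written in the style of the Python pytest framework; the <code>add</code> function is a hypothetical example rather than code from any cited source:

<syntaxhighlight lang="python">
# Minimal unit test in the pytest style; the add function is a
# hypothetical example, not code from any cited source.
def add(a: int, b: int) -> int:
    return a + b

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_handles_negatives():
    assert add(-2, -3) == -5
</syntaxhighlight>

Because such tests are small, isolated, and fast, a project can afford to run many of them on every change, which is the rationale for placing them at the base of the pyramid.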
 
== Economics ==
 
A study conducted by [[NIST]] in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.<ref>{{Cite web |date=May 2002 |title=The Economic Impacts of Inadequate Infrastructure for Software Testing |url=https://www.nist.gov/director/planning/upload/report02-3.pdf |access-date=December 19, 2017 |publisher=[[National Institute of Standards and Technology]]}}</ref>{{Dubious|NIST study| date = September 2014}}
 
[[Outsourcing]] software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.{{citation needed|date=March 2024}}
 
== History ==
 
[[Glenford J. Myers]] initially introduced the separation of [[debugging]] from testing in 1979.<ref name="Myers 1979">{{Cite book |last=Myers |first=Glenford J. |url=https://archive.org/details/artofsoftwaretes00myer |title=The Art of Software Testing |publisher=John Wiley and Sons |year=1979 |isbn=978-0-471-04328-7 |author-link=Glenford Myers}}</ref> Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."<ref name="Myers 1979" />{{rp|16}}), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
 
== Goals ==
Software testing typically includes handling software bugs {{endash}} a defect in the [[source code|code]] that causes an undesirable result.<ref name="IEEEglossary">{{Citation |date=1990 |publisher=IEEE |doi=10.1109/IEEESTD.1990.101064 |isbn=978-1-55937-067-7 |title=IEEE Standard Glossary of Software Engineering Terminology }}</ref>{{rp|31}} Bugs generally slow testing progress and involve [[programmer]] assistance to [[debug]] and fix.
 
Not all defects cause a failure. For example, a defect in [[dead code]] will not be considered a failure.
 
A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new [[computer hardware]], changes in [[source data|data]], and interacting with different software.<ref>{{Cite web |date=March 31, 2011 |title=Certified Tester Foundation Level Syllabus |url=https://www.istqb.org/downloads/send/2-foundation-level-documents/3-foundation-level-syllabus-2011.html |access-date=December 15, 2017 |publisher=[[International Software Testing Qualifications Board]] |at=Section 1.1.2 |format=pdf |archive-date=October 28, 2017 |archive-url=https://web.archive.org/web/20171028051659/http://www.istqb.org/downloads/send/2-foundation-level-documents/3-foundation-level-syllabus-2011.html |url-status=dead }}</ref>
 
A single defect may result in multiple failure symptoms.

Although testing for every possible input is not feasible, testing can use [[combinatorics]] to maximize coverage while minimizing tests.<ref>{{Cite conference |last1=Ramler |first1=Rudolf |last2=Kopetzky |first2=Theodorich |last3=Platz |first3=Wolfgang |date=April 17, 2012 |title=Combinatorial Test Design in the TOSCA Testsuite: Lessons Learned and Practical Implications |conference=IEEE Fifth International Conference on Software Testing and Validation (ICST) |___location=Montreal, QC, Canada |doi=10.1109/ICST.2012.142}}</ref>
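
One such combinatorial technique is all-pairs (pairwise) testing. The sketch below is a simplified greedy reduction over a hypothetical configuration space, not the algorithm of any particular tool cited here:

<syntaxhighlight lang="python">
# Greedy pairwise (all-pairs) test selection; an illustrative sketch,
# not the algorithm of any specific commercial tool.
from itertools import combinations, product

parameters = {                      # hypothetical configuration space
    "os": ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome"],
    "locale": ["en", "de", "ja"],
}

names = list(parameters)
all_rows = [dict(zip(names, values)) for values in product(*parameters.values())]

def pairs(row):
    """All (parameter, value) pairs covered by one candidate test."""
    return {((a, row[a]), (b, row[b])) for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs(row) for row in all_rows))
suite = []
while uncovered:
    # Pick the candidate covering the most still-uncovered pairs.
    best = max(all_rows, key=lambda row: len(pairs(row) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(suite)} tests cover all value pairs; exhaustive testing needs {len(all_rows)}")
</syntaxhighlight>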
 
== Categories ==
{{Anchor|Testing types}}
{{Main|Software testing tactics}}
=== Levels ===
 
Software testing can be categorized into levels based on how much of the [[software system]] is the focus of a test.<ref name="Computer.org">{{Cite book |title=Guide to the Software Engineering Body of Knowledge |publisher=IEEE Computer Society |year=2014 |isbn=978-0-7695-5166-1 |editor-last=Bourque |editor-first=Pierre |series=3.0 |chapter=Chapter 5 |access-date=January 2, 2018 |editor-last2=Fairley |editor-first2=Richard E. |chapter-url=https://www.computer.org/web/swebok/v3}}</ref><ref name="BourqueSWEBOK14-4">{{Cite book |title=SWEBOK v3.0: Guide to the Software Engineering Body of Knowledge |publisher=IEEE |year=2014 |isbn=978-0-7695-5166-1 |editor-last=Bourque, P. |pages=4–1–4–17 |chapter=Chapter 4: Software Testing |access-date=July 13, 2018 |editor-last2=Fairley, R.D. |chapter-url=http://www4.ncsu.edu/~tjmenzie/cs510/pdf/SWEBOKv3.pdf |archive-date=June 19, 2018 |archive-url=https://web.archive.org/web/20180619003324/http://www4.ncsu.edu/~tjmenzie/cs510/pdf/SWEBOKv3.pdf |url-status=dead }}</ref><ref name="DooleySoftware11">{{Cite book |last=Dooley, J. |url=https://books.google.com/books?id=iOqP9_6w-18C&pg=PA193 |title=Software Development and Professional Practice |publisher=APress |year=2011 |isbn=978-1-4302-3801-0 |pages=193–4}}</ref><ref name="WiegersCreating13">{{Cite book |last=Wiegers, K. |url=https://books.google.com/books?id=uVsUAAAAQBAJ&pg=PA212 |title=Creating a Software Engineering Culture |publisher=Addison-Wesley |year=2013 |isbn=978-0-13-348929-3 |pages=211–2}}</ref>
 
==== Unit testing ====
=== Static, dynamic, and passive testing ===
 
There are many approaches to software testing. [[Code review|Reviews]], [[software walkthrough|walkthrough]]s, or [[Software inspection|inspections]] are referred to as static testing, whereas executing programmed code with a given set of [[Test case (software)|test case]]s is referred to as [[dynamic testing]].<ref name="GrahamFoundations08">{{Cite book |last1=Graham, D. |url=https://books.google.com/books?id=Ss62LSqCa1MC&pg=PA57 |title=Foundations of Software Testing |last2=Van Veenendaal, E. |last3=Evans, I. |publisher=Cengage Learning |year=2008 |isbn=978-1-84480-989-9 |pages=57–58}}</ref><ref name="OberkampfVerif10">{{Cite book |last1=Oberkampf, W.L. |url=https://books.google.com/books?id=7d26zLEJ1FUC&pg=PA155 |title=Verification and Validation in Scientific Computing |last2=Roy, C.J. |publisher=Cambridge University Press |year=2010 |isbn=978-1-139-49176-1 |pages=154–5}}</ref>
 
Static testing is often implicit, like proofreading; it also occurs when programming tools/text editors check source code structure or when compilers (pre-compilers) check syntax and data flow as [[static program analysis]]. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; this is applied to discrete [[Function (computer science)|functions]] or modules.<ref name="GrahamFoundations08" /><ref name="OberkampfVerif10" /> Typical techniques for these are either using [[Method stub|stubs]]/drivers or execution from a [[debugger]] environment.<ref name="OberkampfVerif10" />
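
As a minimal sketch of the stub technique (all names hypothetical), a stand-in object allows a unit to be executed dynamically before the component it depends on has been written:

<syntaxhighlight lang="python">
# Sketch of the stub technique; every name here is hypothetical.
class PaymentGatewayStub:
    """Stands in for a payment service that has not been written yet."""
    def charge(self, amount_cents: int) -> bool:
        return True   # always succeeds, so the caller's logic can be exercised

def checkout(cart_total_cents: int, gateway) -> str:
    """The unit under test, which normally depends on a real gateway."""
    return "confirmed" if gateway.charge(cart_total_cents) else "declined"

assert checkout(1999, PaymentGatewayStub()) == "confirmed"
</syntaxhighlight>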
 
=== Preset testing vs adaptive testing ===
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing<ref>{{Cite journal |last1=Lee |first1=D. | last2=Yannakakis |first2=M. |date=1996 |title=Principles and methods of testing finite state machines-a survey |url=https://doi.org/10.1109/5.533956 |journal=Proceedings of the IEEE |volume=84 |issue=8 |pages=1090–1123|doi=10.1109/5.533956 |url-access=subscription }}</ref>) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing<ref>{{Cite book |last1=Petrenko|first1=A. |last2=Yevtushenko |first2=N. |title=In Testing Software and Systems: 23rd IFIP WG 6.1 International Conference, ICTSS 2011, Paris, France, November 7-10 |chapter= Adaptive testing of deterministic implementations specified by nondeterministic FSMs |series=Lecture Notes in Computer Science | chapter-url=https://doi.org/10.1007/978-3-642-24580-0_12 |year=2011 |volume=7019 |publisher=Springer Berlin Heidelberg |pages=162–178 |doi=10.1007/978-3-642-24580-0_12 |isbn=978-3-642-24579-4 }}</ref><ref>{{Cite book |last1=Petrenko|first1=A. |last2=Yevtushenko |first2=N. |title=In 2014 IEEE 15th International Symposium on High-Assurance Systems Engineering |chapter= Adaptive testing of nondeterministic systems with FSM | url=https://doi.org/10.1109/HASE.2014.39 |year=2014 |publisher=IEEE |pages=224–228 |doi=10.1109/HASE.2014.39 |isbn=978-1-4799-3466-9 }}</ref>).
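
The distinction can be sketched as follows, assuming for illustration only a hypothetical IUT object whose <code>step</code> operation returns an observable output:

<syntaxhighlight lang="python">
# Sketch contrasting preset and adaptive strategies; the IUT interface
# (an object whose step(input) method returns an output) is assumed
# here for illustration only.
def preset_test(iut, inputs):
    # The entire input sequence is decided before execution begins.
    return [iut.step(value) for value in inputs]

def adaptive_test(iut, first_input, choose_next, steps=10):
    # Each next input is chosen based on the output just observed.
    outputs, current = [], first_input
    for _ in range(steps):
        output = iut.step(current)
        outputs.append(output)
        current = choose_next(output)
    return outputs
</syntaxhighlight>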
 
=== Black/white box ===
 
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing, which includes aspects of both, may also be applied to software testing methodology.<ref name="LimayeSoftware09">{{Cite book |last=Limaye, M.G. |url=https://books.google.com/books?id=zUm8My7SiakC&pg=PA108 |title=Software Testing |publisher=Tata McGraw-Hill Education |year=2009 |isbn=978-0-07-013990-9 |pages=108–11}}</ref><ref name="SalehSoftware09">{{Cite book |last=Saleh, K.A. |url=https://books.google.com/books?id=N69KPjBEWygC&pg=PA224 |title=Software Engineering |publisher=J. Ross Publishing |year=2009 |isbn=978-1-932159-94-3 |pages=224–41}}</ref>
 
==== White-box testing ====
{{Main|White-box testing}}
 
[[File:White Box Testing Approach.png|alt=White Box Testing Diagram|thumb|White box testing diagram]]
 
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.<ref name="LimayeSoftware09" /><ref name="SalehSoftware09" /> This is analogous to testing nodes in a circuit, e.g., [[in-circuit test]]ing (ICT).
 
While white-box testing can be applied at the [[unit testing|unit]], [[integration testing|integration]], and [[system testing|system]] levels of the software testing process, it is usually done at the unit level.<ref name="AmmannIntro16" /> It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
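
For illustration, a white-box tester reading the hypothetical function below would choose inputs so that every branch is exercised, including the boundary of the <code>age < 18</code> comparison:

<syntaxhighlight lang="python">
# White-box sketch: inputs are chosen by reading the code so that every
# branch executes at least once. The classify function is hypothetical.
def classify(age: int) -> str:
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

def test_negative_branch():
    try:
        classify(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for negative age")

def test_minor_branch():
    assert classify(17) == "minor"   # just below the boundary seen in the code

def test_adult_branch():
    assert classify(18) == "adult"   # the boundary itself
</syntaxhighlight>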
* [[Static testing]] methods
 
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important [[function points]] have been tested.<ref name="CornettCode96">{{Cite web |last=Cornett |first=Steve |date=c. 1996 |title=Code Coverage Analysis |url=https://www.bullseye.com/coverage.html#intro |access-date=November 21, 2017 |publisher=Bullseye Testing Technology |at=Introduction}}</ref> Code coverage as a [[software metric]] can be reported as a percentage for:<ref name="LimayeSoftware09" /><ref name="CornettCode96" /><ref name="BlackPragmatic11">{{Cite book |last=Black, R. |url=https://books.google.com/books?id=n-bTHNW97kYC&pg=PA44 |title=Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional |publisher=John Wiley & Sons |year=2011 |isbn=978-1-118-07938-6 |pages=44–6}}</ref>
 
:* ''Function coverage'', which reports on functions executed
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.<ref name="Patton">{{Cite book |last=Patton |first=Ron |url=https://archive.org/details/softwaretesting0000patt |title=Software Testing |publisher=Sams Publishing |year=2005 |isbn=978-0-672-32798-8 |edition=2nd |___location=Indianapolis}}</ref> Black-box testing methods include: [[equivalence partitioning]], [[boundary value analysis]], [[all-pairs testing]], [[state transition table]]s, [[decision table]] testing, [[fuzz testing]], [[model-based testing]], [[use case]] testing, [[exploratory testing]], and specification-based testing.<ref name="LimayeSoftware09" /><ref name="SalehSoftware09" /><ref name="BlackPragmatic11" />
 
Specification-based testing aims to test the functionality of software according to the applicable requirements.<ref>{{Cite thesis |last=Laycock |first=Gilbert T. |title=The Theory and Practice of Specification Based Software Testing |degree=dissertation |publisher=Department of Computer Science, [[University of Sheffield]] |url=https://www.cs.le.ac.uk/people/glaycock/thesis.pdf |year=1993 |access-date=January 2, 2018}}</ref> This level of testing usually requires thorough [[Test case (software)|test case]]s to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be [[functional testing|functional]] or [[non-functional testing|non-functional]], though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.<ref>{{Cite journal |last=Bach |first=James |author-link=James Bach |date=June 1999 |title=Risk and Requirements-Based Testing |url=https://www.satisfice.com/articles/requirements_based_testing.pdf |journal=Computer |volume=32 |issue=6 |pages=113–114 |access-date=August 19, 2008}}</ref>
 
Black-box testing can be applied at any level of testing, although it is usually not done at the unit level.<ref name="AmmannIntro16" />
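
As a sketch of specification-based design, the cases below are derived from a hypothetical requirement ("a password must be 8 to 64 characters long") using boundary value analysis; the implementation shown is only a stand-in, since a black-box tester would not read it:

<syntaxhighlight lang="python">
# Black-box sketch: cases come from the stated requirement, not the code.
# validate_password is only a stand-in for the system under test.
def validate_password(password: str) -> bool:
    return 8 <= len(password) <= 64

# Boundary value analysis: just below, at, and just above each limit.
for length in (7, 8, 9, 63, 64, 65):
    expected = 8 <= length <= 64      # expectation taken from the spec
    assert validate_password("x" * length) == expected, length
</syntaxhighlight>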
 
'''Component interface testing'''
 
Component interface testing is a variation of [[black-box testing]], with the focus on the data values beyond just the related actions of a subsystem component.<ref name="MathurFound11-63">{{Cite book |last=Mathur, A.P. |url=https://books.google.com/books?id=hyaQobu44xUC&pg=PA18 |title=Foundations of Software Testing |publisher=Pearson Education India |year=2011 |isbn=978-81-317-5908-0 |page=63}}</ref> The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.<ref name="Clapp">{{Cite book |last=Clapp |first=Judith A. |url=https://books.google.com/books?id=wAq0rnyiGMEC&pg=PA313 |title=Software Quality Control, Error Analysis, and Testing |year=1995 |isbn=978-0-8155-1363-6 |page=313 |publisher=William Andrew |access-date=January 5, 2018}}</ref><ref name="Mathur">{{Cite book |last=Mathur |first=Aditya P. |url=https://books.google.com/books?id=yU-rTcurys8C&pg=PR38 |title=Foundations of Software Testing |publisher=Pearson Education India |year=2007 |isbn=978-81-317-1660-1 |page=18}}</ref> The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.<ref name=Clapp/> Unusual data values in an interface can help explain unexpected performance in the next unit.
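
A minimal sketch of such interface logging follows; the unit names and message format are hypothetical:

<syntaxhighlight lang="python">
# Sketch of component interface testing: each value crossing the boundary
# between two units is logged with a timestamp for later analysis.
# The unit names and message layout are hypothetical.
import json
import time

def log_interface(log_file, source_unit, target_unit, payload):
    record = {"ts": time.time(), "from": source_unit, "to": target_unit,
              "data": payload}
    log_file.write(json.dumps(record) + "\n")

with open("interface.log", "a", encoding="utf-8") as log:
    # Extreme values for one field while the others stay normal.
    for quantity in (0, -1, 2**31 - 1):
        message = {"quantity": quantity, "unit_price_cents": 100}
        log_interface(log, "cart", "billing", message)
</syntaxhighlight>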
 
===== Visual testing =====
 
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.<ref>{{Cite thesis |last=Lönnberg |first=Jan |title=Visual testing of software |date=October 7, 2003 |degree=MSc |publisher=Helsinki University of Technology |url=https://www.cs.hut.fi/~jlonnber/VisualTesting.pdf |access-date=January 13, 2012}}</ref><ref>{{Cite magazine |last=Chima |first=Raspal |title=Visual testing |url=http://www.testmagazine.co.uk/2011/04/visual-testing |magazine=TEST Magazine |archive-url=https://web.archive.org/web/20120724162657/http://www.testmagazine.co.uk/2011/04/visual-testing/ |archive-date=July 24, 2012 |access-date=January 13, 2012}}</ref>
 
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
 
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
 
[[Ad hoc testing]] and [[exploratory testing]] are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly.<ref name="LewisSoftware16">{{Cite book |last=Lewis, W.E. |url=https://books.google.com/books?id=fgaBDd0TfT8C&pg=PA68 |title=Software Testing and Continuous Quality Improvement |publisher=CRC Press |year=2016 |isbn=978-1-4398-3436-7 |edition=3rd |pages=68–73}}</ref> In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing on documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.<ref name="LewisSoftware16" /> However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.<ref name="LewisSoftware16" />
 
{{further|Graphical user interface testing}}
Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."<ref name="RansomeCore13">{{Cite book |last1=Ransome, J. |url=https://books.google.com/books?id=MX5cAgAAQBAJ&pg=PA140 |title=Core Software Security: Security at the Source |last2=Misra, A. |publisher=CRC Press |year=2013 |isbn=978-1-4665-6095-6 |pages=140–3}}</ref> Grey-box testing may also include [[Reverse coding|reverse engineering]] (using dynamic code analysis) to determine, for instance, boundary values or error messages.<ref name="RansomeCore13" /> Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting [[integration testing]] between two modules of code written by two different developers, where only the interfaces are exposed for the test.
 
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding a [[database]]. The tester can observe the state of the product being tested after performing certain actions such as executing [[SQL]] statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, [[exception handling]], and so on.<ref name="ref4">{{Cite web |title=SOA Testing Tools for Black, White and Gray Box |url=http://www.crosschecknet.com/soa_testing_black_white_gray_box.php |archive-url=https://web.archive.org/web/20181001010542/http://www.crosschecknet.com:80/soa_testing_black_white_gray_box.php |archive-date=October 1, 2018 |access-date=December 10, 2012 |publisher=Crosscheck Networks |type=white paper}}</ref>
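
For illustration, the sketch below seeds an in-memory [[SQLite]] database, exercises a hypothetical function under test, and then queries the database to confirm the expected state change:

<syntaxhighlight lang="python">
# Grey-box sketch: the tester uses internal knowledge (the schema) to seed
# and inspect the database while driving hypothetical code under test.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))   # seed data

def rename_user(db, user_id, new_name):
    """Hypothetical stand-in for the application logic under test."""
    db.execute("UPDATE users SET name = ? WHERE id = ?", (new_name, user_id))

rename_user(conn, 1, "bob")
# Query the database to confirm the expected change was reflected.
assert conn.execute("SELECT name FROM users WHERE id = 1").fetchone()[0] == "bob"
</syntaxhighlight>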
 
With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.<ref name="AmmannIntro16">{{Cite book |last1=Ammann, P. |url=https://books.google.com/books?id=58LeDQAAQBAJ&pg=PA26 |title=Introduction to Software Testing |last2=Offutt, J. |publisher=Cambridge University Press |year=2016 |isbn=978-1-316-77312-3 |page=26}}</ref>

Sometimes, UAT is performed by the customer, in their environment and on their own hardware.
 
OAT is used to assess the operational readiness (pre-release) of a product, service or system as part of a [[quality management system]]. OAT is a common type of non-functional software testing, used mainly in [[software development]] and [[software maintenance]] projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or [[operations readiness and assurance]] (OR&A) testing. [[Functional testing]] within OAT is limited to those tests that are required to verify the ''non-functional'' aspects of the system.
 
In addition, software testing should ensure that the system, as well as working as expected, is portable and does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.<ref>{{Cite web |last=Woods |first=Anthony J. |date=June 5, 2015 |title=Operational Acceptance – an application of the ISO 29119 Software Testing standard |url=https://www.scribd.com/document/257086897/Operational-Acceptance-Test-White-Paper-2015-Capgemini |access-date=January 9, 2018 |publisher=Capgemini Australia |type=Whitepaper}}</ref>
{{See also|Software release life cycle#Beta}}
 
Beta testing comes after alpha testing and can be considered a form of external [[user acceptance testing]]. Versions of the software, known as [[beta version]]s, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or [[computer bug|bug]]s. Beta versions can be made available to the open public to increase the [[Feedback#In organizations|feedback]] field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time ([[perpetual beta]]).<ref>{{Cite web |last=O'Reilly |first=Tim |date=September 30, 2005 |title=What is Web 2.0 |url=https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html?page=4 |access-date=January 11, 2018 |publisher=O'Reilly Media |at=Section 4. End of the Software Release Cycle}}</ref>
 
=== Functional vs non-functional testing ===
=== Usability testing ===
 
[[Usability testing]] checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled [[User experience design#Interaction designers|UI designers]]. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.<ref>{{Cite journal |last=Taqi |first=Farwa |last2=Batool |first2=Syeda Hina |last3=Arshad |first3=Alia |date=2024-05-23 |title=Development and Validation of Cloud Applications Usability Development Scale |url=https://www.tandfonline.com/doi/full/10.1080/10447318.2024.2351715 |journal=International Journal of Human–Computer Interaction |language=en |pages=1–16 |doi=10.1080/10447318.2024.2351715 |issn=1044-7318|url-access=subscription }}</ref>
 
=== Accessibility testing ===
 
[[Accessibility]] testing is done to ensure that the software is accessible to persons with disabilities. Common web accessibility tests include:
 
* Ensuring that the color contrast between the font and the background color is appropriate (a contrast-ratio calculation is sketched after this list)
* Ability to use the system using the computer keyboard in addition to the mouse.
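
The color-contrast check can be automated. The sketch below computes the contrast ratio defined by the Web Content Accessibility Guidelines (WCAG) 2.x for 8-bit sRGB colors; WCAG level AA requires a ratio of at least 4.5:1 for normal body text:

<syntaxhighlight lang="python">
# WCAG 2.x contrast-ratio calculation for 8-bit sRGB colors.
def relative_luminance(rgb):
    def linearize(channel):
        channel /= 255
        return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(value) for value in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    lighter, darker = sorted((relative_luminance(color_a),
                              relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white yields the maximum possible ratio of 21:1.
assert abs(contrast_ratio((0, 0, 0), (255, 255, 255)) - 21.0) < 1e-9
</syntaxhighlight>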
 
==== Common standards for compliance ====
* [[Americans with Disabilities Act of 1990]]
* [[Section 508 Amendment to the Rehabilitation Act of 1973]]
Testing for [[internationalization and localization]] validates that the software can be used with different languages and geographic regions. The process of [[pseudolocalization]] is used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product.
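
For illustration, a pseudolocalization pass might mechanically transform every user-visible string so that untranslated text, truncation, and encoding problems stand out; the transformation rules below are illustrative only:

<syntaxhighlight lang="python">
# Minimal pseudolocalization sketch: accent characters to expose encoding
# problems, pad to simulate longer translations, and bracket strings to
# reveal clipping or concatenation. The rules here are illustrative only.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîõüÀÉÎÕÜ")

def pseudolocalize(text: str, expansion: float = 0.3) -> str:
    padding = "~" * int(len(text) * expansion)
    return "[" + text.translate(ACCENTED) + padding + "]"

print(pseudolocalize("Save changes"))   # prints: [Sàvé chàngés~~~]
</syntaxhighlight>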
 
Globalization testing verifies that the software is adapted for a new culture (such as different currencies or time zones).<ref>{{Cite web |title=Globalization Step-by-Step: The World-Ready Approach to Testing. Microsoft Developer Network |url=https://msdn.microsoft.com/en-us/goglobal/bb688148 |access-date=January 13, 2012 |publisher=Microsoft Developer Network |archive-url=https://web.archive.org/web/20120623050851/https://msdn.microsoft.com/en-us/goglobal/bb688148 |archive-date=June 23, 2012}}</ref>
 
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
 
* Some messages may be untranslated.
* Software is often localized by translating a list of [[String (computer science)|strings]] out of context, and the translator may choose the wrong translation for an ambiguous source string.
* Technical terminology may become inconsistent, if the project is translated by several people without proper coordination or if the translator is imprudent.
* Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
* Untranslated messages in the original language may be left [[Hard coding|hard coded]] in the source code, and thus untranslatable.
* Some messages may be created automatically at [[Run time (program lifecycle phase)|run time]] and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
* Software may use a [[keyboard shortcut]] that has no function on the source language's [[keyboard layout]], but is used for typing characters in the layout of the target language.
* Software may lack support for the [[character encoding]] of the target language.
* Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, [[CJK characters]] may become unreadable, if the font is too small.
* A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
* Software may lack proper support for reading or writing [[bi-directional text]].
{{Main|Development testing}}
 
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
 
Depending on the organization's expectations for software development, development testing might include [[static code analysis]], data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, [[Requirements traceability|traceability]], and other software testing practices.
 
=== A/B testing ===
In an organization, testers may be in a separate team from the rest of the [[software development]] team or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers.
 
In the 1980s, the term ''software tester'' started to be used to denote a separate profession.
 
Notable software testing roles and titles include:<ref>{{Cite journal |last1=Gelperin |first1=David |author-link=Dave Gelperin |last2=Hetzel |first2=Bill |author-link2=William C. Hetzel |date=June 1, 1988 |title=The growth of software testing |journal=Communications of the ACM |volume=31 |issue=6 |pages=687–695 |doi=10.1145/62959.62965 |s2cid=14731341|doi-access=free }}</ref> ''test manager'', ''test lead'', ''test analyst'', ''test designer'', ''tester'', ''automation developer'', and ''test administrator''.<ref>{{Cite book |last1=Gregory |first1=Janet |title=More Agile Testing |last2=Crispin |first2=Lisa |publisher=Addison-Wesley Professional |year=2014 |isbn=978-0-13-374956-4 |pages=23–39}}</ref>

The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently.
 
* [[Requirements analysis]]: testing should begin in the requirements phase of the [[software development life cycle]]. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
* Test planning: [[test strategy]], [[test plan]], [[testbed]] creation. Since many activities will be carried out during testing, a plan is needed.
* Test development: test procedures, [[Scenario test|test scenarios]], [[Test case (software)|test case]]s, test datasets, test scripts to use in testing software.
* Test execution: testers execute the software based on the plans and test documents then report any errors found to the development team. This part could be complex when running tests with a lack of programming knowledge.
* Test reporting: once testing is completed, testers generate metrics and make final reports on their [[test effort]] and whether or not the software tested is ready for release.
* Test result analysis: or ''defect analysis'', is done by the development team usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
* Defect retesting: once a defect has been dealt with by the development team, it is retested by the testing team.
* [[Regression testing]]: it is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything and that the software product as a whole is still working correctly.
* Test closure: once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects.
 
== Quality ==
{{main|Verification and validation (software)|Software quality control}}
 
Software testing is used in association with [[Verification and validation (software)|verification and validation]]:<ref name="tran">{{Cite web |last=Tran |first=Eushiuan |year=1999 |title=Verification/Validation/Certification |url=https://www.ece.cmu.edu/~koopman/des_s99/verification/index.html |access-date=August 13, 2008 |publisher=Carnegie Mellon University |type=coursework}}</ref>
 
* Verification: Have we built the software right? (i.e., does it implement the requirements).
=== Software quality assurance ===
 
In some organizations, software testing is part of a [[software quality assurance]] (SQA) process.<ref name="Kaner2" />{{rp|347}} In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the [[software engineering]] process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.{{citation needed|reason=archive July 2012|date=December 2017}}
 
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA ([[quality assurance]]) is the implementation of policies and procedures intended to prevent defects from reaching customers.
 
====Test case====
{{Main|Test case (software)}}
 
A [[Test case (software)|test case]] normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.<ref>{{Cite book |last=IEEE |title=IEEE standard for software test documentation |title-link=IEEE 829 |publisher=IEEE |year=1998 |isbn=978-0-7381-1443-9 |___location=New York}}</ref> This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, past test results, who generated the results, and the system configuration used to generate those results may also be recorded. These past results would usually be stored in a separate table.
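
A test case record with the fields described above might be modeled as follows; this is an illustrative sketch, not a standardized schema:

<syntaxhighlight lang="python">
# Illustrative record for the test case fields described above;
# not a standardized schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    identifier: str
    requirement_ref: str
    preconditions: list
    steps: list
    input_data: str
    expected_result: str
    actual_result: Optional[str] = None   # filled in during execution

example = TestCase(
    identifier="TC-042",
    requirement_ref="REQ-7.3",
    preconditions=["a user account exists"],
    steps=["open the login page", "submit valid credentials"],
    input_data="user=alice, password=correct-horse",
    expected_result="the dashboard is displayed",
)
</syntaxhighlight>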
 
====Test script====
{{Div col|colwidth=40em}}
* {{Annotated link|Data validation}}
* {{Annotated link|Cross-browser testing}}
* [[Database testing]], testing of databases
* {{Annotated link|Domain testing}}
* {{Annotated link|Dynamic program analysis}}
* {{Annotated link|Formal verification}}
* {{Annotated link|Trace table}}
* {{Annotated link|Web testing}}
* [[SDET]] – Software Development Engineer in Test
{{div col end}}
 
== Further reading ==
 
* {{Cite magazine |last=Meyer |first=Bertrand |date=August 2008 |title=Seven Principles of Software Testing |url=https://se.ethz.ch/~meyer/publications/testing/principles.pdf |magazine=Computer |volume=41 |pages=99–101 |doi=10.1109/MC.2008.306 |access-date=November 21, 2017 |number=8}}
 
== External links ==
{{commons category}}
{{Wikiversity department}}
{{WVD}}
 
* {{curlie|Computers/Programming/Software_Testing/Products_and_Tools|Software testing tools and products}}
* [https://www.economist.com/technology-quarterly/2008/03/08/software-that-makes-software-better "Software that makes Software better" Economist.com]