[[File:TestingCup-Polish-Championship-in-Software-Testing-Katowice-2016.jpg|thumb|TestingCup {{endash}} Polish Championship in Software Testing, [[Katowice]], May 2016]]
'''Software testing''' is the act of checking whether [[software]] satisfies expectations.
Software testing can provide objective, independent information about the [[Quality (business)|quality]] of software and the [[risk]] of its failure to a [[User (computing)|user]] or sponsor.<ref name="Kaner 1">{{Cite conference |last=Kaner |first=Cem |author-link=Cem Kaner |date=November 17, 2006 |title=Exploratory Testing |url=https://kaner.com/pdfs/ETatQAI.pdf |conference=Quality Assurance Institute Worldwide Annual Software Testing Conference |___location=Orlando, FL |access-date=November 22, 2014}}</ref>
Software testing can determine the [[Correctness (computer science)|correctness]] of software for specific [[Scenario (computing)|scenarios]] but cannot determine correctness for all scenarios. It cannot find all [[software bug|bugs]].
Based on the criteria for measuring correctness from an [[test oracle|oracle]], software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include [[specification]]s, [[Design by contract|contracts]], comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing [[source code|code]] and its associated [[documentation]].
Information learned from software testing may be used to improve the process by which software is developed.<ref name="kolawa">{{Cite book |last1=Kolawa |first1=Adam |url=http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html |title=Automated Defect Prevention: Best Practices in Software Management |last2=Huizinga |first2=Dorota |publisher=Wiley-IEEE Computer Society Press |year=2007 |isbn=978-0-470-04212-0}}</ref>{{rp|41–43}}
It is suggested that testing follow a "pyramid" approach, wherein most tests are [[unit tests]], followed by [[Integration testing|integration tests]], and finally [[End-to-end testing|end-to-end tests]], which should be the fewest in number.
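For illustration, a minimal unit test at the base of the pyramid might look like the following sketch (the <code>add_vat</code> function is a hypothetical example, not from any particular library):
<syntaxhighlight lang="python">
# Illustrative unit test at the base of the test pyramid.
# The function under test (add_vat) is a hypothetical example.
import unittest

def add_vat(net_price: float, rate: float = 0.20) -> float:
    """Return the gross price for a given net price and VAT rate."""
    if net_price < 0:
        raise ValueError("net_price must be non-negative")
    return round(net_price * (1 + rate), 2)

class AddVatTest(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(add_vat(100.0), 120.0)

    def test_zero_price(self):
        self.assertEqual(add_vat(0.0), 0.0)

    def test_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            add_vat(-1.0)

if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>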
== Economics ==
A study conducted by [[NIST]] in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.
[[Outsourcing]] software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.
== History ==
[[Glenford J. Myers]] initially introduced the separation of [[debugging]] from testing in 1979.<ref name="Myers 1979">{{Cite book |last=Myers |first=Glenford J. |url=https://archive.org/details/artofsoftwaretes00myer |title=The Art of Software Testing |publisher=John Wiley and Sons |year=1979 |isbn=978-0-471-04328-7 |author-link=Glenford Myers}}</ref> Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."<ref name="Myers 1979" />{{rp|16}}), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
== Goals ==
Software testing typically includes handling software bugs {{endash}} a defect in the [[source code|code]] that causes an undesirable result.<ref name="IEEEglossary">{{Citation |date=1990 |publisher=IEEE |doi=10.1109/IEEESTD.1990.101064 |isbn=978-1-55937-067-7 |title=IEEE Standard Glossary of Software Engineering Terminology }}</ref>{{rp|31}} Bugs generally slow testing progress and involve [[programmer]] assistance to [[debug]] and fix.
Not all defects cause a failure. For example, a defect in [[dead code]] will not be considered a failure.
A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new [[computer hardware]], changes in [[source data|data]], and interacting with different software.<ref>{{Cite web |date=March 31, 2011 |title=Certified Tester Foundation Level Syllabus |url=https://www.istqb.org/downloads/send/2-foundation-level-documents/3-foundation-level-syllabus-2011.html |access-date=December 15, 2017 |publisher=[[International Software Testing Qualifications Board]] |at=Section 1.1.2 |format=pdf |archive-date=October 28, 2017 |archive-url=https://web.archive.org/web/20171028051659/http://www.istqb.org/downloads/send/2-foundation-level-documents/3-foundation-level-syllabus-2011.html |url-status=dead }}</ref>
A single defect may result in multiple failure symptoms.
Although testing for every possible input is not feasible, testing can use [[combinatorics]] to maximize coverage while minimizing tests.<ref>{{Cite conference |last1=Ramler |first1=Rudolf |last2=Kopetzky |first2=Theodorich |last3=Platz |first3=Wolfgang |date=April 17, 2012 |title=Combinatorial Test Design in the TOSCA Testsuite: Lessons Learned and Practical Implications |conference=IEEE Fifth International Conference on Software Testing and Validation (ICST) |___location=Montreal, QC, Canada |doi=10.1109/ICST.2012.142}}</ref>
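For illustration, the following minimal greedy sketch of all-pairs (pairwise) test selection ensures that every pair of values across any two parameters appears in at least one test, while using far fewer tests than the full cartesian product; the browser/OS/locale parameters are hypothetical, and production tools use more sophisticated algorithms:
<syntaxhighlight lang="python">
from itertools import combinations, product

def pairs_covered(test):
    """All (parameter pair, value pair) combinations one test covers."""
    return {((i, j), (test[i], test[j]))
            for i, j in combinations(range(len(test)), 2)}

def pairwise_suite(parameters):
    # Every value pair across every pair of parameters must be covered.
    uncovered = set()
    for test in product(*parameters):
        uncovered |= pairs_covered(test)
    suite = []
    while uncovered:
        # Greedily pick the candidate covering the most uncovered pairs.
        best = max(product(*parameters),
                   key=lambda t: len(pairs_covered(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_covered(best)
    return suite

browsers = ["Firefox", "Chrome", "Safari"]
systems = ["Windows", "macOS", "Linux"]
locales = ["en", "de", "ja"]
tests = pairwise_suite([browsers, systems, locales])
print(len(tests), "tests instead of", 3 * 3 * 3)  # typically 9 or 10, not 27
</syntaxhighlight>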
== Categorization ==
{{Anchor|Testing types}}
{{Main|Software testing tactics}}
=== Levels ===
Software testing can be categorized into levels based on how much of the [[software system]] is the focus of a test.<ref name="Computer.org">{{Cite book |title=Guide to the Software Engineering Body of Knowledge |publisher=IEEE Computer Society |year=2014 |isbn=978-0-7695-5166-1 |editor-last=Bourque |editor-first=Pierre |series=3.0 |chapter=Chapter 5 |access-date=January 2, 2018 |editor-last2=Fairley |editor-first2=Richard E. |chapter-url=https://www.computer.org/web/swebok/v3}}</ref><ref name="BourqueSWEBOK14-4">{{Cite book |title=SWEBOK v3.0: Guide to the Software Engineering Body of Knowledge |publisher=IEEE |year=2014 |isbn=978-0-7695-5166-1 |editor-last=Bourque, P. |pages=4–1–4–17 |chapter=Chapter 4: Software Testing |access-date=July 13, 2018 |editor-last2=Fairley, R.D. |chapter-url=http://www4.ncsu.edu/~tjmenzie/cs510/pdf/SWEBOKv3.pdf |archive-date=June 19, 2018 |archive-url=https://web.archive.org/web/20180619003324/http://www4.ncsu.edu/~tjmenzie/cs510/pdf/SWEBOKv3.pdf |url-status=dead }}</ref><ref name="DooleySoftware11">{{Cite book |last=Dooley, J. |url=https://books.google.com/books?id=iOqP9_6w-18C&pg=PA193 |title=Software Development and Professional Practice |publisher=APress |year=2011 |isbn=978-1-4302-3801-0 |pages=193–4}}</ref><ref name="WiegersCreating13">{{Cite book |last=Wiegers, K. |url=https://books.google.com/books?id=uVsUAAAAQBAJ&pg=PA212 |title=Creating a Software Engineering Culture |publisher=Addison-Wesley |year=2013 |isbn=978-0-13-348929-3 |pages=211–2}}</ref>
==== Unit testing ====
=== Static, dynamic, and passive testing ===
There are many approaches to software testing. [[Code review|Reviews]], [[software walkthrough|walkthrough]]s, or [[Software inspection|inspections]] are referred to as static testing, whereas executing programmed code with a given set of [[Test case (software)|test case]]s is referred to as [[dynamic testing]].<ref name="GrahamFoundations08">{{Cite book |last1=Graham, D. |url=https://books.google.com/books?id=Ss62LSqCa1MC&pg=PA57 |title=Foundations of Software Testing |last2=Van Veenendaal, E. |last3=Evans, I. |publisher=Cengage Learning |year=2008 |isbn=978-1-84480-989-9 |pages=57–58}}</ref><ref name="OberkampfVerif10">{{Cite book |last1=Oberkampf, W.L. |url=https://books.google.com/books?id=7d26zLEJ1FUC&pg=PA155 |title=Verification and Validation in Scientific Computing |last2=Roy, C.J. |publisher=Cambridge University Press |year=2010 |isbn=978-1-139-49176-1 |pages=154–5}}</ref>
Static testing is often implicit, like proofreading; in addition, programming tools and text editors check source code structure, and compilers (pre-compilers) check syntax and data flow as [[static program analysis]]. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete [[Function (computer science)|functions]] or modules.<ref name="GrahamFoundations08" /><ref name="OberkampfVerif10" /> Typical techniques for this are either using [[Method stub|stubs]]/drivers or execution from a [[debugger]] environment.<ref name="OberkampfVerif10" />
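For illustration, the following minimal sketch tests a <code>checkout</code> function before the payment-gateway module it depends on has been written, by substituting a stub (all names here are hypothetical):
<syntaxhighlight lang="python">
# Dynamic testing of an incomplete program: a stub stands in for the
# unfinished payment gateway module. All names are hypothetical.
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Code under test: charge the cart total, return True on success."""
    response = gateway.charge(amount=cart_total)
    return response["status"] == "ok"

# Stub replacing the module that does not exist yet.
gateway_stub = Mock()
gateway_stub.charge.return_value = {"status": "ok"}

assert checkout(25.00, gateway_stub) is True
gateway_stub.charge.assert_called_once_with(amount=25.00)
</syntaxhighlight>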
=== Preset testing vs adaptive testing ===
The choice of testing strategy depends on whether the tests to be applied to the implementation under test (IUT) are decided before the testing plan begins execution (preset testing<ref>{{Cite journal |last1=Lee |first1=D. |last2=Yannakakis |first2=M. |date=1996 |title=Principles and methods of testing finite state machines-a survey |url=https://doi.org/10.1109/5.533956 |journal=Proceedings of the IEEE |volume=84 |issue=8 |pages=1090–1123 |doi=10.1109/5.533956 |url-access=subscription }}</ref>) or whether each input applied to the IUT may depend dynamically on the outputs obtained from previous tests (adaptive testing<ref>{{Cite book |last1=Petrenko |first1=A. |last2=Yevtushenko |first2=N. |title=In Testing Software and Systems: 23rd IFIP WG 6.1 International Conference, ICTSS 2011, Paris, France, November 7-10 |chapter=Adaptive testing of deterministic implementations specified by nondeterministic FSMs |series=Lecture Notes in Computer Science |chapter-url=https://doi.org/10.1007/978-3-642-24580-0_12 |year=2011 |volume=7019 |publisher=Springer Berlin Heidelberg |pages=162–178 |doi=10.1007/978-3-642-24580-0_12 |isbn=978-3-642-24579-4 }}</ref><ref>{{Cite book |last1=Petrenko |first1=A. |last2=Yevtushenko |first2=N. |title=In 2014 IEEE 15th International Symposium on High-Assurance Systems Engineering |chapter=Adaptive testing of nondeterministic systems with FSM |url=https://doi.org/10.1109/HASE.2014.39 |year=2014 |publisher=IEEE |pages=224–228 |doi=10.1109/HASE.2014.39 |isbn=978-1-4799-3466-9 }}</ref>).
=== Black/white box ===
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box that includes aspects of both boxes may also be applied to software testing methodology.<ref name="LimayeSoftware09">{{Cite book |last=Limaye, M.G. |url=https://books.google.com/books?id=zUm8My7SiakC&pg=PA108 |title=Software Testing |publisher=Tata McGraw-Hill Education |year=2009 |isbn=978-0-07-013990-9 |pages=108–11}}</ref><ref name="SalehSoftware09">{{Cite book |last=Saleh, K.A. |url=https://books.google.com/books?id=N69KPjBEWygC&pg=PA224 |title=Software Engineering |publisher=J. Ross Publishing |year=2009 |isbn=978-1-932159-94-3 |pages=224–41}}</ref>
==== White-box testing ====
{{Main|White-box testing}}
[[File:White Box Testing Approach.png|alt=White Box Testing Diagram|thumb|White box testing diagram]]
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs.
While white-box testing can be applied at the [[unit testing|unit]], [[integration testing|integration]], and [[system testing|system]] levels of the software testing process, it is usually done at the unit level.<ref name="AmmannIntro16" /> It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
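For illustration, the following minimal white-box sketch chooses one input per branch of a hypothetical <code>classify</code> function, so that every path through the code is exercised:
<syntaxhighlight lang="python">
def classify(age: int) -> str:
    if age < 0:
        return "invalid"
    elif age < 18:
        return "minor"
    else:
        return "adult"

# One input per path through the code (full branch coverage).
assert classify(-1) == "invalid"
assert classify(17) == "minor"
assert classify(18) == "adult"
</syntaxhighlight>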
* [[Static testing]] methods
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important [[function points]] have been tested.<ref name="CornettCode96">{{Cite web |last=Cornett |first=Steve |date=c. 1996 |title=Code Coverage Analysis |url=http://www.bullseye.com/coverage.html |website=Bullseye Testing Technology}}</ref> Code coverage as a [[software metric]] can be reported as a percentage for:
:* ''Function coverage'', which reports on functions executed
:* ''Statement coverage'', which reports on the number of lines executed to complete the test
:* ''Decision coverage'', which reports on whether both the True and the False branch of a given test has been executed
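Statement coverage of a test run can be measured automatically. As an illustrative sketch, the <code>coverage</code> package for Python (assumed to be installed separately; the <code>tests</code> directory name is hypothetical) records which statements a test suite executes:
<syntaxhighlight lang="python">
import coverage
import unittest

cov = coverage.Coverage()
cov.start()

# Run a test suite while coverage data is being recorded.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report()  # prints per-file statement coverage percentages
</syntaxhighlight>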
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.<ref name="Patton">{{Cite book |last=Patton |first=Ron |url=https://archive.org/details/softwaretesting0000patt |title=Software Testing |publisher=Sams Publishing |year=2005 |isbn=978-0-672-32798-8 |edition=2nd |___location=Indianapolis}}</ref> Black-box testing methods include: [[equivalence partitioning]], [[boundary value analysis]], [[all-pairs testing]], [[state transition table]]s, [[decision table]] testing, [[fuzz testing]], [[model-based testing]], [[use case]] testing, [[exploratory testing]], and specification-based testing.<ref name="LimayeSoftware09" /><ref name="SalehSoftware09" /><ref name="BlackPragmatic11" />
Specification-based testing aims to test the functionality of software according to the applicable requirements.<ref>{{Cite thesis |last=Laycock |first=Gilbert T. |year=1993 |title=The Theory and Practice of Specification Based Software Testing |degree=dissertation |publisher=Department of Computer Science, [[University of Sheffield]]}}</ref> This level of testing usually requires thorough [[test case]]s to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case.
Black-box testing can be used at any level of testing, although usually not at the unit level.
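For illustration, the following minimal black-box sketch applies [[boundary value analysis]] to a hypothetical specification, "discounts apply to orders of 10 to 99 items", choosing test values on and immediately around each boundary:
<syntaxhighlight lang="python">
def discount_applies(quantity: int) -> bool:
    """Hypothetical implementation of the specification above."""
    return 10 <= quantity <= 99

# Values on and around each boundary, derived from the specification
# alone, with no knowledge of the implementation.
for quantity, expected in [(9, False), (10, True), (11, True),
                           (98, True), (99, True), (100, False)]:
    assert discount_applies(quantity) is expected, quantity
</syntaxhighlight>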
'''Component interface testing'''
Component interface testing is a variation of [[black-box testing]], with the focus on the data values beyond just the related actions of a subsystem component.<ref name="MathurFound11-63">{{Cite book |last=Mathur, A.P. |url=https://books.google.com/books?id=hyaQobu44xUC&pg=PA18 |title=Foundations of Software Testing |publisher=Pearson Education India |year=2011 |isbn=978-81-317-5908-0 |page=63}}</ref> The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.<ref name="Clapp">{{Cite book |last=Clapp |first=Judith A. |url=https://books.google.com/books?id=wAq0rnyiGMEC&pg=PA313 |title=Software Quality Control, Error Analysis, and Testing |year=1995 |isbn=978-0-8155-1363-6 |page=313 |publisher=William Andrew |access-date=January 5, 2018}}</ref><ref name="Mathur">{{Cite book |last=Mathur |first=Aditya P. |url=https://books.google.com/books?id=yU-rTcurys8C&pg=PR38 |title=Foundations of Software Testing |publisher=Pearson Education India |year=2007 |isbn=978-81-317-1660-1 |page=18}}</ref> The data being passed can be considered as "message packets", and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit.
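For illustration, the following minimal sketch checks the types and value ranges of a "message packet" produced by one unit before it is passed to another (all unit and field names are hypothetical):
<syntaxhighlight lang="python">
def produce_reading():        # unit A: produces a "message packet"
    return {"sensor_id": 7, "celsius": 21.5}

def validate_packet(packet):  # interface check between the units
    assert isinstance(packet["sensor_id"], int) and packet["sensor_id"] > 0
    assert isinstance(packet["celsius"], float)
    assert -273.15 <= packet["celsius"] <= 1000.0  # plausible range
    return packet

def consume_reading(packet):  # unit B: consumes the packet
    return f"sensor {packet['sensor_id']}: {packet['celsius']} C"

print(consume_reading(validate_packet(produce_reading())))
</syntaxhighlight>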
===== Visual testing =====
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.<ref>{{Cite thesis |last=Lönnberg |first=Jan |title=Visual testing of software |date=October 7, 2003 |degree=MSc |publisher=Helsinki University of Technology}}</ref>
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
[[Ad hoc testing]] and [[exploratory testing]] are important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.
{{further|Graphical user interface testing}}
Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."<ref name="RansomeCore13">{{Cite book |last1=Ransome, J. |url=https://books.google.com/books?id=MX5cAgAAQBAJ&pg=PA140 |title=Core Software Security: Security at the Source |last2=Misra, A. |publisher=CRC Press |year=2013 |isbn=978-1-4665-6095-6 |pages=140–3}}</ref> Grey-box testing may also include [[Reverse coding|reverse engineering]] (using dynamic code analysis) to determine, for instance, boundary values or error messages.<ref name="RansomeCore13" /> Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting [[integration testing]] between two modules of code written by two different developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding a [[database]]. The tester can observe the state of the product being tested after performing certain actions such as executing [[SQL]] statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information, particularly for [[data type]] handling and [[exception handling]].
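For illustration, the following minimal grey-box sketch seeds an isolated in-memory [[SQLite]] database, exercises the code under test from the outside, and then queries the database to confirm that the expected state change was reflected (schema and function names are hypothetical):
<syntaxhighlight lang="python">
import sqlite3

def deactivate_user(conn, user_id):
    """Code under test: flags a user account as inactive."""
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
    conn.commit()

conn = sqlite3.connect(":memory:")  # isolated testing environment
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 1)")  # seed the database
conn.commit()

deactivate_user(conn, 1)

# Observe internal state to verify the expected change was reflected.
active, = conn.execute("SELECT active FROM users WHERE id = 1").fetchone()
assert active == 0
</syntaxhighlight>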
With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.<ref name="AmmannIntro16">{{Cite book |last1=Ammann, P. |url=https://books.google.com/books?id=58LeDQAAQBAJ&pg=PA26 |title=Introduction to Software Testing |last2=Offutt, J. |publisher=Cambridge University Press |year=2016 |isbn=978-1-316-77312-3 |page=26}}</ref>
{{See also|Software release life cycle#Beta}}
Beta testing comes after alpha testing and can be considered a form of external [[user acceptance testing]]. Versions of the software, known as [[beta version]]s, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or [[computer bug|bug]]s. Beta versions can be made available to the open public to increase the [[Feedback#In organizations|feedback]] field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time ([[perpetual beta]]).<ref>{{Cite web |last=O'Reilly |first=Tim |date=September 30, 2005 |title=What is Web 2.0 |url=https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html |website=[[O'Reilly Media]]}}</ref>
=== Functional vs non-functional testing ===
=== Usability testing ===
[[Usability testing]] checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled [[User experience design#Interaction designers|UI designers]]. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.<ref>{{Cite journal |last1=Taqi |first1=Farwa |last2=Batool |first2=Syeda Hina |last3=Arshad |first3=Alia |date=2024-05-23 |title=Development and Validation of Cloud Applications Usability Development Scale |url=https://www.tandfonline.com/doi/full/10.1080/10447318.2024.2351715 |journal=International Journal of Human–Computer Interaction |language=en |pages=1–16 |doi=10.1080/10447318.2024.2351715 |issn=1044-7318 |url-access=subscription }}</ref>
=== Accessibility testing ===
[[Accessibility]] testing is done to ensure that the software is accessible to persons with disabilities. Some of the common web accessibility tests are
* Ensuring that the color contrast between the font and the background color is appropriate (see the sketch below)
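A color-contrast check of this kind can be automated. The following minimal sketch implements the WCAG 2 relative-luminance and contrast-ratio formulas (level AA requires a ratio of at least 4.5:1 for normal text); the example colors are arbitrary:
<syntaxhighlight lang="python">
def relative_luminance(rgb):
    """WCAG 2 relative luminance of an (R, G, B) color, each 0-255."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    lighter = max(relative_luminance(color_a), relative_luminance(color_b))
    darker = min(relative_luminance(color_a), relative_luminance(color_b))
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
assert abs(contrast_ratio((0, 0, 0), (255, 255, 255)) - 21.0) < 1e-9
# Mid-gray text (#767676) on white just meets the AA threshold.
assert contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5
</syntaxhighlight>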
In an organization, testers may be in a separate team from the rest of the [[software development]] team or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers.
In the 1980s, the term ''software tester'' started to be used to denote a separate profession.
Notable software testing roles and titles include:<ref>{{Cite journal |last1=Gelperin |first1=David |author-link=Dave Gelperin |last2=Hetzel |first2=Bill |author-link2=William C. Hetzel |date=June 1, 1988 |title=The growth of software testing |journal=Communications of the ACM |volume=31 |issue=6 |pages=687–695 |doi=10.1145/62959.62965 |s2cid=14731341|doi-access=free }}</ref> ''test manager'', ''test lead'', ''test analyst'', ''test designer'', ''tester'', ''automation developer'', and ''test administrator''.<ref>{{Cite book |last1=Gregory |first1=Janet |title=More Agile Testing |last2=Crispin |first2=Lisa |publisher=Addison-Wesley Professional |year=2014 |isbn=978-0-13-374956-4 |pages=23–39}}</ref>
* [[Requirements analysis]]: testing should begin in the requirements phase of the [[software development life cycle]]. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
* Test planning: [[test strategy]], [[test plan]], [[testbed]] creation. Since many activities will be carried out during testing, a plan is needed.
* Test development: test procedures, [[Scenario test|test scenarios]], [[Test case (software)|test case]]s, test datasets, test scripts to use in testing software.
* Test execution: testers execute the software based on the plans and test documents, then report any errors found to the development team. This step can be complex for testers who lack programming knowledge.
* Test reporting: once testing is completed, testers generate metrics and make final reports on their [[test effort]] and whether or not the software tested is ready for release.
{{main|Verification and validation (software)|Software quality control}}
Software testing is used in association with [[Verification and validation (software)|verification and validation]]:<ref name="tran">{{Cite web |last=Tran |first=Eushiuan |year=1999 |title=Verification/Validation/Certification |url=https://users.ece.cmu.edu/~koopman/des_s99/verification/ |website=Topics in Dependable Embedded Systems |publisher=Carnegie Mellon University}}</ref>
* Verification: Have we built the software right? (i.e., does it implement the requirements).
* Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer).
=== Software quality assurance ===
In some organizations, software testing is part of a [[software quality assurance]] (SQA) process.<ref name="Kaner2" />{{rp|347}} In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the [[software engineering]] process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA ([[quality assurance]]) is the implementation of policies and procedures intended to prevent defects from reaching customers.
====Test case====
{{Main|Test case (software)}}
A [[Test case (software)|test case]] normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.<ref>{{Cite book |last=IEEE |title=IEEE standard for software test documentation |title-link=IEEE 829 |publisher=IEEE |year=1998 |isbn=978-0-7381-1443-9 |___location=New York}}</ref> This can be as terse as "for condition x the derived result is y", although normally test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy), but always with one expected result or outcome.

Optional fields include a test case ID, a test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps and descriptions, and a test case should contain a place for the actual result. The steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, past test results, who generated them, and the system configuration used to generate them may also be visible; these past results are usually stored in a separate table.
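For illustration, the fields described above can be represented as a simple record; the following is a minimal sketch in which all field names and values are hypothetical:
<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    requirement: str         # reference into the design specification
    preconditions: list[str]
    steps: list[str]
    test_input: str
    expected_result: str
    actual_result: str = ""  # filled in when the test is executed

login_case = TestCase(
    case_id="TC-042",
    requirement="REQ-7.3",
    preconditions=["user account exists"],
    steps=["open login page", "submit valid credentials"],
    test_input="user=alice",
    expected_result="dashboard is displayed",
)
print(login_case.case_id, "expects:", login_case.expected_result)
</syntaxhighlight>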
====Test script====
{{Div col|colwidth=40em}}
* {{Annotated link|Data validation}}
* [[Database testing]]
* {{Annotated link|Dynamic program analysis}}
* {{Annotated link|Formal verification}}
* {{Annotated link|Trace table}}
* {{Annotated link|Web testing}}
* [[SDET]] – Software Development Engineer in Test
{{div col end}}
== Further reading ==
* {{Cite magazine |last=Meyer |first=Bertrand |author-link=Bertrand Meyer |date=August 2008 |title=Seven Principles of Software Testing |magazine=[[Computer (magazine)|Computer]] |volume=41 |issue=8 |pages=99–101}}
== External links ==
{{commons category}}
{{Wikiversity department}}
* [https://www.economist.com/technology-quarterly/2008/03/08/software-that-makes-software-better "Software that makes Software better" Economist.com]