Test oracle
{{Use American English|date=January 2021}}
{{other uses|Oracle (disambiguation)}}
In [[computing]], [[software engineering]], and [[software testing]], a '''test oracle''' (or just '''oracle''') is a mechanism for determining whether a test has passed or failed.<ref>Kaner, Cem; [http://www.testingeducation.org/k04/OracleExamples.htm ''A Course in Black Box Software Testing''], 2004</ref> The use of oracles involves comparing the output(s) of the system under test, for a given [[test case|test-case]] input, to the output(s) that the oracle determines that product should have. The term "test oracle" was first introduced in a paper by William E. Howden.<ref>{{cite journal |last1=Howden |first1=W.E. |date=July 1978 |title=Theoretical and Empirical Studies of Program Testing |journal=IEEE Transactions on Software Engineering |volume=4 |issue=4 |pages=293–298 |doi=10.1109/TSE.1978.231514 }}</ref> Additional work on different kinds of oracles was explored by [[Elaine Weyuker]].<ref>Weyuker, Elaine J.; "The Oracle Assumption of Program Testing", in ''Proceedings of the 13th International Conference on System Sciences (ICSS), Honolulu, HI, January 1980'', pp. 44-49</ref>
=== Specified ===
 
These oracles are typically associated with formalized approaches to software modeling and software code construction. They are connected to [[formal specification]],<ref>{{cite book |last1=Börger |first1=E |editor-last1=Hutter |editor-first1=D |editor-last2=Stephan |editor-first2=W |editor-last3=Traverso |editor-first3=P |editor-last4=Ullman |editor-first4=M |date=1999|title=High Level System Design and Analysis Using Abstract State Machines |journal=Applied Formal Methods — FM-Trends 98 |volume=1641 |pages=1–43 |doi=10.1007/3-540-48257-1_1 |series=Lecture Notes in Computer Science |isbn=978-3-540-66462-8 |citeseerx=10.1.1.470.3653 }}</ref> [[model-based design]] which may be used to generate test oracles,<ref>{{cite journal |last1=Peters |first1=D.K. |date=March 1998 |title=Using test oracles generated from program documentation |journal=IEEE Transactions on Software Engineering |volume=24 |issue=3 |pages=161–173 |doi=10.1109/32.667877 |citeseerx=10.1.1.39.2890 }}</ref> state transition specification for which oracles can be derived to aid [[model-based testing]]<ref>{{cite journal| author-last1=Utting |author-first1=Mark |author-last2=Pretschner |author-first2=Alexander |author-last3=Legeard |author-first3=Bruno |title = A taxonomy of model-based testing approaches |journal = Software Testing, Verification and Reliability |volume= 22|issue= 5 |issn= 1099-1689|doi=10.1002/stvr.456 |pages= 297–312|year=2012 |url=https://eprints.qut.edu.au/57853/1/master_pdflatex.pdf }}</ref> and [[conformance testing|protocol conformance testing]],<ref>{{cite book|author-link1=Marie-Claude Gaudel |last1=Gaudel |first1=Marie-Claude |editor-last1=Craeynest |editor-first1=D.|editor-last2=Strohmeier |editor-first2=A|date=2001 |title=Testing from Formal Specifications, a Generic Approach |journal= Reliable Software Technologies — Ada-Europe 2001 |volume=2043 |pages=35–48 |doi=10.1007/3-540-45136-6_3 |series=Lecture Notes in Computer Science |isbn=978-3-540-42123-8 
}}</ref> and [[design by contract]] for which the equivalent test oracle is an [[assertion (software development)|assertion]].
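
As a minimal illustration of an assertion acting as a specified test oracle, consider the following Python sketch (the function, algorithm, and tolerance are hypothetical, chosen only for illustration). A contract-style postcondition checks every result against the specification rather than against a pre-computed expected value for each input:

```python
def sqrt_approx(x: float) -> float:
    """Approximate the square root of x using Newton's method."""
    assert x >= 0, "precondition: input must be non-negative"
    guess = x if x > 1 else 1.0
    for _ in range(50):
        guess = (guess + x / guess) / 2
    # Postcondition serves as the test oracle: the result squared
    # must be close to x, for any input, per the specification.
    assert abs(guess * guess - x) < 1e-6, "postcondition violated"
    return guess
```

Because the oracle is derived from the specification itself, any input can be used as a test case without separately stating its expected output.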
 
Specified test oracles have a number of challenges. Formal specification relies on abstraction, which in turn may naturally introduce an element of imprecision, as no model can capture all behavior.<ref name="Oracle survey"/>{{rp|514}}
 
=== Derived ===
 
A derived test oracle differentiates correct and incorrect behavior by using information derived from artifacts of the system. These may include documentation, system execution results, and characteristics of versions of the system under test.<ref name="Oracle survey"/>{{rp|514}} Regression test suites (or reports) are an example of a derived test oracle – they are built on the assumption that the result from a previous system version can be used as an aid (oracle) for a future system version. Previously measured performance characteristics may be used as an oracle for future system versions, for example, to trigger a question about observed potential performance degradation. Textual documentation from previous system versions may be used as a basis to guide expectations in future system versions.
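
A regression-style derived oracle can be sketched as follows (the recorded input/output pairs and the sort function are hypothetical, standing in for outputs captured from a previous, trusted version of a real system):

```python
# Input/output pairs recorded from a previous, trusted version (N-1).
recorded = [([3, 1, 2], [1, 2, 3]), ([], []), ([5, 5], [5, 5])]

def new_version_sort(xs):
    """Stand-in for the new version (N) of the system under test."""
    return sorted(xs)

# The previous version's outputs serve as the oracle for the new version:
# any input whose output changed is flagged as a potential regression.
regressions = [inp for inp, expected in recorded
               if new_version_sort(inp) != expected]
assert not regressions, f"behavior changed for inputs: {regressions}"
```

The oracle here does not establish absolute correctness; it only establishes that the new version behaves like the old one on the recorded cases.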
 
A pseudo-oracle<ref name="Oracle survey"/>{{rp|515}} falls into the category of derived test oracle. A pseudo-oracle, as defined by Weyuker,<ref name="pseudo-oracle">{{cite journal |last1=Weyuker |first1=E.J. |date=November 1982 |title=On Testing Non-Testable Programs |journal=The Computer Journal |volume=25 |issue=4 |pages=465–470 |doi=10.1093/comjnl/25.4.465 |doi-access=free }}</ref> is a separately written program which can take the same input as the program or system under test so that their outputs may be compared to understand if there might be a problem to investigate.
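
Weyuker's pseudo-oracle idea can be sketched in a few lines (both implementations below are illustrative): a second, independently written program accepts the same inputs as the program under test, and any disagreement between the two outputs flags a case to investigate.

```python
import random

def sum_iterative(xs):
    """Implementation under test: accumulate a running total."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_reference(xs):
    """Pseudo-oracle: a separately written program taking the same input."""
    return sum(xs)

# Run both programs on the same inputs; a mismatch signals a
# potential problem in one implementation or the other.
disagreements = []
for _ in range(100):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    if sum_iterative(data) != sum_reference(data):
        disagreements.append(data)
```

A disagreement does not say which program is wrong – only that at least one of them is, which is exactly the "problem to investigate" signal the pseudo-oracle provides.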
=== Implicit ===
 
An implicit test oracle relies on implied information and assumptions.<ref name="Oracle survey"/>{{rp|518}} For example, a program crash implies unwanted behavior, and so can itself serve as an oracle indicating that there may be a problem. There are a number of ways to search and test for unwanted behavior – sometimes called negative testing – with specialized subsets such as [[fuzzing]].
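
An implicit crash oracle can be sketched as follows (the harness and the `head` function are hypothetical): no expected output is specified for any input; an unhandled exception alone is taken as evidence of a possible defect.

```python
def crash_oracle(func, inputs):
    """Implicit oracle: an unhandled exception ('crash') is treated as
    evidence of a possible defect, with no specified expected output."""
    suspects = []
    for inp in inputs:
        try:
            func(inp)
        except Exception as exc:
            suspects.append((inp, exc))
    return suspects

def head(xs):
    """Hypothetical system under test; crashes on an empty list."""
    return xs[0]

# Only the empty-list input raises, so only it is flagged.
suspects = crash_oracle(head, [[1, 2], [], [3]])
```

This is the style of oracle a fuzzer typically relies on: it can run millions of generated inputs precisely because "did not crash" is the only check applied.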
 
Implicit test oracles have limitations, as they rely on implied conclusions and assumptions. For example, a program or process crash may not be a priority issue if the system is fault-tolerant and operating under a form of self-healing/[[self-management (computer science)|self-management]]. Implicit test oracles may also be susceptible to false positives due to environment dependencies.