'''Random testing''' is a black-box [[software testing]] technique in which programs are tested by [[random number generation|generating]] random, independent inputs. The outputs are compared against the software specification to determine whether each test passes or fails.<ref name="Hamlet94"/> In the absence of a specification, the language's exceptions are used as an oracle: if an exception arises during test execution, the program contains a fault. Random testing is also used as a way to avoid biased testing.
== Overview ==
Random testing for hardware was first examined by [[Melvin Breuer]] in 1971, and an initial effort to evaluate its effectiveness was made by Pratima and [[Vishwani Agrawal]] in 1975.<ref>{{cite journal|title=Probabilistic Analysis of Random Test Generation Method for Irredundant Combinational Logic Networks|first1=P.|last1=Agrawal|first2=V. D.|last2=Agrawal|date=1 July 1975|journal=IEEE Transactions on Computers|volume=C-24|issue=7|pages=691–695|doi=10.1109/T-C.1975.224289}}</ref>
In software, Duran and Ntafos examined random testing in 1984.<ref>{{cite journal|title=An Evaluation of Random Testing|first1=J. W.|last1=Duran|first2=S. C.|last2=Ntafos|date=1 July 1984|journal=IEEE Transactions on Software Engineering|volume=SE-10|issue=4|pages=438–444|doi=10.1109/TSE.1984.5010257}}</ref>
The use of hypothesis testing as a theoretical basis for random testing was described by Howden in ''Functional Testing and Analysis''. The book also contained the development of a simple formula for estimating the number of tests ''n'' that are needed to have confidence at least 1 − 1/''n'' in a failure rate of no larger than 1/''n''. The formula is the lower bound ''n'' log ''n'', which indicates the large number of failure-free tests needed to have even modest confidence in a modest failure rate bound.<ref name=":0">{{Cite book|last=Howden|first=William|title=Functional Program Testing and Analysis|publisher=McGraw Hill|year=1987|isbn=0-07-030550-1|___location=New York|pages=51–53}}</ref>
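This bound can be checked with a standard calculation (not taken from the cited text): if the true failure rate is 1/''n'', the probability that ''N'' independent random tests all pass is (1 − 1/''n'')<sup>''N''</sup>. Taking ''N'' = ''n'' ln ''n'' gives

<math>\left(1 - \frac{1}{n}\right)^{n \ln n} \approx e^{-\ln n} = \frac{1}{n},</math>

so about ''n'' log ''n'' failure-free tests are needed before the probability that a failure rate of 1/''n'' has gone undetected falls to 1/''n'', i.e. confidence 1 − 1/''n''.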
Consider the following C++ function:
<syntaxhighlight lang="cpp">
int myAbs(int x) {
    if (x > 0) {
        return x;
    }
    else {
        return x;  // bug: should be '-x'
    }
}
</syntaxhighlight>
Now the random tests for this function could be {123, 36, -35, 48, 0}. Only the value '-35' triggers the bug. If there is no reference implementation to check the result, the bug could still go unnoticed. However, an [[assertion (software development)|assertion]] can be used to check a general property of the results:
<syntaxhighlight lang="cpp">
void testAbs(int numTests) {
    for (int i = 0; i < numTests; i++) {
        int x = getRandomInput();
        int result = myAbs(x);
        assert(result >= 0);
    }
}
</syntaxhighlight>
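The sketch above can be made self-contained; the generator below (drawing uniformly from [−100, 100]) and the test harness are illustrative choices, not part of the original example:

<syntaxhighlight lang="cpp">
#include <cassert>
#include <random>

// The buggy function from the example above: negative inputs are
// returned unchanged instead of negated.
int myAbs(int x) {
    if (x > 0) {
        return x;
    } else {
        return x;  // bug: should be '-x'
    }
}

// Seeded PRNG so a failing input can be reproduced.
std::mt19937 rng(1);

int getRandomInput() {
    std::uniform_int_distribution<int> dist(-100, 100);
    return dist(rng);
}

// Runs random tests and reports whether the assertion-style property
// myAbs(x) >= 0 was ever violated.
bool findsViolation(int numTests) {
    for (int i = 0; i < numTests; i++) {
        int x = getRandomInput();
        if (myAbs(x) < 0) {
            return true;  // property violated: the bug is detected
        }
    }
    return false;
}
</syntaxhighlight>

With 1000 random draws, the probability of never sampling a negative input is negligible, so the bug is found almost surely.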
The reference implementation is sometimes available, e.g. when implementing a simple algorithm in a much more complex way for better performance. For example, to test an implementation of the [[Schönhage–Strassen algorithm]], the standard multiplication operator of the language can be used as the reference:
<syntaxhighlight lang="cpp">
long getRandomInput() {
    // returns a pseudorandom long
}

void testFastMultiplication(int numTests) {
    for (int i = 0; i < numTests; i++) {
        long x = getRandomInput();
        long y = getRandomInput();
        long result = fastMultiplication(x, y);
        assert(x * y == result);
    }
}
</syntaxhighlight>
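A self-contained variant of this test might look as follows; <code>fastMultiplication</code> is replaced by a trivial stand-in here, since the point is the shape of the random comparison loop, not the algorithm itself:

<syntaxhighlight lang="cpp">
#include <cassert>
#include <cstdint>
#include <random>

// Stand-in for the fast multiplication routine under test; a real
// implementation would use e.g. the Schönhage–Strassen algorithm.
int64_t fastMultiplication(int64_t x, int64_t y) {
    return x * y;
}

std::mt19937_64 rng(42);  // seeded, so failures are reproducible

// Inputs are kept small enough that the reference product x * y
// cannot overflow a 64-bit integer.
int64_t getRandomInput() {
    std::uniform_int_distribution<int64_t> dist(-1000000, 1000000);
    return dist(rng);
}

// Compares the implementation under test against the built-in
// multiplication operator, which serves as the reference.
bool testFastMultiplication(int numTests) {
    for (int i = 0; i < numTests; i++) {
        int64_t x = getRandomInput();
        int64_t y = getRandomInput();
        if (fastMultiplication(x, y) != x * y) {
            return false;
        }
    }
    return true;
}
</syntaxhighlight>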
While this example is limited to simple types (for which a simple random generator can be used), tools targeting object-oriented languages typically explore the program under test to find generators (constructors or methods returning objects of a needed type) and call them using random inputs, either themselves generated the same way or, for primitive types, generated with a pseudo-random generator. Such approaches then maintain a pool of randomly generated objects and use a probability for either reusing a generated object or creating a new one.<ref name="AutoTest"/>
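The pool-based strategy can be sketched as follows; the <code>Counter</code> class, the reuse probability of 0.5, and the contract being checked are all illustrative assumptions, not from the cited tool:

<syntaxhighlight lang="cpp">
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical class under test, with a trivial contract:
// increment() raises value() by exactly one.
class Counter {
public:
    void increment() { ++value_; }
    int value() const { return value_; }
private:
    int value_ = 0;
};

std::mt19937 rng(123);

// With probability reuseProbability an object from the pool is reused;
// otherwise a generator (here: the default constructor) creates a new one.
Counter& pickObject(std::vector<Counter>& pool, double reuseProbability) {
    std::bernoulli_distribution reuse(reuseProbability);
    if (!pool.empty() && reuse(rng)) {
        std::uniform_int_distribution<std::size_t> idx(0, pool.size() - 1);
        return pool[idx(rng)];
    }
    pool.emplace_back();
    return pool.back();
}

// A random testing session: repeatedly pick an object, call a method on
// it, and check the contract as the oracle.
std::size_t runRandomSession(int calls) {
    std::vector<Counter> pool;
    for (int i = 0; i < calls; i++) {
        Counter& c = pickObject(pool, 0.5);
        int before = c.value();
        c.increment();
        assert(c.value() == before + 1);  // oracle: the class contract
    }
    return pool.size();
}
</syntaxhighlight>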
== Definition ==
According to the seminal paper on random testing by D. Hamlet:
<blockquote>[..] the technical, mathematical meaning of "random testing" refers to an explicit lack of "system" in the choice of test data, so that there is no correlation among different tests.<ref name=Hamlet94>{{cite book|title=Encyclopedia of Software Engineering|year=1994|publisher=John Wiley and Sons|isbn=978-0471540021}}</ref></blockquote>
==Strengths and weaknesses==
{{Unreferenced section|date=August 2014}}
Random testing is praised for the following strengths:
*It is cheap to use: it does not need to be smart about the program under test.
*It does not have any bias: unlike manual testing, it does not overlook bugs because of misplaced trust in some code.
*It is quick to find bug candidates: it typically takes a couple of minutes to perform a testing session.
*If the software is properly specified, it finds real bugs.

The following weaknesses have been described:
*It only finds basic bugs (e.g. [[null pointer]] dereferencing).
*It is only as precise as the specification, and specifications are typically imprecise.
*It compares poorly with other techniques for finding bugs (e.g. [[static program analysis]]).
*If different inputs are randomly selected on each test run, this can create problems for [[continuous integration]] because the same tests will pass or fail randomly.<ref name="so">{{cite web|url=https://stackoverflow.com/q/636353 |title=Is it a bad practice to randomly-generate test data?|website=stackoverflow.com|accessdate=15 November 2017}}</ref>
*Some argue that it would be better to thoughtfully cover all relevant cases with manually constructed tests in a white-box fashion than to rely on randomness.<ref name="so" />
*It may require a very large number of tests for modest levels of confidence in modest failure rates. For example, it will require 459 failure-free tests to have at least 99% confidence that the probability of failure is less than 1/100.<ref name=":0" />
==Types of random testing==

=== With respect to the input ===
* Random input sequence generation (i.e. a sequence of method calls)
* Random sequence of data inputs (sometimes called stochastic testing)
* Random data selection from an existing database

=== Guided vs. unguided ===
* Undirected random test generation, with no heuristics to guide the search
* Directed random test generation, e.g. "feedback-directed random test generation"<ref name="PachecoLET2007">{{cite book|last=Pacheco|first=Carlos|author2=Shuvendu K. Lahiri |author3=Michael D. Ernst |author4=Thomas Ball |chapter=Feedback-Directed Random Test Generation |title=29th International Conference on Software Engineering (ICSE'07)|date=May 2007|pages=75–84|doi=10.1109/ICSE.2007.37 |isbn=978-0-7695-2828-1 |chapter-url=http://people.csail.mit.edu/cpacheco/publications/feedback-random.pdf|issn=0270-5257}}</ref> and "adaptive random testing"<ref name="ART">{{citation |last1=T.Y. Chen |last2=F.-C. Kuo |last3=R.G. Merkel |last4=T.H. Tse |title=Adaptive random testing: The ART of test case diversity |journal=Journal of Systems and Software |volume=83 |issue=1 |pages=60–66 |date=2010 |doi=10.1016/j.jss.2009.02.022|hdl=10722/89054 |url=https://figshare.com/articles/journal_contribution/26243711 |hdl-access=free }}</ref>
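For numeric inputs, the adaptive variant can be sketched with a fixed-size candidate set: each round draws ''k'' random candidates and executes the one farthest from all previously executed tests. This is a simplified illustration of the cited technique; the input ___domain and parameters are arbitrary:

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <random>
#include <vector>

std::mt19937 rng(7);

// Distance from a candidate to the closest previously executed input.
double nearestDistance(double candidate, const std::vector<double>& executed) {
    double best = std::numeric_limits<double>::infinity();
    for (double e : executed) {
        best = std::min(best, std::fabs(candidate - e));
    }
    return best;
}

// Fixed-size-candidate-set adaptive random testing over [0, 1]:
// each round draws k random candidates and keeps the one that is
// farthest from every input executed so far, spreading tests evenly.
std::vector<double> adaptiveRandomInputs(int numTests, int k) {
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    std::vector<double> executed;
    for (int t = 0; t < numTests; t++) {
        double best = dist(rng);
        for (int i = 1; i < k; i++) {
            double candidate = dist(rng);
            if (nearestDistance(candidate, executed) >
                nearestDistance(best, executed)) {
                best = candidate;
            }
        }
        executed.push_back(best);
    }
    return executed;
}
</syntaxhighlight>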
== Implementations ==
Some tools implementing random testing:
* [[QuickCheck]] - a well-known test tool, originally developed for [[Haskell (programming language)|Haskell]] but ported to many other languages, that generates random sequences of API calls based on a model and verifies system properties that should hold after each run
* AutoTest - a tool integrated into EiffelStudio that automatically tests Eiffel code with contracts, based on the eponymous research prototype.<ref name="AutoTest"/>
* York Extensible Testing Infrastructure (YETI) - a language-agnostic tool which targets various programming languages (Java, JML, CoFoJa, .NET, C, Kermeta).
* GramTest - a grammar-based random testing tool written in Java; it uses BNF notation to specify input grammars.
== Critique ==
Random testing is criticized in the same seminal paper:
<blockquote>Random testing has only a specialized niche in practice, mostly because an effective oracle is seldom available, but also because of difficulties with the operational profile and with generation of pseudorandom input values.<ref name="Hamlet94"/></blockquote>

(An ''oracle'' is an instrument for verifying whether the outcomes match the program specification; an ''operational profile'' is knowledge about the usage patterns of the program, and thus which parts are more important.)

For programming languages and platforms which have contracts (e.g. Eiffel, .NET or various extensions of Java like JML, CoFoJa...), contracts act as natural oracles and the approach has been applied successfully.<ref name="AutoTest">{{cite web|url=http://se.inf.ethz.ch/research/autotest/|title=AutoTest - Chair of Software Engineering|website=se.inf.ethz.ch|accessdate=15 November 2017}}</ref> In particular, random testing finds more bugs than manual inspections or user reports (albeit different ones).<ref name="ManualvsRandom">{{cite journal|title=On the number and nature of faults found by random testing|year=2009|author=Ilinca Ciupa|author2=Alexander Pretschner|author3=Manuel Oriol|author4=Andreas Leitner|author5=Bertrand Meyer|journal=Software Testing, Verification and Reliability|doi=10.1002/stvr.415|volume=21|pages=3–28}}</ref>
==See also==
*[[Corner case]]
*[[Edge case]]
*[[Concolic testing]]
==References==
{{Reflist}}
{{software testing}}
[[Category:Software testing]]