This article mainly discusses algorithms for the simpler kinds of string searching.
A similar problem introduced in the field of bioinformatics and genomics is the maximal exact matching (MEM).<ref>{{Cite journal|last1=Kurtz|first1=Stefan|last2=Phillippy|first2=Adam|last3=Delcher|first3=Arthur L|last4=Smoot|first4=Michael|last5=Shumway|first5=Martin|last6=Antonescu|first6=Corina|last7=Salzberg|first7=Steven L|date=2004|title=Versatile and open software for comparing large genomes |journal=Genome Biology |volume=5|issue=2|pages=R12|doi=10.1186/gb-2004-5-2-r12|issn=1465-6906 |pmc= 395750|pmid=14759262 |doi-access=free }}</ref> Given two strings, MEMs are common substrings that cannot be extended left or right without causing a mismatch.<ref>{{Cite journal|last1=Khan|first1=Zia|last2=Bloom|first2=Joshua S.|last3=Kruglyak|first3=Leonid|last4=Singh|first4=Mona|date=2009-07-01|title=A practical algorithm for finding maximal exact matches in large sequence datasets using sparse suffix arrays|url= |journal=Bioinformatics |volume=25|issue=13|pages=1609–1616|doi=10.1093/bioinformatics/btp275 |pmc=2732316|pmid=19389736}}</ref>
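As a toy illustration of this definition, the following Python sketch enumerates MEMs of two short strings by brute force; the function name, the <code>min_length</code> parameter and the example strings are illustrative only, and practical tools compute MEMs with suffix trees or suffix arrays instead.

<syntaxhighlight lang="python">
def maximal_exact_matches(s, t, min_length=1):
    """Brute-force enumeration of MEMs: common substrings of s and t that
    cannot be extended to the left or the right without a mismatch.
    Quadratic-time sketch; real tools use suffix trees or suffix arrays."""
    mems = []
    for i in range(len(s)):
        for j in range(len(t)):
            # Start only at left-maximal positions (extending left would mismatch).
            if s[i] != t[j] or (i > 0 and j > 0 and s[i - 1] == t[j - 1]):
                continue
            k = 0
            while i + k < len(s) and j + k < len(t) and s[i + k] == t[j + k]:
                k += 1  # extend to the right as far as the strings agree
            if k >= min_length:
                mems.append((i, j, s[i:i + k]))
    return mems

print(maximal_exact_matches("GATTACA", "ATTAC", min_length=3))  # [(1, 0, 'ATTAC')]
</syntaxhighlight>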
== Examples of search algorithms ==
=== Naïve string search ===
A simple and inefficient way to see where one string occurs inside another is to check at each index, one by one. First, we see if there is a copy of the needle starting at the first character of the haystack; if not, we look to see if there's a copy of the needle starting at the second character of the haystack, and so forth. In the normal case, we only have to look at one or two characters for each wrong position to see that it is a wrong position, so in the average case, this takes [[Big O notation|O]](''n'' + ''m'') steps, where ''n'' is the length of the haystack and ''m'' is the length of the needle; but in the worst case, searching for a string like "aaaab" in a string like "aaaaaaaaab", it takes [[Big O notation|O]](''nm'') steps.
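A minimal Python sketch of this brute-force strategy is shown below; the function name and the example strings (including the worst-case pair mentioned above) are illustrative only.

<syntaxhighlight lang="python">
def naive_search(needle, haystack):
    """Try every alignment of the needle against the haystack; the inner scan
    usually stops after a character or two, but the worst case is O(nm)."""
    n, m = len(haystack), len(needle)
    matches = []
    for i in range(n - m + 1):
        j = 0
        while j < m and haystack[i + j] == needle[j]:
            j += 1
        if j == m:  # every character matched at this offset
            matches.append(i)
    return matches

print(naive_search("aaaab", "aaaaaaaaab"))  # [5]   (the worst-case example above)
print(naive_search("abc", "xxabcxxabc"))    # [2, 7]
</syntaxhighlight>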
=== Finite-state-automaton-based search ===
[[Image:DFA search mommy.svg|200px|right]]
In this approach, backtracking is avoided by constructing a [[deterministic finite automaton]] (DFA) that recognizes a stored search string. These are expensive to construct—they are usually created using the [[powerset construction]]—but are very quick to use. For example, the [[deterministic finite automaton|DFA]] shown to the right recognizes the word "MOMMY". This approach is frequently generalized in practice to search for arbitrary [[regular expression]]s.
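The sketch below illustrates automaton-based matching in Python. For a single stored pattern the DFA can be built directly, in the style of the Knuth–Morris–Pratt construction, rather than via the powerset construction; the function names, the choice of alphabet and the reporting of all match positions are illustrative assumptions.

<syntaxhighlight lang="python">
def build_dfa(pattern, alphabet):
    """Transition table of a DFA whose accepting state len(pattern) is reached
    exactly when the text read so far ends with the pattern (KMP-style build)."""
    m = len(pattern)
    dfa = [{c: 0 for c in alphabet} for _ in range(m + 1)]
    dfa[0][pattern[0]] = 1
    x = 0                             # state reached after reading pattern[1:j]
    for j in range(1, m):
        for c in alphabet:
            dfa[j][c] = dfa[x][c]     # mismatch: behave like the fallback state
        dfa[j][pattern[j]] = j + 1    # match: advance
        x = dfa[x][pattern[j]]
    for c in alphabet:                # after a full match, fall back likewise
        dfa[m][c] = dfa[x][c]
    return dfa

def dfa_search(pattern, text):
    """Scan the text once, reporting every position where a match starts."""
    dfa = build_dfa(pattern, set(pattern) | set(text))
    state, matches = 0, []
    for i, c in enumerate(text):
        state = dfa[state][c]
        if state == len(pattern):     # accepting state: a match ends at i
            matches.append(i - len(pattern) + 1)
    return matches

print(dfa_search("MOMMY", "MOMMOMMYMOMMY"))  # [3, 8]
</syntaxhighlight>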
===Stubs===
[[Knuth–Morris–Pratt algorithm|Knuth–Morris–Pratt]] computes a [[deterministic finite automaton|DFA]] that recognizes inputs with the string to search for as a suffix; [[Boyer–Moore string-search algorithm|Boyer–Moore]] starts searching from the end of the needle, so it can usually jump ahead a whole needle-length at each step. Baeza–Yates keeps track of whether the previous ''j'' characters were a prefix of the search string, and is therefore adaptable to [[fuzzy string searching]]. The [[bitap algorithm]] is an application of Baeza–Yates' approach.
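As an illustration of the bit-parallel idea behind the bitap algorithm, the following Python sketch performs exact matching with the shift-and technique; the extensions of Baeza–Yates' approach to character classes and errors are omitted, and the function name is illustrative.

<syntaxhighlight lang="python">
def bitap_search(pattern, text):
    """Exact matching with the shift-and (bitap) technique: bit j of `state`
    is set when pattern[0:j+1] is a suffix of the text read so far."""
    m = len(pattern)
    mask = {}                          # mask[c] has bit j set where pattern[j] == c
    for j, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << j)
    state, matches = 0, []
    for i, c in enumerate(text):
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & (1 << (m - 1)):     # highest bit set: a full match ends at i
            matches.append(i - m + 1)
    return matches

print(bitap_search("abab", "ababab"))  # [0, 2]
</syntaxhighlight>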
=== Index methods ===
Faster search algorithms preprocess the text. After building a [[substring index]], for example a [[suffix tree]] or [[suffix array]], the occurrences of a pattern can be found quickly. As an example, a suffix tree can be built in <math>\Theta(n)</math> time, and all <math>z</math> occurrences of a pattern can be found in <math>O(m)</math> time under the assumption that the alphabet has a constant size and all inner nodes in the suffix tree know what leaves are underneath them. The latter can be accomplished by running a [[depth-first search|DFS]] algorithm from the root of the suffix tree.
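The sketch below illustrates the index idea with a suffix array rather than a suffix tree: the array is built naively for brevity (linear-time constructions exist), and each query is then answered with two binary searches, i.e. O(''m'' log ''n'') character comparisons; the function names and example are illustrative.

<syntaxhighlight lang="python">
def build_suffix_array(text):
    # Naive O(n^2 log n) construction, for illustration only;
    # linear-time constructions (e.g. SA-IS) exist.
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, suffix_array, pattern):
    """Return the starting positions of the pattern, using two binary searches
    over the suffix array."""
    m = len(pattern)

    def prefix(k):
        # The first m characters of the k-th smallest suffix.
        return text[suffix_array[k]:suffix_array[k] + m]

    lo, hi = 0, len(suffix_array)
    while lo < hi:                     # lower bound: first m-prefix >= pattern
        mid = (lo + hi) // 2
        if prefix(mid) < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(suffix_array)
    while lo < hi:                     # upper bound: first m-prefix > pattern
        mid = (lo + hi) // 2
        if prefix(mid) <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(suffix_array[start:lo])

text = "banana"
sa = build_suffix_array(text)
print(find_occurrences(text, sa, "ana"))  # [1, 3]
</syntaxhighlight>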
=== Other variants ===
| Θ(n+m) on average,<br/> O(mn)
| none
|-
! Automaton-based matching
| Θ(km)
| Θ(n)
| Θ(km)
|-
! [[Rabin–Karp algorithm|Rabin–Karp]]
! [[Boyer–Moore string-search algorithm|Boyer–Moore]]
| Θ(m + k)
| Ω(n/m) at best,<br/> O(mn) at worst
| Θ(k)
|-
! [[Two-way string-matching algorithm|Two-way algorithm]]<ref>{{cite journal |last1=Crochemore |first1=Maxime |last2=Perrin |first2=Dominique |title=Two-way string-matching |journal=Journal of the ACM |date=1 July 1991 |volume=38 |issue=3 |pages=650–674 |doi=10.1145/116825.116845 |s2cid=15055316 |url=http://monge.univ-mlv.fr/~mac/Articles-PDF/CP-1991-jacm.pdf |access-date=5 April 2019 |archive-date=24 November 2021 |archive-url=https://web.archive.org/web/20211124025145/http://monge.univ-mlv.fr/~mac/Articles-PDF/CP-1991-jacm.pdf |url-status=live }}</ref>{{ref|libc}}
| Θ(m)
| O(n)
| O(log(m))
|-
! Backward Non-Deterministic [[Suffix automaton|DAWG]] Matching<ref>{{cite book |last1=Navarro |first1=Gonzalo |last2=Raffinot |first2=Mathieu |year=1998 |chapter=A bit-parallel approach to suffix automata: Fast extended string matching |title=Combinatorial Pattern Matching |url=https://users.dcc.uchile.cl/~gnavarro/ps/cpm98.pdf |publisher=Springer Berlin Heidelberg |series=Lecture Notes in Computer Science |isbn=978-3-540-64739-3 |access-date=2019-11-22 |archive-date=2019-01-05 |archive-url=https://web.archive.org/web/20190105101910/https://users.dcc.uchile.cl/~gnavarro/ps/cpm98.pdf |url-status=live }}</ref>{{ref|fuzzy+regexp}}
| O(m)
| Ω(n/m) at best,<br/> O(mn) at worst
The '''[[Boyer–Moore string-search algorithm]]''' has been the standard benchmark for the practical string-search literature.<ref name=":0">{{cite journal |last1=Hume |last2=Sunday |year=1991 |title=Fast String Searching |journal=Software: Practice and Experience |volume=21 |issue=11 |pages=1221–1248 |doi=10.1002/spe.4380211105 |s2cid=5902579 }}</ref>
==== Algorithms using a finite set of patterns ====
In the following compilation, ''M'' is the length of the longest pattern, ''m'' their total length, ''n'' the length of the searchable text, ''o'' the number of occurrences.
| Θ(m)
|-
! [[Commentz-Walter algorithm|Commentz-Walter]]
| [[Boyer–Moore string-search algorithm|Boyer-Moore]]
| Θ(m)
Other classification approaches are possible. One of the most common uses preprocessing as the main criterion.
{| class="wikitable"
|+Classes of string searching algorithms<ref>Melichar, Borivoj, Jan Holub, and J. Polcar. Text Searching Algorithms. Volume I: Forward String Matching. Vol. 1. 2 vols., 2005. http://stringology.org/athens/TextSearchingAlgorithms/ {{Webarchive|url=https://web.archive.org/web/20160304074815/http://stringology.org/athens/TextSearchingAlgorithms/ |date=2016-03-04 }}.</ref>
!
! Patterns preprocessed
| Constructed search engines
| Signature methods
|}
* Match the best factor first (BNDM, BOM, Set-BOM)
* Other strategy (Naïve, Rabin–Karp, Vectorized)
===Real-time string matching===
In real-time string matching, the matcher is required to output, after reading each character of the text, whether that character is the last character of a match. The response has to be given within constant time.
The requirements regarding preprocessing vary: O(''m'') preprocessing may be allowed after the pattern is read (but before the reading of the text), or a stricter requirement may be posed, according to which the matcher must also pause for at most a constant time after reading any character of the pattern (including the last).
For the more lenient version, if one does not mind that the preprocessing time and memory requirements depend on the size of the alphabet, a real-time solution is provided by automaton matching.
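A sketch of such an automaton matcher for the lenient model is given below: the preprocessing cost and table size depend on the pattern length and the alphabet size, after which each text character triggers a single table lookup, i.e. a constant-time response. The class and method names are illustrative, and the table is built KMP-style as in the automaton sketch above.

<syntaxhighlight lang="python">
class RealTimeMatcher:
    """Automaton matching in the lenient real-time model: preprocessing builds
    a full transition table over a fixed alphabet; afterwards each text
    character costs one table lookup (constant time)."""

    def __init__(self, pattern, alphabet):
        m = len(pattern)
        # State j: the longest prefix of the pattern that is a suffix of the
        # text read so far has length j (KMP-style DFA construction).
        dfa = [{c: 0 for c in alphabet} for _ in range(m + 1)]
        dfa[0][pattern[0]] = 1
        x = 0                              # fallback state
        for j in range(1, m):
            for c in alphabet:
                dfa[j][c] = dfa[x][c]      # mismatch: behave like the fallback state
            dfa[j][pattern[j]] = j + 1     # match: advance
            x = dfa[x][pattern[j]]
        for c in alphabet:                 # after a full match, fall back likewise
            dfa[m][c] = dfa[x][c]
        self.dfa, self.state, self.accept = dfa, 0, m

    def feed(self, c):
        """Consume one text character; return True iff it ends a match.
        Characters outside the fixed alphabet are not handled in this sketch."""
        self.state = self.dfa[self.state][c]
        return self.state == self.accept

matcher = RealTimeMatcher("aba", alphabet="ab")
print([matcher.feed(c) for c in "ababa"])  # [False, False, True, False, True]
</syntaxhighlight>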
[[Zvi Galil]] developed a method to turn certain algorithms into real-time algorithms, and applied it to produce a variant of the KMP
matcher that runs in real time under the strict requirement.<ref>{{Cite journal
| last = Galil
| first = Zvi
| title = String matching in real time
| journal = Journal of the ACM
| volume = 28
| issue = 1
| pages = 134–149
| year = 1981
| doi = 10.1145/322234.322244
| pmid =
| url =
}}</ref>
==String searching with don't cares==
In this version of the string searching problem, there is a special symbol, ø (read: don't care), which can match any other symbol (including another ø).
Don't care symbols can appear either in the pattern or in the text. In 2002, [[Richard Cole]] and [[Ramesh Hariharan]] gave an algorithm for this problem that runs in <math>O(n\log m)</math> time, improving on a 1973 solution by [[Michael J. Fischer|Fischer]] and [[Michael S. Paterson|Paterson]] with complexity <math>O(n\log m\log k)</math>, where ''k'' is the size of the alphabet.<ref>{{Cite conference
| last1 = Cole
| first1 = Richard
| last2 = Hariharan
| first2 = Ramesh
| title = Verifying candidate matches in sparse and wildcard matching
| book-title = Proceedings of the thirty-fourth annual ACM symposium on Theory of computing
| editor =
| publisher =
| ___location =
| pages = 592–601
| year = 2002
| url =
| doi =
}}</ref> Another algorithm, claimed to be simpler, has been proposed by [[Peter Clifford (mathematician)|Clifford]] and [[Raphaël Clifford|Clifford]].<ref>{{cite journal |last1=Clifford |first1=Peter |last2=Clifford |first2=Raphaël |title=Simple deterministic wildcard matching |journal=Information Processing Letters |date=January 2007 |volume=101 |issue=2 |pages=53–54 |doi=10.1016/j.ipl.2006.08.002}}</ref>
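The following Python sketch illustrates the convolution-based idea behind these algorithms, in the spirit of Clifford and Clifford's formulation: symbols are encoded as positive integers and don't cares as zero, so that position ''i'' is a match exactly when <math>\textstyle\sum_j p_j t_{i+j}(p_j - t_{i+j})^2 = 0</math>, a sum that splits into three correlations. For clarity the sketch uses <code>numpy.convolve</code>; replacing it with an FFT-based convolution (and processing the text in overlapping blocks of length 2''m'') gives the <math>O(n\log m)</math> bound. The function name and wildcard symbol are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def wildcard_match(pattern, text, wildcard="ø"):
    """Report all alignments where every pattern position either equals the
    text symbol or at least one of the two symbols is the don't-care."""

    def encode(s):
        # Don't-care -> 0, every other symbol -> a positive integer.
        return np.array([0.0 if c == wildcard else float(ord(c) + 1) for c in s])

    p, t = encode(pattern), encode(text)
    m, n = len(p), len(t)

    def corr(a, b):
        # corr(a, b)[i] = sum_j a[j] * b[i + j]; an FFT-based convolution
        # would make this sub-quadratic.
        return np.convolve(b, a[::-1])[m - 1:n]

    # sum_j p_j * t_{i+j} * (p_j - t_{i+j})^2 vanishes exactly at matches.
    score = corr(p ** 3, t) - 2 * corr(p ** 2, t ** 2) + corr(p, t ** 3)
    return [i for i in range(n - m + 1) if abs(score[i]) < 0.5]

print(wildcard_match("aøc", "abcaxcazc"))  # [0, 3, 6]
</syntaxhighlight>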
==See also==
*[[Compressed pattern matching]]
*[[Matching wildcards]]
*[[Approximate string matching]]
*[[Full-text search]]
*[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2647288/ Kalign2: high-performance multiple alignment of protein and nucleotide sequences allowing external features]
*[https://www.codeproject.com/Articles/5282980/Fastest-Fulltext-Vector-Scalar-Exact-Searcher NyoTengu – high-performance pattern matching algorithm in C] – Implementations of Vector and Scalar String-Matching-Algorithms in C
*[https://arxiv.org/html/2502.20338v3 Nathaniel K. Brown, et al.: "KeBaB: k-mer based breaking for finding long MEMs", arXiv:2502.20338v3 (09 Jun 2025).]
{{strings}}