{{Short description|Parsing algorithm for context-free grammars}}
{{Redirect|CYK||Cyk (disambiguation)}}
{{Infobox algorithm
|name=Cocke–Younger–Kasami algorithm (CYK)
|class=[[Parsing]] with [[context-free grammar]]s
|data=[[String (computer science)|String]]
|time=<math>\mathcal{O}\left( n^3 \cdot \left| G \right| \right)</math>, where:
* <math>n</math> is length of the string
* <math>|G|</math> is the size of the CNF grammar
}}
 
In [[computer science]], the '''Cocke–Younger–Kasami algorithm''' (alternatively called '''CYK''', or '''CKY''') is a [[parsing]] [[algorithm]] for [[context-free grammar]]s published by Itiroo Sakai in 1961.<ref>{{cite book |last1=Grune |first1=Dick |title=Parsing techniques : a practical guide |date=2008 |publisher=Springer |___location=New York |page=579 |isbn=978-0-387-20248-8 |edition=2nd}}</ref><ref>Itiroo Sakai, “Syntax in universal translation”. In Proceedings 1961 International Conference on Machine Translation of Languages and Applied Language Analysis, Her Majesty’s Stationery Office, London, p. 593-608, 1962.</ref> The algorithm is named after some of its rediscoverers: [[John Cocke (computer scientist)|John Cocke]], Daniel Younger, [[Tadao Kasami]], and [[Jacob T. Schwartz]]. It employs [[bottom-up parsing]] and [[dynamic programming]].
 
The standard version of CYK operates only on context-free grammars given in [[Chomsky normal form]] (CNF). However any context-free grammar may be algorithmically transformed into a CNF grammar expressing the same language {{harv|Sipser|1997}}.
In the [[theory of computation]], the importance of the CYK algorithm stems from the fact that it constructively proves that it is [[decision problem|decidable]] whether a given [[string (computer science)|string]] belongs to the [[formal language]] described by a given [[context-free grammar]], and the fact that it does so quite efficiently.
 
Using [[Big O notation|big ''O'' notation]], the [[Analysis of algorithms|worst case running time]] of CYK is <math>\mathcal{O}\left( n^3 \cdot \left| G \right| \right)</math>, where <math>n</math> is the length of the parsed string and <math>\left| G \right|</math> is the size of the CNF grammar <math>G</math> {{harv|Hopcroft|Ullman|1979|p=140}}. This makes it one of the most efficient {{Citation needed|reason=cubic time does not seem efficient at all; other algorithms claim linear execution time|date=August 2023}} parsing algorithms in terms of worst-case [[asymptotic complexity]], although other algorithms exist with better average running time in many practical scenarios.
 
==Standard form==
 
The [[dynamic programming]] algorithm requires the context-free grammar to be rendered into [[Chomsky normal form]] (CNF), because it tests for possibilities to split the current sequence into two smaller sequences. Any context-free grammar that does not generate the empty string can be represented in CNF using only [[Formal grammar#The syntax of grammars|production rules]] of the forms <math>A\rightarrow \alpha</math> and <math>A\rightarrow B C</math>; to allow for the empty string, one can explicitly allow <math>S\to \varepsilon</math>, where <math>S</math> is the start symbol.<ref>{{Cite book |last=Sipser |first=Michael |title=Introduction to the theory of computation |date=2006 |publisher=Thomson Course Technology |isbn=0-534-95097-3 |edition=2nd |___location=Boston |at=Definition 2.8 |oclc=58544333}}</ref>
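
For illustration, a CNF grammar can be encoded directly as its two kinds of rules. The following Python snippet is one possible, ad hoc representation (the names <code>terminal_rules</code> and <code>binary_rules</code> are illustrative, not standard terminology), similar to the encoding used in the sketches later in this article:

<syntaxhighlight lang="python">
# One possible encoding of a CNF grammar (illustrative, not canonical):
#   terminal rules  A -> a    as (A, 'a') pairs
#   binary rules    A -> B C  as (A, B, C) triples
cnf_grammar = {
    "start": "S",
    "terminal_rules": [("A", "a"), ("B", "b")],          # A -> a, B -> b
    "binary_rules": [("S", "A", "B"), ("S", "A", "S")],  # S -> A B | A S
}
</syntaxhighlight>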
 
==Algorithm==
 
===As pseudocode===
The algorithm in [[pseudocode]] is as follows:
 
 '''let''' the input be a string ''I'' consisting of ''n'' characters: ''a''<sub>1</sub> ... ''a''<sub>''n''</sub>.
 '''let''' the grammar contain ''r'' nonterminal symbols ''R''<sub>1</sub> ... ''R''<sub>''r''</sub>, with start symbol ''R''<sub>1</sub>.
 '''let''' ''P''[''n'',''n'',''r''] be an array of booleans. Initialize all elements of ''P'' to false.
 '''let''' ''back''[''n'',''n'',''r''] be an array of lists of backpointing triples. Initialize all elements of ''back'' to the empty list.
 '''for each''' ''s'' = 1 to ''n''
     '''for each''' unit production ''R''<sub>''v''</sub> &rarr; ''a''<sub>''s''</sub>
         '''set''' ''P''[''1'',''s'',''v''] = true
 '''for each''' ''l'' = 2 to ''n'' ''-- Length of span''
     '''for each''' ''s'' = 1 to ''n''-''l''+1 ''-- Start of span''
         '''for each''' ''p'' = 1 to ''l''-1 ''-- Partition of span''
             '''for each''' production ''R''<sub>''a''</sub> &rarr; ''R''<sub>''b''</sub> ''R''<sub>''c''</sub>
                 '''if''' ''P''[''p'',''s'',''b''] and ''P''[''l''-''p'',''s''+''p'',''c''] '''then'''
                     '''set''' ''P''[''l'',''s'',''a''] = true,
                     append <''p'',''b'',''c''> to ''back''[''l'',''s'',''a'']
 '''if''' ''P''[''n'',''1'',''1''] is true '''then'''
     ''I'' is member of language
     '''return''' ''back'' -- ''by retracing the steps through back, one can easily construct all possible parse trees of the string.''
 '''else'''
     '''return''' "not a member of language"
 
<div class="toccolours mw-collapsible mw-collapsed">
 
==== Probabilistic CYK (for finding the most probable parse) ====
Allows recovering the most probable parse given the probabilities of all productions.
<div class="mw-collapsible-content">
 
 '''let''' the input be a string ''I'' consisting of ''n'' characters: ''a''<sub>1</sub> ... ''a''<sub>''n''</sub>.
 '''let''' the grammar contain ''r'' nonterminal symbols ''R''<sub>1</sub> ... ''R''<sub>''r''</sub>, with start symbol ''R''<sub>1</sub>.
 '''let''' ''P''[''n'',''n'',''r''] be an array of real numbers. Initialize all elements of ''P'' to zero.
 '''let''' ''back''[''n'',''n'',''r''] be an array of backpointing triples.
 '''for each''' ''s'' = 1 to ''n''
     '''for each''' unit production ''R''<sub>''v''</sub> &rarr; ''a''<sub>''s''</sub>
         '''set''' ''P''[''1'',''s'',''v''] = Pr(''R''<sub>''v''</sub> &rarr; ''a''<sub>''s''</sub>)
 '''for each''' ''l'' = 2 to ''n'' ''-- Length of span''
     '''for each''' ''s'' = 1 to ''n''-''l''+1 ''-- Start of span''
         '''for each''' ''p'' = 1 to ''l''-1 ''-- Partition of span''
             '''for each''' production ''R''<sub>''a''</sub> &rarr; ''R''<sub>''b''</sub> ''R''<sub>''c''</sub>
                 prob_splitting = Pr(''R''<sub>''a''</sub> &rarr; ''R''<sub>''b''</sub> ''R''<sub>''c''</sub>) * ''P''[''p'',''s'',''b''] * ''P''[''l''-''p'',''s''+''p'',''c'']
                 '''if''' prob_splitting > ''P''[''l'',''s'',''a''] '''then'''
                     '''set''' ''P''[''l'',''s'',''a''] = prob_splitting
                     '''set''' ''back''[''l'',''s'',''a''] = <''p'',''b'',''c''>
 '''if''' ''P''[''n'',''1'',''1''] > 0 '''then'''
     find the parse tree by retracing through ''back''
     '''return''' the parse tree
 '''else'''
     '''return''' "not a member of language"
</div>
</div>
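
A direct translation of the probabilistic pseudocode into Python might look as follows. This is a sketch; the rule probabilities are assumed to be supplied as dictionaries keyed by the productions, and the names are illustrative:

<syntaxhighlight lang="python">
def probabilistic_cyk(words, terminal_probs, binary_probs, start):
    """Probabilistic CYK sketch.
    terminal_probs: dict mapping (A, word) -> Pr(A -> word);
    binary_probs:   dict mapping (A, B, C) -> Pr(A -> B C).
    Returns (best_probability, back), where back[(l, s, A)] = (p, B, C)
    records the best split for reconstructing the most probable parse."""
    n = len(words)
    P = [[dict() for _ in range(n)] for _ in range(n)]  # P[l-1][s][A] = prob
    back = {}

    for s, word in enumerate(words):
        for (a, w), pr in terminal_probs.items():
            if w == word and pr > P[0][s].get(a, 0.0):
                P[0][s][a] = pr

    for l in range(2, n + 1):               # length of span
        for s in range(n - l + 1):          # start of span
            for p in range(1, l):           # partition of span
                for (a, b, c), pr in binary_probs.items():
                    left = P[p - 1][s].get(b, 0.0)
                    right = P[l - p - 1][s + p].get(c, 0.0)
                    prob_splitting = pr * left * right
                    if prob_splitting > P[l - 1][s].get(a, 0.0):
                        P[l - 1][s][a] = prob_splitting
                        back[(l, s, a)] = (p, b, c)

    return P[n - 1][0].get(start, 0.0), back
</syntaxhighlight>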
 
===As prose===
In informal terms, this algorithm considers every possible substring of the input string and sets <math>P[l,s,v]</math> to be true if the substring of length <math>l</math> starting from <math>s</math> can be generated from the nonterminal <math>R_v</math>. Once it has considered substrings of length 1, it goes on to substrings of length 2, and so on. For substrings of length 2 and greater, it considers every possible partition of the substring into two parts, and checks to see if there is some production <math>A \to B\; C</math> such that <math>B</math> matches the first part and <math>C</math> matches the second part. If so, it records <math>A</math> as matching the whole substring. Once this process is completed, the input string is generated by the grammar if the substring containing the entire input string is matched by the start symbol.
 
==Example==
[[File:CYK algorithm animation showing every step of a sentence parsing.gif|thumb|upright=2|Sentence parsing using the CYK algorithm]]
This is an example grammar:
 
:<math chem>\begin{align}
\ce{S} & \ \ce{-> NP\ VP}\\
\ce{VP} & \ \ce{-> VP\ PP}\\
\ce{VP} & \ \ce{-> V\ NP}\\
\ce{VP} & \ \ce{-> eats}\\
\ce{PP} & \ \ce{-> P\ NP}\\
\ce{NP} & \ \ce{-> Det\ N}\\
\ce{NP} & \ \ce{-> she}\\
\ce{V} & \ \ce{-> eats}\\
\ce{P} & \ \ce{-> with}\\
\ce{N} & \ \ce{-> fish}\\
\ce{N} & \ \ce{-> fork}\\
\ce{Det} & \ \ce{-> a}
\end{align}</math>
 
Now the sentence ''she eats a fish with a fork'' is analyzed using the CYK algorithm. In the following table, in <math>P[i,j,k]</math>, {{mvar|i}} is the number of the row (starting at the bottom at 1), and {{mvar|j}} is the number of the column (starting at the left at 1).
 
{| class="wikitable" style="text-align:center"
|+CYK table
|-
Line 51 ⟶ 121:
| she || eats || a || fish || with || a || fork
|}
 
For readability, the CYK table for ''P'' is represented here as a 2-dimensional matrix ''M'' containing a set of non-terminal symbols, such that {{mvar|R<sub>k</sub>}} is in {{tmath|M[i,j]}} if, and only if, {{tmath|P[i,j,k]}}.
In the above example, since a start symbol ''S'' is in {{tmath|M[7,1]}}, the sentence can be generated by the grammar.
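
As a concrete check, the example grammar and sentence can be encoded as plain Python data and the chart filled with the same loops as in the pseudocode; the nonterminal sets computed this way correspond to the cells of the table above, and the start symbol ''S'' appears in the cell covering the whole sentence. This self-contained sketch is purely illustrative:

<syntaxhighlight lang="python">
# Illustrative encoding of the example grammar and sentence.
terminal_rules = [("NP", "she"), ("VP", "eats"), ("V", "eats"), ("P", "with"),
                  ("N", "fish"), ("N", "fork"), ("Det", "a")]
binary_rules = [("S", "NP", "VP"), ("VP", "VP", "PP"), ("VP", "V", "NP"),
                ("PP", "P", "NP"), ("NP", "Det", "N")]
words = "she eats a fish with a fork".split()

n = len(words)
# chart[l-1][s] = set of nonterminals deriving the substring of length l at s
chart = [[set() for _ in range(n)] for _ in range(n)]
for s, w in enumerate(words):
    chart[0][s] = {a for a, rhs in terminal_rules if rhs == w}
for l in range(2, n + 1):
    for s in range(n - l + 1):
        for p in range(1, l):
            for a, b, c in binary_rules:
                if b in chart[p - 1][s] and c in chart[l - p - 1][s + p]:
                    chart[l - 1][s].add(a)

print("S" in chart[n - 1][0])  # True: the sentence is in the language
print(chart[0])                # length-1 row: NP / V,VP / Det / N / P / Det / N
</syntaxhighlight>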
 
==Extensions==
 
===Generating a parse tree===
The above algorithm is a [[recognizer]] that will only determine if a sentence is in the language. It is simple to extend it into a [[parser]] that also constructs a [[parse tree]], by storing parse tree nodes as elements of the array, instead of the boolean 1. The node is linked to the array elements that were used to produce it, so as to build the tree structure. Only one such node in each array element is needed if only one parse tree is to be produced. However, if all parse trees of an ambiguous sentence are to be kept, it is necessary to store in the array element a list of all the ways the corresponding node can be obtained in the parsing process. This is sometimes done with a second table B[n,n,r] of so-called ''backpointers''.
The end result is then a shared-forest of possible parse trees, where common tree parts are factored between the various parses. This shared forest can conveniently be read as an [[ambiguous grammar]] generating only the sentence parsed, but with the same ambiguity as the original grammar, and the same parse trees up to a very simple renaming of non-terminals, as shown by {{harvtxt|Lang|1994}}.
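
The following Python sketch shows how such backpointers can be retraced into (possibly several) parse trees, assuming a ''back'' table indexed by (length, start, nonterminal) triples as produced by the recognizer sketch earlier in this article; the function name and tuple layout are illustrative:

<syntaxhighlight lang="python">
def build_trees(back, words, length, start, symbol):
    """Yield parse trees (as nested tuples) for one cell of the chart.
    back[(length, start, symbol)] is a list of (partition, B, C) triples;
    words is the input sentence; start is a 0-based position."""
    if length == 1:                       # terminal cell: A -> word
        yield (symbol, words[start])
        return
    for p, b, c in back.get((length, start, symbol), []):
        for left in build_trees(back, words, p, start, b):
            for right in build_trees(back, words, length - p, start + p, c):
                yield (symbol, left, right)
</syntaxhighlight>

For example, all parse trees of the whole input would be obtained (under these assumptions) as <code>list(build_trees(back, words, len(words), 0, start_symbol))</code>.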
 
===Parsing non-CNF context-free grammars===
 
As pointed out by {{harvtxt|Lange|Leiß|2009}}, the drawback of all known transformations into Chomsky normal form is that they can lead to an undesirable bloat in grammar size. The size of a grammar is the sum of the sizes of its production rules, where the size of a rule is one plus the length of its right-hand side. Using <math>g</math> to denote the size of the original grammar, the size blow-up in the worst case may range from <math>g^2</math> to <math>2^{2 g}</math>, depending on the transformation algorithm used. For the use in teaching, Lange and Leiß propose a slight generalization of the CYK algorithm, "without compromising efficiency of the algorithm, clarity of its presentation, or simplicity of proofs" {{harv|Lange|Leiß|2009}}.
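
The grammar-size measure used above can be spelled out concretely. In this illustrative Python sketch, rules are encoded as (left-hand side, list of right-hand-side symbols) pairs:

<syntaxhighlight lang="python">
# |G| as defined above: sum over rules of (1 + length of the right-hand side).
def grammar_size(rules):
    return sum(1 + len(rhs) for _lhs, rhs in rules)

# Example: S -> A B, A -> a, B -> b has size (1+2) + (1+1) + (1+1) = 7.
print(grammar_size([("S", ["A", "B"]), ("A", ["a"]), ("B", ["b"])]))  # 7
</syntaxhighlight>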
 
===Parsing weighted context-free grammars===
It is also possible to extend the CYK algorithm to parse strings using [[weighted context-free grammar|weighted]] and [[stochastic context-free grammar]]s. Weights (probabilities) are then stored in the table P instead of booleans, so P[i,j,A] will contain the minimum weight (maximum probability) that the substring from i to j can be derived from A. Further extensions of the algorithm allow all parses of a string to be enumerated from lowest to highest weight (highest to lowest probability).
 
==== Numerical stability ====
When the probabilistic CYK algorithm is applied to a long string, the splitting probability can become very small due to multiplying many probabilities together. This can be dealt with by summing log-probabilities instead of multiplying probabilities.
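
Concretely, the update step of the probabilistic pseudocode can be carried out in log space; the comparison of candidate splits is unchanged because the logarithm is monotone. A minimal sketch, assuming the same table layout as above:

<syntaxhighlight lang="python">
import math

# Log-space version of the probabilistic CYK update (sketch).
# Instead of  prob = Pr(A -> B C) * P[p,s,B] * P[l-p,s+p,C]
# compute     logprob = log Pr(A -> B C) + logP[p,s,B] + logP[l-p,s+p,C]
# and keep the maximum; empty cells are initialized to -infinity rather than 0.
NEG_INF = float("-inf")

def log_score(rule_prob, left_logprob, right_logprob):
    return math.log(rule_prob) + left_logprob + right_logprob
</syntaxhighlight>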
 
===Valiant's algorithm===
The [[Analysis of algorithms|worst case running time]] of CYK is <math>\Theta(n^3 \cdot |G|)</math>, where ''n'' is the length of the parsed string and ''|G|'' is the size of the CNF grammar ''G''. This makes it one of the most efficient algorithms for recognizing general context-free languages in practice. {{harvtxt|Valiant|1975}} gave an extension of the CYK algorithm. His algorithm computes the same parsing table as the CYK algorithm; yet he showed that [[Matrix multiplication algorithm#Sub-cubic algorithms|algorithms for efficient multiplication]] of [[Boolean matrix|matrices with 0-1-entries]] can be utilized for performing this computation.
 
Using the [[Coppersmith–Winograd algorithm]] for multiplying these matrices, this gives an asymptotic worst-case running time of <math>O(n^{2.38} \cdot |G|)</math>. However, the constant term hidden by the [[Big O Notation]] is so large that the Coppersmith–Winograd algorithm is only worthwhile for matrices that are too large to handle on present-day computers {{harv|Knuth|1997}}, and this approach requires subtraction and so is only suitable for recognition. The dependence on efficient matrix multiplication cannot be avoided altogether: {{harvtxt|Lee|2002}} has proved that any parser for context-free grammars working in time <math>O(n^{3-\varepsilon} \cdot |G|)</math> can be effectively converted into an algorithm computing the product of <math>(n \times n)</math>-matrices with 0-1-entries in time <math>O(n^{3 - \varepsilon/3})</math>, and this was extended by Abboud et al.<ref>{{cite arXiv|last1=Abboud|first1=Amir|last2=Backurs|first2=Arturs|last3=Williams|first3=Virginia Vassilevska|date=2015-11-05|title=If the Current Clique Algorithms are Optimal, so is Valiant's Parser|class=cs.CC|eprint=1504.01431}}</ref> to apply to a constant-size grammar.
 
==See also==
* [[Earley parser]]
* [[Packrat parser]]
* [[Inside–outside algorithm]]
 
==References==
{{reflist}}
 
== Sources ==
*{{cite conference |title= Syntax in universal translation |last= Sakai |first= Itiroo |date= 1962 |___location= London |publisher= Her Majesty’s Stationery Office |volume= II |pages= 593–608 |conference= 1961 International Conference on Machine Translation of Languages and Applied Language Analysis, Teddington, England}}
*{{cite tech report |last1=Cocke |first1=John |author-link1=John Cocke (computer scientist) |last2=Schwartz |first2=Jacob T. |date=April 1970 |title=Programming languages and their compilers: Preliminary notes |edition=2nd revised |publisher=[[Courant Institute of Mathematical Sciences|CIMS]], [[New York University|NYU]] |url=http://www.softwarepreservation.org/projects/FORTRAN/CockeSchwartz_ProgLangCompilers.pdf}}
* {{cite book | isbn=0-201-02988-X | first1=John E. | last1=Hopcroft | author1-link=John E. Hopcroft | first2=Jeffrey D. | last2=Ullman | author2-link=Jeffrey D. Ullman | title=Introduction to Automata Theory, Languages, and Computation | ___location=Reading/MA | publisher=Addison-Wesley | year=1979 | url=https://archive.org/details/introductiontoau00hopc }}
*{{cite tech report |last1=Kasami |first1=T. |author-link1=Tadao Kasami |year=1965 |title=An efficient recognition and syntax-analysis algorithm for context-free languages |number=65-758 |publisher=[[Air Force Cambridge Research Laboratories|AFCRL]]}}
*{{cite book |last1=Knuth |first1=Donald E. |author-link1=Donald Knuth |title=The Art of Computer Programming Volume 2: Seminumerical Algorithms |publisher=Addison-Wesley Professional |edition=3rd |date=November 14, 1997 |isbn=0-201-89684-2 |pages=501 }}
*{{cite journal |last1=Lang |first1=Bernard |title=Recognition can be harder than parsing |journal=[[Computational Intelligence (journal)|Comput. Intell.]] |year=1994 |volume=10 |issue=4 |pages=486–494 |citeseerx=10.1.1.50.6982 |doi=10.1111/j.1467-8640.1994.tb00011.x |s2cid=5873640 }}
*{{cite journal |last1=Lange |first1=Martin |last2=Leiß |first2=Hans |title=To CNF or not to CNF? An Efficient Yet Presentable Version of the CYK Algorithm |year=2009 |journal=Informatica Didactica |volume=8 |url=http://www.informatica-didactica.de/index.php?page=LangeLeiss2009 }}
*{{cite journal |last1=Lee |first1=Lillian |author-link=Lillian Lee (computer scientist)|title=Fast context-free grammar parsing requires fast Boolean matrix multiplication |journal=[[Journal of the ACM|J. ACM]] |volume=49 |issue=1 |pages=1–15 |year=2002 |doi=10.1145/505241.505242 |arxiv=cs/0112018 |s2cid=1243491 }}
*{{cite book |last1=Sipser |first1=Michael |author-link1=Michael Sipser |title=Introduction to the Theory of Computation |publisher=IPS |year=1997 |edition=1st |page=[https://archive.org/details/introductiontoth00sips/page/99 99] |isbn=0-534-94728-X |url=https://archive.org/details/introductiontoth00sips/page/99 }}
*{{cite journal |last1=Valiant |first1=Leslie G. |author-link1=Leslie Valiant |title=General context-free recognition in less than cubic time |journal=[[Journal of Computer and System Sciences|J. Comput. Syst. Sci.]] |volume=10 |issue=2 |year=1975 |pages=308–314 |doi=10.1016/s0022-0000(75)80046-8 |doi-access=free }}
*{{cite journal |last1=Younger |first1=Daniel H. |date=February 1967 |title=Recognition and parsing of context-free languages in time ''n''<sup>3</sup> |journal=[[Information and Computation|Inform. Control]] |volume=10 |issue=2 |pages=189–208 |doi=10.1016/s0019-9958(67)80007-x|doi-access=free }}
 
==External links==
* [https://raw.org/tool/cyk-algorithm/ Interactive Visualization of the CYK algorithm]
* [http://www.informatik.uni-leipzig.de/alg/lehre/ss08/AUTO-SPRACHEN/Java-Applets/CYK-Algorithmus.html Interactive applet from the University of Leipzig to demonstrate the CYK algorithm (site is in German)]
* [https://martinlaz.github.io/demos/cky.html CYK parsing demo in JavaScript]
* [https://www.swisseduc.ch/informatik/exorciser/ Exorciser is a Java application to generate exercises in the CYK algorithm as well as Finite State Machines, Markov algorithms etc]
 
{{Parsers}}
 
[[Category:Parsing algorithms]]
 
[[af:CYK-algoritme]]
[[cs:Algoritmus Cocke-Younger-Kasami]]
[[de:Cocke-Younger-Kasami-Algorithmus]]
[[es:Algoritmo CYK]]
[[fr:Algorithme de Cocke-Younger-Kasami]]
[[gl:Algoritmo CYK]]
[[ko:CYK 알고리즘]]
[[nl:CYK-algoritme]]
[[ja:CYK法]]
[[no:CYK-algoritmen]]
[[pl:Algorytm CYK]]
[[pt:Algoritmo CYK]]
[[ru:Алгоритм Коука — Янгера — Касами]]
[[sr:Кук-Јангер-Касами алгоритам]]
[[fi:CYK-algoritmi]]
[[vi:Thuật toán CYK]]
[[zh:CYK算法]]