For recursive grammars and some inputs, the depth of the parse tree can be proportional to the input size,<ref>for example, the LISP expression (x (x (x (x ....))))</ref> so both an LR parser and a packrat parser will appear to have the same worst-case asymptotic performance. However, in many domains, such as hand-written source code, the expression nesting depth has an effectively constant bound quite independent of the length of the program, because expressions nested beyond a certain depth tend to get [[refactor]]ed. When it is not necessary to keep the full parse tree, a more accurate analysis would take the depth of the parse tree into account as a parameter separate from the input size.<ref>This is similar to a situation which arises in [[graph algorithms]]: the [[Bellman–Ford algorithm]] and [[Floyd–Warshall algorithm]] appear to have the same running time (<math>O(|V|^3)</math>) if only the number of vertices is considered. However, a more precise analysis which accounts for the number of edges as a separate parameter assigns the [[Bellman–Ford algorithm]] a time of <math>O(|V| \cdot |E|)</math>, which is quadratic for sparse graphs with <math>|E| \in O(|V|)</math>.</ref>
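The link between nesting depth and parse-tree depth can be sketched with a toy recursive-descent parser for LISP-style expressions (this parser is an illustration, not part of any cited implementation): each level of parenthesization adds one level of recursion, so the parse-tree depth grows linearly with the nesting depth rather than with the raw input length.

```python
def tokenize(s):
    """Split a LISP-style string into '(' , ')' and atom tokens."""
    return s.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens, i=0, depth=0):
    """Parse one expression at tokens[i].

    Returns (tree, next_index, max_depth_reached); the recursion depth,
    and hence the parse-tree depth, tracks the nesting depth of the input.
    """
    if tokens[i] == "(":
        children, i, max_d = [], i + 1, depth
        while tokens[i] != ")":
            child, i, d = parse(tokens, i, depth + 1)
            children.append(child)
            max_d = max(max_d, d)
        return children, i + 1, max_d
    return tokens[i], i + 1, depth  # an atom

nested = "(x " * 5 + "x" + ")" * 5        # (x (x (x (x (x x)))))
tree, _, depth = parse(tokenize(nested))
print(depth)   # → 5: tree depth equals nesting depth, not input length
```

A flat input of the same length, e.g. <code>(x x x x x x)</code>, yields depth 1, which is why bounded nesting makes the depth parameter effectively constant in practice.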
=== Computational model ===
In order to attain linear overall complexity, the storage used for memoization must furthermore provide [[amortized analysis|amortized constant-time]] access to individual memoized items. In practice this is not a problem: a dynamically sized [[hash table]] attains it. A hash table, however, relies on [[Pointer (computer programming)|pointer]] arithmetic, so it presumes a [[random-access machine]]. Theoretical discussions of data structures and algorithms have an unspoken tendency to presume a more restricted model (possibly that of the [[lambda calculus]], possibly that of [[Scheme (programming language)|Scheme]]), in which a sparse table instead has to be built out of trees, and access to a data item is not constant time. Traditional parsing algorithms such as the [[LL parser]] are not affected by this, but it counts against the reputation of packrat parsers: they rely on operations of seemingly ill repute.
Viewed the other way around, this means that packrat parsers tap into computational power that is readily available in real-life systems but that older parsing algorithms do not exploit.
=== Indirect left recursion ===