{{Short description|Data structure implementable in purely functional languages}}
In [[computer science]], a '''purely functional data structure''' is a [[data structure]] that can be directly implemented in a [[purely functional language]]. The main difference between an arbitrary data structure and a purely functional one is that the latter is (strongly) [[immutable object|immutable]]. This restriction ensures that the data structure possesses the advantages of immutable objects: (full) [[persistent data structure|persistence]], quick copying of objects, and [[thread safety]]. Efficient purely functional data structures may require the use of [[lazy evaluation]] and [[memoization]].
==Definition==
[[Persistent data structure]]s have the property of keeping previous versions of themselves unmodified, so that older versions remain available after an update. In contrast, non-persistent structures such as [[Array (data structure)|arrays]] admit destructive updates: once a new value is written into a cell of the array, the previous value can no longer be retrieved.
Formally, a ''purely functional data structure'' is a data structure that can be implemented in a [[purely functional language]], such as [[Haskell (programming language)|Haskell]]. In practice, this means the data structure must be built only from persistent components, such as tuples, [[sum type]]s, [[product type]]s, and basic types such as integers, characters and strings. Such a data structure is necessarily persistent; the converse does not hold, however: a [[persistent array]], for example, is persistent but is implemented with a mutable array and thus is not purely functional.
In the book ''Purely Functional Data Structures'', Okasaki compares destructive updates to a master chef's knives.{{r|Okasaki-book|p=2}} A destructive update cannot be undone, and so it should not be used unless it is certain that the previous value will never be required again; in exchange, it can allow efficiency that is difficult to obtain by other means.
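For example, the following Haskell sketch (illustrative code, not taken from Okasaki's book) builds a binary search tree using only immutable constructors. "Inserting" an element returns a new tree that shares every untouched subtree with the old one, so all previous versions of the tree remain valid.

<syntaxhighlight lang="haskell">
-- A binary search tree built only from immutable constructors.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Inserting returns a new tree; the old tree is never modified.
insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l y r)
  | x < y     = Node (insert x l) y r   -- new spine on the left, r is shared
  | x > y     = Node l y (insert x r)   -- new spine on the right, l is shared
  | otherwise = t                       -- already present: reuse the old tree
</syntaxhighlight>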
==Ensuring that a data structure is purely functional==
A data structure is never inherently functional. For example, a stack can be implemented as a [[singly-linked list]]. This implementation is purely functional as long as the only operations on the stack return a new stack without altering the old one. However, if the language is not purely functional, the run-time system may be unable to guarantee immutability. This is illustrated by Okasaki,{{r|Okasaki-book|pp=9-11}} who shows that the concatenation of two singly-linked lists can still be performed in an imperative setting by destructively updating the last cell of the first list.
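The stack described above can be sketched in Haskell as follows (an illustrative example using the built-in singly-linked list type; the names are not taken from a particular library). Neither operation alters its argument, so older versions of the stack remain usable.

<syntaxhighlight lang="haskell">
-- A purely functional stack backed by Haskell's singly-linked lists.
type Stack a = [a]

push :: a -> Stack a -> Stack a
push x s = x : s               -- returns a new stack that shares s

top :: Stack a -> Maybe a
top []      = Nothing
top (x : _) = Just x

rest :: Stack a -> Stack a
rest []      = []
rest (_ : s) = s

-- ghci> let s1 = push 2 []
-- ghci> let s2 = push 1 s1
-- ghci> top s2    -- Just 1
-- ghci> top s1    -- Just 2: the older stack s1 is unchanged
</syntaxhighlight>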
In order to ensure that a data structure is used in a purely functional way in an impure functional language, [[modular programming|modules]] or [[class (computer programming)|classes]] can be used to restrict manipulation to authorized functions only.
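One way to do this is sketched below in Haskell (illustrative module and function names): the module exports the abstract stack type but not its data constructor, so client code cannot touch the underlying list except through the exported functions.

<syntaxhighlight lang="haskell">
-- The export list names the type Stack but not its constructor,
-- so the representation is hidden from client code.
module PureStack (Stack, empty, push, pop) where

newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)
</syntaxhighlight>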
==Using purely functional data structures==
One of the central challenges in adapting existing code to use purely functional data structures is that mutable data structures provide "hidden outputs" for the functions that use them. Rewriting these functions to use purely functional data structures requires adding these data structures as explicit outputs.
For instance, consider a function that accepts a mutable list, removes the first element from the list, and returns that element. In a purely functional setting, removing an element from the list produces a new and shorter list, but does not update the original one. In order to be useful, therefore, a purely functional version of this function is likely to have to return the new list along with the removed element. In the most general case, a program converted in this way must return the "state" or "store" of the program as an additional result from every function call. Such a program is said to be written in [[store-passing style]].
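The rewriting described above can be sketched in Haskell as follows (illustrative names, not a specific library): the removed element is returned together with the new, shorter list, and the caller must thread that list explicitly through subsequent calls, which is the essence of store-passing style.

<syntaxhighlight lang="haskell">
-- Purely functional counterpart of "remove the first element of a
-- mutable list and return it": the new list is an explicit output.
removeFirst :: [a] -> Maybe (a, [a])
removeFirst []       = Nothing
removeFirst (x : xs) = Just (x, xs)

-- Threading the "store" by hand: remove two elements in a row.
removeTwo :: [a] -> Maybe ((a, a), [a])
removeTwo xs0 = do
  (a, xs1) <- removeFirst xs0
  (b, xs2) <- removeFirst xs1
  pure ((a, b), xs2)
</syntaxhighlight>

In Haskell, this explicit threading of state is commonly packaged with abstractions such as a state monad, but the underlying idea is the same: the updated data structure is returned as part of the result.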
==Examples==
Here is a list of abstract data structures with purely functional implementations:
* [[Priority queue]], implemented as a [[Brodal queue]]
* Random access list, implemented as a skew-binary random access list
* [[Hash consing]]
* [[Zipper (data structure)]]
==Design and implementation==
===Laziness and memoization===
Lazy evaluation is particularly interesting in a purely functional language{{r|Okasaki-book|p=31}} because the order of evaluation never changes the result of a function. Therefore, lazy evaluation naturally becomes an important part of the construction of purely functional data structures. It allows a computation to be postponed until its result is actually required, so parts of a data structure that are never inspected never need to be computed.
One of the key tools in building efficient purely functional data structures is memoization.{{r|Okasaki-book|p=31}} When a postponed computation is eventually performed, its result is saved, so that it does not have to be performed a second time if it is requested again.
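The following Haskell sketch (illustrative, relying on Haskell's built-in lazy evaluation and sharing of suspended computations) shows a suspension that is shared by two structures and evaluated at most once, even though both structures eventually need its value.

<syntaxhighlight lang="haskell">
import Debug.Trace (trace)

-- A suspended (lazy) computation; 'trace' prints a message at the
-- moment the suspension is actually evaluated.
expensive :: Integer
expensive = trace "evaluated" (sum [1 .. 1000000])

-- Two different structures share the same suspension.
front, back :: [Integer]
front = [expensive, 1]
back  = [2, expensive]

main :: IO ()
main = print (head front + last back)
-- "evaluated" is printed only once: after its first evaluation the
-- shared suspension is memoized and simply reused.
</syntaxhighlight>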
===Amortized analysis and scheduling===
Some data structures, even ones that are not purely functional, such as [[dynamic array]]s, admit operations that are efficient most of the time (e.g., constant time) and only rarely inefficient (e.g., linear time). [[Amortized analysis|Amortization]] can then be used to show that the average running time of the operations is efficient: the inefficient operations are rare enough that they do not change the asymptotic cost of a whole sequence of operations.
In general, having occasionally inefficient operations is not acceptable for persistent data structures, because the same expensive operation can be invoked repeatedly on the same old version of the structure, invalidating the amortized bounds. It is not acceptable either for real-time or for imperative systems, where the user may require the time taken by each operation to be predictable. Furthermore, this unpredictability complicates the use of [[Parallel computing|parallelism]].{{r|Okasaki-book|p=83}}
In order to avoid those problems, some data structures allow for the inefficient operation to be postponed—this is called [[Scheduling (computing)|scheduling]].{{r|Okasaki-book|p=84}}
====Example: queue====
[[Amortized queue]]s{{r|Okasaki-book|p=65}}{{r|Okasaki-book|p=73}} are composed of two singly-linked lists: the front and the reversed rear. Elements are added to the rear list and are removed from the front list. Whenever the front list is empty, the rear list is reversed and becomes the new front, while the rear list becomes empty. The amortized time complexity of each operation is constant, since each cell of the list is added, reversed and removed at most once. In order to avoid an inefficient operation in which the whole rear list is reversed at once, [[real-time queue]]s add the restriction that the rear list is never longer than the front list. To ensure that this invariant is maintained, when the rear list would become longer than the front list, the front list is replaced by the front list appended to the reversed rear list. This expensive operation is evaluated lazily and its evaluation is scheduled: a few steps of it are forced during each subsequent operation, so that each cell is computed before it is actually needed and every operation runs in constant worst-case time.
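The amortized two-list queue described above can be sketched in Haskell as follows (illustrative code, sometimes called a "batched queue"; the names are not taken from a particular library).

<syntaxhighlight lang="haskell">
-- An amortized queue: a front list and a rear list kept in reverse order.
-- The rear list is reversed into a new front list only when the front runs out.
data Queue a = Queue [a] [a]          -- front, reversed rear

emptyQueue :: Queue a
emptyQueue = Queue [] []

-- Enqueue at the rear: O(1).
snoc :: Queue a -> a -> Queue a
snoc (Queue f r) x = check (Queue f (x : r))

-- Dequeue from the front: O(1) amortized (each cell is reversed at most once).
uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue []      _) = Nothing
uncons (Queue (x : f) r) = Just (x, check (Queue f r))

-- Restore the invariant that the front list is empty only when the whole
-- queue is empty, by reversing the rear list when the front is exhausted.
check :: Queue a -> Queue a
check (Queue [] r) = Queue (reverse r) []
check q            = q
</syntaxhighlight>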
==See also==
* [[Persistent data structure]]
==References==
{{reflist|refs=
<ref name="Okasaki-book">{{cite book |last=Okasaki |first=Chris |title=Purely Functional Data Structures |publisher=Cambridge University Press |year=1998}}</ref>
}}
==External links==
*[http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-854j-advanced-algorithms-fall-2005/lecture-notes/persistent.pdf Persistent Data Structures] from the [[MIT OpenCourseWare]] course [http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-854j-advanced-algorithms-fall-2005 Advanced Algorithms]
[[Category:Functional data structures]]