Dynamic array

{{Short description|List data structure to which elements can be added/removed}}
[[File:Dynamic array.svg|thumb|Several values are inserted at the end of a dynamic array using geometric expansion. Grey cells indicate space reserved for expansion. Most insertions are fast ([[constant time]]), while some are slow due to the need for [[Memory management|reallocation]] ({{math|Θ(''n'')}} time, labelled with turtles). The ''logical size'' and ''capacity'' of the final array are shown.]]
 
In [[computer science]], a '''dynamic array''', '''growable array''', '''resizable array''', '''dynamic table''', '''mutable array''', or '''array list''' is a [[random access]], variable-size [[list data structure]] that allows elements to be added or removed. It is supplied with [[standard libraries]] in many modern mainstream [[programming language]]s. Dynamic arrays overcome a limit of static [[array data type|arrays]], which have a fixed capacity that needs to be specified at [[Memory management|allocation]].
 
A dynamic array is not the same thing as a [[dynamic memory allocation|dynamically allocated]] array or [[variable-length array]], either of which is an array whose size is fixed when the array is allocated, although a dynamic array may use such a fixed-size array as a back end.<ref name="java_util_ArrayList">See, for example, the [http://hg.openjdk.java.net/jdk6/jdk6/jdk/file/e0e25ac28560/src/share/classes/java/util/ArrayList.java source code of java.util.ArrayList class from OpenJDK 6].</ref>
 
== Bounded-size dynamic arrays and capacity ==
 
A simple dynamic array can be constructed by allocating an array of fixed-size, typically larger than the number of elements immediately required. The elements of the dynamic array are stored contiguously at the start of the underlying array, and the remaining positions towards the end of the underlying array are reserved, or unused. Elements can be added at the end of a dynamic array in [[Time complexity#Constant time|constant time]] by using the reserved space, until this space is completely consumed. When all space is consumed, and an additional element is to be added, then the underlying fixed-size array needs to be increased in size. Typically resizing is expensive because it involves allocating a new underlying array and copying each element from the original array. Elements can be removed from the end of a dynamic array in constant time, as no resizing is required. The number of elements used by the dynamic array contents is its ''logical size'' or ''size'', while the size of the underlying array is called the dynamic array's ''capacity'' or ''physical size'', which is the maximum possible size without relocating data.<ref>{{citation|author=Lambert, Kenneth Alfred|title=Physical size and logical size|work=Fundamentals of Python: From First Programs Through Data Structures|page=510|url=https://books.google.com/books?id=VtfM3YGW5jYC&q=%22logical+size%22+%22dynamic+array%22&pg=PA518|publisher=Cengage Learning|year=2009|isbn=978-1423902188}}</ref>
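For illustration, a bounded-size dynamic array over <code>int</code> values might be sketched in C as follows; the struct name and field names are illustrative only and are not taken from any particular library. Only the logical size changes on insertion and removal, so both operations run in constant time until the capacity is exhausted.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stddef.h>

/* A bounded-size dynamic array of ints backed by a fixed-capacity buffer.
   "size" is the logical size; "capacity" is the physical size of the
   underlying allocation (the maximum size without relocating data). */
typedef struct {
    int    *data;      /* underlying fixed-size array            */
    size_t  size;      /* number of elements currently in use    */
    size_t  capacity;  /* number of elements the buffer can hold */
} bounded_dynarray;

/* Append in O(1) by writing into reserved space; fails when full. */
bool bounded_push(bounded_dynarray *a, int value) {
    if (a->size == a->capacity)
        return false;               /* no reserved space left */
    a->data[a->size++] = value;
    return true;
}

/* Remove from the end in O(1): only the logical size changes. */
bool bounded_pop(bounded_dynarray *a, int *out) {
    if (a->size == 0)
        return false;
    *out = a->data[--a->size];
    return true;
}
</syntaxhighlight>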
 
A fixed-size array will suffice in applications where the maximum logical size is fixed (e.g. by specification), or can be calculated before the array is allocated. A dynamic array might be preferred if:
* the maximum logical size is unknown, or difficult to calculate, before the array is allocated
* it is considered that a maximum logical size given by a specification is likely to change
* the amortized cost of resizing a dynamic array does not significantly affect performance or responsiveness
 
== Geometric expansion and amortized cost ==
 
To avoid incurring the cost of resizing many times, dynamic arrays resize by a large amount, such as doubling in size, and use the reserved space for future expansion. The operation of adding an element to the end might work as follows:
<syntaxhighlight lang="c">
function insertEnd(dynarray a, element e)
    if (a.size == a.capacity)
        // resize a to twice its current capacity:
        a.capacity ← a.capacity * 2
        // (copy the contents to the new memory ___location here)
    a[a.size] ← e
    a.size ← a.size + 1
</syntaxhighlight>
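The pseudocode above might be realised in C roughly as follows. This is only a sketch under simplifying assumptions: an <code>int</code> element type, a hypothetical <code>dynarray</code> struct with <code>data</code>, <code>size</code> and <code>capacity</code> fields, and <code>realloc</code> standing in for the allocate-and-copy step; it is not the implementation used by any particular library.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdlib.h>

typedef struct {
    int    *data;
    size_t  size;      /* logical size  */
    size_t  capacity;  /* physical size */
} dynarray;

/* Append with geometric expansion: double the capacity when the
   reserved space is exhausted, then store the new element. */
bool insert_end(dynarray *a, int e) {
    if (a->size == a->capacity) {
        size_t new_capacity = (a->capacity == 0) ? 1 : a->capacity * 2;
        /* realloc copies the contents to the new memory ___location. */
        int *new_data = realloc(a->data, new_capacity * sizeof *new_data);
        if (new_data == NULL)
            return false;           /* allocation failed; array unchanged */
        a->data = new_data;
        a->capacity = new_capacity;
    }
    a->data[a->size++] = e;
    return true;
}
</syntaxhighlight>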
As ''n'' elements are inserted, the capacities form a [[geometric progression]]. Expanding the array by any constant proportion ''a'' ensures that inserting ''n'' elements takes [[Big O notation|''O''(''n'')]] time overall, meaning that each insertion takes [[Amortized analysis|amortized]] constant time. Many dynamic arrays also deallocate some of the underlying storage if its size drops below a certain threshold, such as 30% of the capacity. This threshold must be strictly smaller than 1/''a'' in order to provide [[hysteresis]] (provide a stable band to avoid repeatedly growing and shrinking) and support mixed sequences of insertions and removals with amortized constant cost.
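A matching removal operation that shrinks the storage could be sketched as below, reusing the hypothetical <code>dynarray</code> struct from the previous example. The shrink threshold of one quarter is an illustrative choice; what matters for the analysis is that it is strictly smaller than 1/''a'' = 1/2 for a doubling array.

<syntaxhighlight lang="c">
/* Remove the last element; halve the buffer once the logical size falls
   below a quarter of the capacity.  Because 1/4 < 1/2 (the inverse of the
   growth factor), alternating insertions and removals near the boundary
   cannot trigger repeated grow/shrink cycles. */
bool remove_end(dynarray *a, int *out) {
    if (a->size == 0)
        return false;
    *out = a->data[--a->size];
    if (a->capacity > 1 && a->size < a->capacity / 4) {
        size_t new_capacity = a->capacity / 2;
        int *new_data = realloc(a->data, new_capacity * sizeof *new_data);
        if (new_data != NULL) {     /* if shrinking fails, keep the old buffer */
            a->data = new_data;
            a->capacity = new_capacity;
        }
    }
    return true;
}
</syntaxhighlight>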
 
Dynamic arrays are a common example when teaching [[amortized analysis]].<ref name="gt-ad"/><ref name="clrs"/>
 
== Growth factor ==
The growth factor for the dynamic array depends on several factors including a space-time trade-off and algorithms used in the memory allocator itself. For growth factor ''a'', the average time per insertion operation is {{citation needed span|text=about ''a''/(''a''−1), while the number of wasted cells is bounded above by (''a''−1)''n''|date=January 2018}}. If the memory allocator uses a [[First fit algorithm|first-fit allocation]] algorithm, then growth factor values such as ''a''=2 can cause dynamic array expansion to run out of memory even though a significant amount of memory may still be available.<ref name=":0">{{Cite web|title = C++ STL vector: definition, growth factor, member functions|url = http://www.gahcep.com/cpp-internals-stl-vector-part-1/|access-date = 2015-08-05|archive-url = https://web.archive.org/web/20150806162750/http://www.gahcep.com/cpp-internals-stl-vector-part-1/|archive-date = 2015-08-06|url-status = dead}}</ref> There have been various discussions on ideal growth factor values, including proposals for the [[golden ratio]] as well as the value 1.5.<ref>{{Cite web|url = https://groups.google.com/forum/#!topic/comp.lang.c++.moderated/asH_VojWKJw%5B1-25%5D|title = vector growth factor of 1.5|website = comp.lang.c++.moderated|publisher = Google Groups|access-date = 2015-08-05|archive-date = 2011-01-22|archive-url = http://arquivo.pt/wayback/20110122130054/https://groups.google.com/forum/#!topic/comp.lang.c++.moderated/asH_VojWKJw%5B1-25%5D|url-status = dead}}</ref> Many textbooks, however, use ''a''&nbsp;=&nbsp;2 for simplicity and analysis purposes.<ref name="gt-ad">{{citation|first1=Michael T.|last1=Goodrich|author1-link=Michael T. Goodrich|first2=Roberto|last2=Tamassia|author2-link=Roberto Tamassia|title=Algorithm Design: Foundations, Analysis and Internet Examples|publisher=Wiley|year=2002|contribution=1.5.2 Analyzing an Extendable Array Implementation|pages=39–41}}.</ref><ref name="clrs">{{Introduction to Algorithms|chapter=17.4 Dynamic tables|edition=2|pages=416–424}}</ref>
 
Below are growth factors used by several popular implementations:
{| class="wikitable"
!Implementation
!Growth factor (''a'')
|-
|Java ArrayList<ref name="java_util_ArrayList" />
|1.5 (3/2)
|-
|[[Python (programming language)|Python]] PyListObject<ref>[https://github.com/python/cpython/blob/bace59d8b8e38f5c779ff6296ebdc0527f6db14a/Objects/listobject.c#L58 List object implementation] from github.com/python/cpython/, retrieved 2020-03-23.</ref>
|~1.125 (n + (n >> 3))
|-
|[[Microsoft Visual C++]] 2013<ref>{{Cite web|title = Dissecting the C++ STL Vector: Part 3 - Capacity & Size|url = https://hadibrais.wordpress.com/2013/11/15/dissecting-the-c-stl-vector-part-3-capacity/|website = Micromysteries|access-date = 2015-08-05|first = Hadi|last = Brais| date=15 November 2013 }}</ref>
|1.5 (3/2)
|-
|[[G++]] 5.2.0<ref name=":0" />
|2
|-
|[[Clang]] 3.6<ref name=":0" />
|2
|-
|Facebook folly/FBVector<ref>{{Cite web|title = facebook/folly|url = https://github.com/facebook/folly/blob/master/folly/docs/FBVector.md|website = GitHub|access-date = 2015-08-05}}</ref>
|1.5 (3/2)
|-
|[[Unreal Engine]] TArray<ref>{{Cite web |date=2025-02-26 |title=Nested TArrays in structs and memory |url=https://forums.unrealengine.com/t/nested-tarrays-in-structs-and-memory/2357416/3 |access-date=2025-05-26 |website=Epic Developer Community Forums |language=en}}</ref>
|~1.375 (n + ((3 * n) >> 3))
|-
|Rust Vec<ref>{{Cite web|title=rust-lang/rust|url=https://github.com/rust-lang/rust/blob/fd4b177aabb9749dfb562c48e47379cea81dc277/src/liballoc/raw_vec.rs#L443|access-date=2020-06-09|website=GitHub|language=en}}</ref>
|2
|-
|[[Go (programming language)|Go]] slices<ref>{{Cite web|title = golang/go|url = https://github.com/golang/go/blob/master/src/runtime/slice.go#L188|website=GitHub|access-date = 2021-09-14}}</ref>
|between 1.25 and 2
|-
|[[Nim (programming language)|Nim]] sequences<ref>{{Cite web |title=The Nim memory model |url=http://zevv.nl/nim-memory/#_growing_a_seq |access-date=2022-05-24 |website=zevv.nl}}</ref>
|2
|-
|[[Steel Bank Common Lisp|SBCL]] ([[Common Lisp]]) vectors<ref>{{Cite web |title = sbcl/sbcl|url= https://github.com/sbcl/sbcl/blob/master/src/code/array.lisp#L1200-L1204|website=GitHub|access-date=2023-02-15}}</ref>
|2
|-
|[[C Sharp (programming language)|C#]] ([[.NET]] 8) List
|2
|}
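As a rough guide to how these factors translate into code, the next-capacity rules quoted in the table correspond to expressions like the following; they are paraphrases for illustration only and ignore each implementation's minimum sizes, overflow checks and rounding details.

<syntaxhighlight lang="c">
#include <stddef.h>

/* Illustrative next-capacity rules corresponding to rows of the table
   above, given the current capacity n. */

/* Growth factor 2, e.g. G++/Clang std::vector, Rust Vec. */
size_t next_capacity_doubling(size_t n)     { return 2 * n; }

/* Growth factor 1.5, e.g. Java ArrayList, MSVC 2013, folly FBVector. */
size_t next_capacity_three_halves(size_t n) { return n + n / 2; }

/* CPython list: roughly 1.125, computed as n + (n >> 3). */
size_t next_capacity_cpython(size_t n)      { return n + (n >> 3); }

/* Unreal Engine TArray: roughly 1.375, computed as n + ((3 * n) >> 3). */
size_t next_capacity_tarray(size_t n)       { return n + ((3 * n) >> 3); }
</syntaxhighlight>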
 
== Performance ==
 
{{List data structure comparison}}
The dynamic array has performance similar to an array, with the addition of new operations to add and remove elements from the end:
 
* Getting or setting the value at a particular index (constant time)
* Iterating over the elements in order (linear time, good cache performance)
* Inserting or deleting an element in the middle of the array (linear time)
* Inserting or deleting an element at the end of the array (constant amortized time)
 
Dynamic arrays benefit from many of the advantages of arrays, including good [[locality of reference]] and [[data cache]] utilization, compactness (low memory use), and [[random access]]. They usually have only a small fixed additional overhead for storing information about the size and capacity. This makes dynamic arrays an attractive tool for building [[Cache (computing)|cache]]-friendly [[Data structure|data structures]]. However, in languages like Python or Java that enforce reference semantics, the dynamic array generally will not store the actual data, but rather it will store [[Reference (computer science)|references]] to the data that resides in other areas of memory. In this case, accessing items in the array sequentially will actually involve accessing multiple non-contiguous areas of memory, so the many advantages of the cache-friendliness of this data structure are lost.
 
Compared to [[linked list]]s, dynamic arrays have faster indexing (constant time versus linear time) and typically faster iteration due to improved locality of reference; however, dynamic arrays require linear time to insert or delete at an arbitrary ___location, since all following elements must be moved, while linked lists can do this in constant time. This disadvantage is mitigated by the [[gap buffer]] and ''tiered vector'' variants discussed under ''Variants'' below. Also, in a highly [[Fragmentation (computer)|fragmented]] memory region, it may be expensive or impossible to find contiguous space for a large dynamic array, whereas linked lists do not require the whole data structure to be stored contiguously.
 
A [[Self-balancing binary search tree|balanced tree]] can store a list while providing all operations of both dynamic arrays and linked lists reasonably efficiently, but both insertion at the end and iteration over the list are slower than for a dynamic array, in theory and in practice, due to non-contiguous storage and tree traversal/manipulation overhead.
== Variants ==
[[Gap buffer]]s are similar to dynamic arrays but allow efficient insertion and deletion operations clustered near the same arbitrary ___location. Some [[deque]] implementations use [[Deque#Implementations|array deques]], which allow amortized constant time insertion/removal at both ends, instead of just one end.
 
Goodrich<ref>{{Citation | title=Tiered Vectors: Efficient Dynamic Arrays for Rank-Based Sequences | first1=Michael T. | last1=Goodrich | author1-link = Michael T. Goodrich | first2=John G. | last2=Kloss II | year=1999 | url=https://archive.org/details/algorithmsdatast0000wads/page/205 | journal=[[Workshop on Algorithms and Data Structures]] | pages=[https://archive.org/details/algorithmsdatast0000wads/page/205 205–216] | doi=10.1007/3-540-48447-7_21 | volume=1663 | series=Lecture Notes in Computer Science | isbn=978-3-540-66279-2 | url-access=registration }}</ref> presented a dynamic array algorithm called ''tiered vectors'' that provides ''O''(''n''<sup>1/''k''</sup>) performance for order preserving insertions and deletions from anywhere in the array, and ''O''(''k'') get and set, where ''k'' ≥ 2 is a constant parameter.
 
[[Hashed array tree|Hashed Array Tree]] (HAT) is a dynamic array algorithm published by Sitarski in 1996.<ref name="sitarski96">{{Citation | title=HATs: Hashed array trees | department=Algorithm Alley | journal=Dr. Dobb's Journal | date=September 1996 | first1=Edward | last1=Sitarski | volume=21 | issue=11 | url=http://www.ddj.com/architect/184409965?pgno=5}}</ref> Hashed array tree wastes order ''n''<sup>1/2</sup> amount of storage space, where ''n'' is the number of elements in the array. The algorithm has ''O''(1) amortized performance when appending a series of objects to the end of a hashed array tree.
 
In a 1999 paper,<ref name="brodnik">{{Citation | title=Resizable Arrays in Optimal Time and Space | type=Technical Report CS-99-09 | url=http://www.cs.uwaterloo.ca/research/tr/1999/09/CS-99-09.pdf | year=1999 | first1=Andrej | last1=Brodnik | first2=Svante | last2=Carlsson | first5=ED | last5=Demaine | first4=JI | last4=Munro | first3=Robert | last3=Sedgewick | author3-link=Robert Sedgewick (computer scientist) | publisher=Department of Computer Science, University of Waterloo}}</ref><!-- Defined in {{List data structure comparison}}: {{Citation | title=Resizable Arrays in Optimal Time and Space | date=Technical Report CS-99-09 | url=http://www.cs.uwaterloo.ca/research/tr/1999/09/CS-99-09.pdf | year=1999 | first1=Andrej | last1=Brodnik | first2=Svante | last2=Carlsson | first5=ED | last5=Demaine | first4=JI | last4=Munro | first3=Robert | last3=Sedgewick | author3-link=Robert Sedgewick (computer scientist) | publisher=Department of Computer Science, University of Waterloo}} --> Brodnik et al. describe a tiered dynamic array data structure, which wastes only ''n''<sup>1/2</sup> space for ''n'' elements at any point in time, and they prove a lower bound showing that any dynamic array must waste this much space if the operations are to remain amortized constant time. Additionally, they present a variant where growing and shrinking the buffer has not only amortized but worst-case constant time.
 
Bagwell (2002)<ref>{{Citation | title=Fast Functional Lists, Hash-Lists, Deques and Variable Length Arrays | first1=Phil | last1=Bagwell | year=2002 | publisher=EPFL | url=http://citeseer.ist.psu.edu/bagwell02fast.html}}</ref> presented the [[VList]] algorithm, which can be adapted to implement a dynamic array.
 
Naïve resizable arrays, also called "the worst implementation" of resizable arrays, keep the allocated size of the array exactly big enough for all the data it contains, perhaps by calling [[realloc]] for each and every item added to the array. Naïve resizable arrays are the simplest way of implementing a resizable array in C. They do not waste any memory, but appending to the end of the array always takes Θ(''n'') time.<ref name="sitarski96" /><ref>
Mike Lam.
[https://w3.cs.jmu.edu/lam2mo/cs240_2015_08/files/04-dyn_arrays.pdf "Dynamic Arrays"].
</ref><ref>
[https://users.cs.northwestern.edu/~jesse/course/cs214-fa19/lec/17-amortized.pdf "Amortized Time"].
</ref><ref>
[https://iq.opengenus.org/hashed-array-tree/ "Hashed Array Tree: Efficient representation of Array"].
</ref><ref>
[https://people.ksp.sk/~kuko/gnarley-trees/Complexity2.html "Different notions of complexity"].
</ref>
Linearly growing arrays pre-allocate ("waste") only Θ(1) space every time they resize the array, making them many times faster than naïve resizable arrays: appending to the end of the array still takes Θ(''n'') time, but with a much smaller constant. Naïve resizable arrays and linearly growing arrays may be useful when a space-constrained application needs many small resizable arrays; they are also commonly used as an educational example leading to exponentially growing dynamic arrays.<ref>
Peter Kankowski.
[https://www.strchr.com/dynamic_arrays "Dynamic arrays in C"].
</ref>
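For contrast with the geometric schemes above, the naïve approach can be sketched in C as follows; the struct and function names are illustrative. Every append calls <code>realloc</code>, so in the typical case each append copies all of the existing elements.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdlib.h>

/* Naïve resizable array of ints: the allocation is kept exactly as large
   as the logical size, so every append pays for a realloc and, typically,
   a copy of all existing elements. */
typedef struct {
    int    *data;
    size_t  size;
} naive_array;

bool naive_push(naive_array *a, int value) {
    int *new_data = realloc(a->data, (a->size + 1) * sizeof *new_data);
    if (new_data == NULL)
        return false;               /* allocation failed; array unchanged */
    a->data = new_data;
    a->data[a->size++] = value;
    return true;
}
</syntaxhighlight>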
 
== Language support ==
 
[[C++]]'s [[Vector (C++)|<code>std::vector</code>]] and [[Rust (programming language)|Rust]]'s <code>std::vec::Vec</code> are implementations of dynamic arrays, as are the <code>ArrayList</code><ref>Javadoc on {{Javadoc:SE|java/util|ArrayList}}</ref> classes supplied with the [[Java (programming language)|Java]] API<ref name=Bloch>{{cite book | title= "Effective Java: Programming Language Guide" |last=Bloch| first=Joshua| publisher=Addison-Wesley | edition=third | isbn=978-0134685991| year=2018}}</ref>{{rp|236}} and the [[.NET Framework]].<ref>[https://msdn.microsoft.com/en-us/library/system.collections.arraylist ArrayList Class]</ref><ref name=Skeet>{{cite book |last=Skeet|first=Jon|title= C# in Depth |date=23 March 2019 |publisher= Manning |isbn= 978-1617294532}}</ref>{{rp|22}}
 
The generic <code>List<></code> class supplied with version 2.0 of the .NET Framework is also implemented with dynamic arrays. [[Smalltalk]]'s <code>OrderedCollection</code> is a dynamic array with dynamic start and end-index, making the removal of the first element also O(1).
 
[[Python (Programming Language)|Python]]'s <code>list</code> datatype implementation is a dynamic array whose growth pattern is 0, 4, 8, 16, 24, 32, 40, 52, 64, 76, ...<ref>[https://github.com/python/cpython/blob/bace59d8b8e38f5c779ff6296ebdc0527f6db14a/Objects/listobject.c#L58 listobject.c (github.com)]</ref>
 
[[Delphi (programming language)|Delphi]] and [[D (programming language)|D]] implement dynamic arrays at the language's core.
 
[[Ada (programming language)|Ada]]'s [[wikibooks:Ada Programming/Libraries/Ada.Containers.Vectors|<code>Ada.Containers.Vectors</code>]] generic package provides dynamic array implementation for a given subtype.
 
Many scripting languages such as [[Perl]] and [[Ruby_(programming_language)|Ruby]] offer dynamic arrays as a built-in [[primitive data type]].
 
Several cross-platform frameworks provide dynamic array implementations for [[C (programming language)|C]], including <code>CFArray</code> and <code>CFMutableArray</code> in [[Core Foundation]], and <code>GArray</code> and <code>GPtrArray</code> in [[GLib]].
 
[[Common Lisp]] provides rudimentary support for resizable vectors by allowing the built-in <code>array</code> type to be configured as ''adjustable'', with the ___location of insertion tracked by the ''fill-pointer''.
 
== See also ==
* [[Stack (data structure)]]
* [[Queue (data structure)]]
 
== References ==
{{reflist}}
 
== External links ==
* [https://xlinux.nist.gov/dads/HTML/dynamicarray.html NIST Dictionary of Algorithms and Data Structures: Dynamic array]
* [http://www.bsdua.org/libbsdua.html#vpool VPOOL] - C language implementation of dynamic array.
* [https://web.archive.org/web/20090704095801/http://www.collectionspy.com/ CollectionSpy] &mdash; A Java profiler with explicit support for debugging ArrayList- and Vector-related issues.
* [http://opendatastructures.org/versions/edition-0.1e/ods-java/2_Array_Based_Lists.html Open Data Structures - Chapter 2 - Array-Based Lists], [[Pat Morin]]
 
{{Data structures}}
[[Category:Arrays]]
[[Category:Articles with example pseudocode]]
[[Category:Amortized data structures]]