Examine individual changes
This page allows you to examine the variables generated by the Edit Filter for an individual change.
Variables generated for this change
Variable | Value |
---|---|
Whether or not the edit is marked as minor (no longer in use) (minor_edit ) | false |
Edit count of the user (user_editcount ) | null |
Name of the user account (user_name ) | '42.111.198.230' |
Age of the user account (user_age ) | 0 |
Groups (including implicit) the user is in (user_groups ) | [
0 => '*'
] |
Rights that the user has (user_rights ) | [
0 => 'createaccount',
1 => 'read',
2 => 'edit',
3 => 'createtalk',
4 => 'writeapi',
5 => 'editmyusercss',
6 => 'editmyuserjs',
7 => 'viewmywatchlist',
8 => 'editmywatchlist',
9 => 'viewmyprivateinfo',
10 => 'editmyprivateinfo',
11 => 'editmyoptions',
12 => 'abusefilter-view',
13 => 'abusefilter-log',
14 => 'abusefilter-log-detail',
15 => 'centralauth-merge',
16 => 'vipsscaler-test',
17 => 'ep-bereviewer'
] |
Global groups that the user is in (global_user_groups ) | [] |
Whether or not a user is editing through the mobile interface (user_mobile ) | true |
Page ID (page_id ) | 890887 |
Page namespace (page_namespace ) | 0 |
Page title without namespace (page_title ) | 'Array programming' |
Full page title (page_prefixedtitle ) | 'Array programming' |
Last ten users to contribute to the page (page_recent_contributors ) | [
0 => 'GünniX',
1 => '86.26.205.55',
2 => 'Chimin 07',
3 => 'ClueBot NG',
4 => '49.149.97.224',
5 => '98.217.113.46',
6 => '2001:8B0:181:1:754D:A4A2:1178:86D',
7 => '81.2.81.111',
8 => 'Cedar101',
9 => '128.12.254.132'
] |
Action (action ) | 'edit' |
Edit summary/reason (summary ) | '... ' |
Old content model (old_content_model ) | 'wikitext' |
New content model (new_content_model ) | 'wikitext' |
Old page wikitext, before the edit (old_wikitext ) | '{{Programming paradigms}}
In [[computer science]], '''array programming languages''' (also known as [[vector (computing)|vector]] or '''multidimensional''' languages) generalize operations on [[scalar (computing)|scalar]]s to apply transparently to [[vector (geometric)|vector]]s, [[matrix (mathematics)|matrices]], and higher-dimensional arrays.
Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language [[one-liner program|one-liners]] that require more than a couple of pages of Java code.<ref>{{cite web |url=http://www.cs.nyu.edu/~michaels/screencasts/Java_vs_K/Java_vs_K.html |title=Java and K |accessdate=2008-01-23 |author=Michael Schidlowsky}}</ref>
Modern programming languages that support array programming are commonly used in [[computational science|scientific]] and engineering settings; these include [[Fortran 90]], Mata, [[MATLAB]], [[Analytica (software)|Analytica]], [[TK Solver]] (as lists), [[GNU Octave|Octave]], [[R (programming language)|R]], [[Cilk Plus]], [[Julia (programming language)|Julia]], [[Perl_Data_Language|Perl Data Language (PDL)]] and the [[NumPy]] extension to [[Python (programming language)|Python]]. In these languages, an operation that operates on entire arrays can be called a '''vectorized''' operation,<ref>{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |last-author-amp=yes |journal=Computing in Science and Engineering |publisher=IEEE |year=2011}}</ref> regardless of whether it is executed on a [[vector processor]] or not.
==Concepts==
The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a [[high-level programming language|high-level programming]] model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations.
Iverson described the rationale behind array programming (actually referring to APL) as follows:<ref>{{cite journal |author= Iverson, K. E. |title= Notations as a Tool of Thought. |journal= Communications of the ACM |volume= 23 |issue= 8 |pages= 444–465 |year= 1980 |url= http://www.jsoftware.com/papers/tot.htm |accessdate= 2011-03-22 |doi=10.1145/358896.358899}}</ref>
{{quote|most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician. [...]
The thesis [...] is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation.
[...] it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.
[...]
Users of computers and programming languages are often concerned primarily with the efficiency of execution of algorithms, and might, therefore, summarily dismiss many of the algorithms presented here. Such dismissal would be short-sighted, since a clear statement of an algorithm can usually be used as a basis from which one may easily derive more efficient algorithm.}}
The basis behind array programming and thinking is to find and exploit the properties of data where individual elements are similar or adjacent. Unlike object orientation which implicitly breaks down data to its constituent parts (or [[scalar (computing)|scalar]] quantities), array orientation looks to group data and apply a uniform handling.
[[Function rank]] is an important concept to array programming languages in general, by analogy to [[tensor]] rank in mathematics: functions that operate on data may be classified by the number of dimensions they act on. Ordinary multiplication, for example, is a scalar ranked function because it operates on zero-dimensional data (individual numbers). The [[cross product]] operation is an example of a vector rank function because it operates on vectors, not scalars. [[Matrix multiplication]] is an example of a 2-rank function, because it operates on 2-dimensional objects (matrices). [[Reduce (higher-order function)|Collapse operators]] reduce the dimensionality of an input data array by one or more dimensions. For example, summing over elements collapses the input array by 1 dimension.
==Uses==
Array programming is very well suited to implicit parallelization; a topic of much research nowadays. Further, [[Intel]] and compatible CPUs developed and produced after 1997 contained various instruction set extensions, starting from [[MMX (instruction set)|MMX]] and continuing through [[SSSE3]] and [[3DNow!]], which include rudimentary [[SIMD]] array capabilities. Array processing is distinct from [[parallel computing|parallel processing]] in that one physical processor performs operations on a group of items simultaneously while parallel processing aims to split a larger problem into smaller ones ([[MIMD]]) to be solved piecemeal by numerous processors. Processors with two or more cores are increasingly common today.
==Languages==
The canonical examples of array programming languages are [[APL (programming language)|APL]], [[J programming language|J]], and [[Fortran]]. Others include: [[D (programming language)|D]], [[A+ (programming language)|A+]], [[Analytica (software)|Analytica]], [[Chapel (programming language)|Chapel]], [[IDL (programming language)|IDL]], [[Julia (programming language)|Julia]], [[K (programming language)|K]], [[Q (programming language from Kx Systems)|Q]], Mata, [[Mathematica]], [[MATLAB]], [[MOLSF]], [[NumPy]], [[GNU Octave]], [[Perl Data Language|PDL]], [[R (programming language)|R]], [[S-Lang (programming language)|S-Lang]], [[SAC programming language|SAC]], [[Nial programming language|Nial]] and [[ZPL (programming language)|ZPL]].
===Scalar languages===
In scalar languages such as [[C (programming language)|C]] and [[Pascal (programming language)|Pascal]], operations apply only to single values, so ''a''+''b'' expresses the addition of two numbers. In such languages, adding one array to another requires indexing and looping, the coding of which is tedious.
<syntaxhighlight lang="c">
for (i = 0; i < n; i++)
for (j = 0; j < n; j++)
a[i][j] += b[i][j];
</syntaxhighlight>
===Array languages===
In array languages, operations are generalized to apply to both scalars and arrays. Thus, ''a''+''b'' expresses the sum of two scalars if ''a'' and ''b'' are scalars, or the sum of two arrays if they are arrays.
An array language simplifies programming but possibly at a cost known as the ''abstraction penalty''.<ref>{{cite journal|author=Surana P |title=Meta-Compilation of Language Abstractions. |year=2006 |url=ftp://lispnyc.org/meeting-assets/2007-02-13_pinku/SuranaThesis.pdf |format=[[PDF]] |accessdate=2008-03-17 |deadurl=yes |archiveurl=https://web.archive.org/web/20150217154926/http://lispnyc.org/meeting-assets/2007-02-13_pinku/SuranaThesis.pdf |archivedate=2015-02-17 |df= }}</ref><ref>{{cite web |last= Kuketayev |title= The Data Abstraction Penalty (DAP) Benchmark for Small Objects in Java. |url= http://www.adtmag.com/joop/article.aspx?id=4597 |accessdate= 2008-03-17}}</ref><ref>{{Cite book |last= Chatzigeorgiou |last2= Stephanides |editor-last= Blieberger |editor2-last= Strohmeier |contribution= Evaluating Performance and Power Of Object-Oriented Vs. Procedural Programming Languages |title= Proceedings - 7th International Conference on Reliable Software Technologies - Ada-Europe'2002 |year= 2002 |pages= 367 |publisher= Springer |url= https://books.google.com/?id=QMalP1P2kAMC&dq=%22abstraction+penalty%22 |isbn= 978-3-540-43784-0 }}</ref> Because the additions are performed in isolation from the rest of the coding, they may not produce the optimally most [[algorithmic efficiency|efficient]] code. (For example, additions of other elements of the same array may be subsequently encountered during the same execution, causing unnecessary repeated lookups.) Even the most sophisticated [[optimizing compiler]] would have an extremely hard time amalgamating two or more apparently disparate functions which might appear in different program sections or sub-routines, even though a programmer could do this easily, aggregating sums on the same pass over the array to minimize [[Computational overhead|overhead]]).
====Ada====
The previous C code would become the following in the [[Ada (programming language)|Ada]] language,<ref>[http://www.adaic.org/standards/05rm/html/RM-TTL.html Ada Reference Manual]: [http://www.adaic.org/resources/add_content/standards/05rm/html/RM-G-3-1.html G.3.1 Real Vectors and Matrices]</ref> which supports array-programming syntax.
<pre>
A := A + B;
</pre>
====APL====
[[APL_(programming_language)|APL]] uses single character Unicode symbols with no syntactic sugar.
<pre>
A ← A + B
</pre>
This operation works on arrays of any rank (including rank 0). Dyalog APL extends the original language with [[augmented assignment]]s:
<pre>
A +← B
</pre>
====Analytica====
[[Analytica (software)|Analytica]] provides the same economy of expression as Ada.
<pre>
A := A + B;
</pre>
====BASIC====
[[Dartmouth BASIC]] had MAT statements for matrix and array manipulation in its third edition (1966).
<syntaxhighlight lang="basic">
DIM A(4),B(4),C(4)
MAT A = 1
MAT B = 2*A
MAT C = A + B
MAT PRINT A,B,C
</syntaxhighlight>
====Mata====
[[Stata]]'s matrix programming language Mata supports array programming. Below, we illustrate addition, multiplication, addition of a matrix and a scalar, element by element multiplication, subscripting, and one of Mata's many inverse matrix functions.
<source lang="stata">
. mata:
: A = (1,2,3) \(4,5,6)
: A
1 2 3
+-------------+
1 | 1 2 3 |
2 | 4 5 6 |
+-------------+
: B = (2..4) \(1..3)
: B
1 2 3
+-------------+
1 | 2 3 4 |
2 | 1 2 3 |
+-------------+
: C = J(3,2,1) // A 3 by 2 matrix of ones
: C
1 2
+---------+
1 | 1 1 |
2 | 1 1 |
3 | 1 1 |
+---------+
: D = A + B
: D
1 2 3
+-------------+
1 | 3 5 7 |
2 | 5 7 9 |
+-------------+
: E = A*C
: E
1 2
+-----------+
1 | 6 6 |
2 | 15 15 |
+-----------+
: F = A:*B
: F
1 2 3
+----------------+
1 | 2 6 12 |
2 | 4 10 18 |
+----------------+
: G = E :+ 3
: G
1 2
+-----------+
1 | 9 9 |
2 | 18 18 |
+-----------+
: H = F[(2\1), (1, 2)] // Subscripting to get a submatrix of F and
: // switch row 1 and 2
: H
1 2
+-----------+
1 | 4 10 |
2 | 2 6 |
+-----------+
: I = invsym(F'*F) // Generalized inverse (F*F^(-1)F=F) of a
: // symmetric positive semi-definite matrix
: I
[symmetric]
1 2 3
+-------------------------------------------+
1 | 0 |
2 | 0 3.25 |
3 | 0 -1.75 .9444444444 |
+-------------------------------------------+
: end
</source>
====MATLAB====
The implementation in [[MATLAB]] allows the same economy allowed by using the Ada language.
<pre>
A = A + B;
</pre>
A variant of the MATLAB language is the [[GNU Octave]] language, which extends the original language with [[augmented assignment]]s:
<pre>
A += B;
</pre>
Both MATLAB and GNU Octave natively support linear algebra operations such as matrix multiplication, [[matrix inversion]], and the numerical solution of [[system of linear equations]], even using the [[Moore–Penrose pseudoinverse]].<ref>{{cite web |title= GNU Octave Manual. Arithmetic Operators. |url= https://www.gnu.org/software/octave/doc/interpreter/Arithmetic-Ops.html |accessdate= 2011-03-19}}</ref><ref>{{cite web |title= MATLAB documentation. Arithmetic Operators. |url= http://www.mathworks.com/help/techdoc/ref/arithmeticoperators.html |accessdate= 2011-03-19}}</ref>
The [[Nial]] example of the inner product of two arrays can be implemented using the native matrix multiplication operator. If <code>a</code> is a row vector of size [1 n] and <code>b</code> is a corresponding column vector of size [n 1].
a * b;
The inner product between two matrices having the same number of elements can be implemented with the auxiliary operator <code>(:)</code>, which reshapes a given matrix into a column vector, and the [[transpose]] operator <code>'</code>:
A(:)' * B(:);
====rasql====
The [[Rasdaman#Raster Query Language|rasdaman query language]] is a database-oriented array-programming language. For example, two arrays could be added with the following query:
<pre>
SELECT A + B
FROM A, B
</pre>
====R====
The [[R (programming language)|R]] language supports array paradigm by default. The following example illustrates a process of multiplication of two matrices followed by an addition of a scalar (which is, in fact, a one-element vector) and a vector:
<syntaxhighlight lang="r">
> A <- matrix(1:6, nrow=2) !!this has nrow=2 ... and A has 2 rows
> A
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
> B <- t( matrix(6:1, nrow=2) ) # t() is a transpose operator !!this has nrow=2 ... and B has 3 rows --- a clear contradiction to the definition of A
> B
[,1] [,2]
[1,] 6 5
[2,] 4 3
[3,] 2 1
> C <- A %*% B
> C
[,1] [,2]
[1,] 28 19
[2,] 40 28
> D <- C + 1
> D
[,1] [,2]
[1,] 29 20
[2,] 41 29
> D + c(1, 1) # c() creates a vector
[,1] [,2]
[1,] 30 21
[2,] 42 30
</syntaxhighlight>
==Mathematical reasoning and language notation==
The matrix left-division operator concisely expresses some semantic properties of matrices. As in the scalar equivalent, if the ([[determinant]] of the) coefficient (matrix) <code>A</code> is not null then it is possible to solve the (vectorial) equation <code>A * x = b</code> by left-multiplying both sides by the [[matrix inversion|inverse]] of <code>A</code>: <code>A<sup>−1</sup></code> (in both MATLAB and GNU Octave languages: <code>A^-1</code>). The following mathematical statements hold when <code>A</code> is a [[matrix rank#Properties|full rank]] [[square matrix#Square matrices|square matrix]]:
:<code>A^-1 *(A * x)==A^-1 * (b)</code>
:<code>(A^-1 * A)* x ==A^-1 * b </code> (matrix-multiplication [[associativity]])
:<code>x = A^-1 * b</code>
where <code>==</code> is the equivalence [[relational operator]].
The previous statements are also valid MATLAB expressions if the third one is executed before the others (numerical comparisons may be false because of round-off errors).
If the system is overdetermined - so that <code>A</code> has more rows than columns - the pseudoinverse <code>A<sup>+</sup></code> (in MATLAB and GNU Octave languages: <code>pinv(A)</code>) can replace the inverse <code>A<sup>−1</sup></code>, as follows:
:<code>pinv(A) *(A * x)==pinv(A) * (b)</code>
:<code>(pinv(A) * A)* x ==pinv(A) * b</code> (matrix-multiplication [[associativity]])
:<code>x = pinv(A) * b</code>
However, these solutions are neither the most concise ones (e.g. still remains the need to notationally differentiate overdetermined systems) nor the most computationally efficient. The latter point is easy to understand when considering again the scalar equivalent <code>a * x = b</code>, for which the solution <code>x = a^-1 * b</code> would require two operations instead of the more efficient <code>x = b / a</code>.
The problem is that generally matrix multiplications are not [[commutativity|commutative]] as the extension of the scalar solution to the matrix case would require:
:<code>(a * x)/ a ==b / a</code>
:<code>(x * a)/ a ==b / a</code> (commutativity does not hold for matrices!)
:<code>x * (a / a)==b / a</code> (associativity also holds for matrices)
:<code>x = b / a</code>
The MATLAB language introduces the left-division operator <code>\</code> to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness:
:<code>A \ (A * x)==A \ b</code>
:<code>(A \ A)* x ==A \ b</code> (associativity also holds for matrices, commutativity is no more required)
:<code>x = A \ b</code>
This is not only an example of terse array programming from the coding point of view but also from the computational efficiency perspective, which in several array programming languages benefits from quite efficient linear algebra libraries such as [[Automatically Tuned Linear Algebra Software|ATLAS]] or [[LAPACK]].<ref>{{cite web |title= GNU Octave Manual. Appendix G Installing Octave. |url= https://www.gnu.org/software/octave/doc/interpreter/Installation.html |accessdate= 2011-03-19}}</ref><ref>{{cite web |title= Mathematica 5.2 Documentation. Software References. |url= http://reference.wolfram.com/legacy/v5_2/Built-inFunctions/AdvancedDocumentation/LinearAlgebra/LinearAlgebraInMathematica/Appendix/AdvancedDocumentationLinearAlgebra7.0.html |accessdate= 2011-03-19}}</ref>
Returning to the previous quotation of Iverson, the rationale behind it should now be evident: {{quote|it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.}}
==Third-party libraries==
The use of specialized and efficient libraries to provide more terse abstractions is also common in other programming languages. In [[C++]] several linear algebra libraries exploit the language ability to [[operator overloading|overload operators]]. In some cases a very terse abstraction in those languages is explicitly influenced by the array programming paradigm, as the [[Armadillo (C++ library)|Armadillo]] and [[Blitz++]] libraries do.<ref>{{cite web |title= Reference for Armadillo 1.1.8. Examples of Matlab/Octave syntax and conceptually corresponding Armadillo syntax. |url= http://arma.sourceforge.net/docs.html#syntax |accessdate= 2011-03-19}}</ref><ref>{{cite web |title= Blitz++ User's Guide. 3. Array Expressions. |url= http://www.oonumerics.org/blitz/docs/blitz_3.html#SEC80 |accessdate= 2011-03-19}}</ref>
==See also==
* [[Array slicing]]
* [[List of programming languages by type#Array languages|List of array programming languages]]
==References==
{{reflist|30em}}
==External links==
*[http://www.nsl.com/ "No stinking loops" programming]
*[http://www.vector.org.uk/archive/v223/smill222.htm Discovering Array Languages]
{{Programming language}}
[[Category:Array programming languages| ]]
[[Category:Programming paradigms]]
[[Category:Articles with example MATLAB/Octave code]]
[[Category:Articles with example BASIC code]]
[[Category:Articles with example Ada code]]' |
New page wikitext, after the edit (new_wikitext ) | '{{Programming paradigms}}
In [[computer science]], '''array programming languages''' (also known as [[vector (computing)|vector]] or '''multidimensional''' languages) generalize operations on [[scalar (computing)|scalar]]s to apply transparently to [[vector (geometric)|vector]]s, [[matrix (mathematics)|matrices]], and higher-dimensional arrays.
Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language [[one-liner program|one-liners]] that require more than a couple of pages of Java code.<ref>{{cite web |url=http://www.cs.nyu.edu/~michaels/screencasts/Java_vs_K/Java_vs_K.html |title=Java and K |accessdate=2008-01-23 |author=Michael Schidlowsky}}</ref>
Modern programming languages that support array programming are commonly used in [[computational science|scientific]] and engineering settings; these include [[Fortran 90]], Mata, [[MATLAB]], [[Analytica (software)|Analytica]], [[TK Solver]] (as lists), [[GNU Octave|Octave]], [[R (programming language)|R]], [[Cilk Plus]], [[Julia (programming language)|Julia]], [[Perl_Data_Language|Perl Data Language (PDL)]] and the [[NumPy]] extension to [[Python (programming language)|Python]]. In these languages, an operation that operates on entire arrays can be called a '''vectorized''' operation,<ref>{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |last-author-amp=yes |journal=Computing in Science and Engineering |publisher=IEEE |year=2011}}</ref> regardless of whether it is executed on a [[vector processor]] or not. https://youtu.be/fkCeOwuH-2A
==Concepts==
The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a [[high-level programming language|high-level programming]] model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations.
Iverson described the rationale behind array programming (actually referring to APL) as follows:<ref>{{cite journal |author= Iverson, K. E. |title= Notations as a Tool of Thought. |journal= Communications of the ACM |volume= 23 |issue= 8 |pages= 444–465 |year= 1980 |url= http://www.jsoftware.com/papers/tot.htm |accessdate= 2011-03-22 |doi=10.1145/358896.358899}}</ref>
{{quote|most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician. [...]
The thesis [...] is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation.
[...] it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.
[...]
Users of computers and programming languages are often concerned primarily with the efficiency of execution of algorithms, and might, therefore, summarily dismiss many of the algorithms presented here. Such dismissal would be short-sighted, since a clear statement of an algorithm can usually be used as a basis from which one may easily derive more efficient algorithm.}}
The basis behind array programming and thinking is to find and exploit the properties of data where individual elements are similar or adjacent. Unlike object orientation which implicitly breaks down data to its constituent parts (or [[scalar (computing)|scalar]] quantities), array orientation looks to group data and apply a uniform handling.
[[Function rank]] is an important concept to array programming languages in general, by analogy to [[tensor]] rank in mathematics: functions that operate on data may be classified by the number of dimensions they act on. Ordinary multiplication, for example, is a scalar ranked function because it operates on zero-dimensional data (individual numbers). The [[cross product]] operation is an example of a vector rank function because it operates on vectors, not scalars. [[Matrix multiplication]] is an example of a 2-rank function, because it operates on 2-dimensional objects (matrices). [[Reduce (higher-order function)|Collapse operators]] reduce the dimensionality of an input data array by one or more dimensions. For example, summing over elements collapses the input array by 1 dimension.
==Uses==
Array programming is very well suited to implicit parallelization; a topic of much research nowadays. Further, [[Intel]] and compatible CPUs developed and produced after 1997 contained various instruction set extensions, starting from [[MMX (instruction set)|MMX]] and continuing through [[SSSE3]] and [[3DNow!]], which include rudimentary [[SIMD]] array capabilities. Array processing is distinct from [[parallel computing|parallel processing]] in that one physical processor performs operations on a group of items simultaneously while parallel processing aims to split a larger problem into smaller ones ([[MIMD]]) to be solved piecemeal by numerous processors. Processors with two or more cores are increasingly common today.
==Languages==
The canonical examples of array programming languages are [[APL (programming language)|APL]], [[J programming language|J]], and [[Fortran]]. Others include: [[D (programming language)|D]], [[A+ (programming language)|A+]], [[Analytica (software)|Analytica]], [[Chapel (programming language)|Chapel]], [[IDL (programming language)|IDL]], [[Julia (programming language)|Julia]], [[K (programming language)|K]], [[Q (programming language from Kx Systems)|Q]], Mata, [[Mathematica]], [[MATLAB]], [[MOLSF]], [[NumPy]], [[GNU Octave]], [[Perl Data Language|PDL]], [[R (programming language)|R]], [[S-Lang (programming language)|S-Lang]], [[SAC programming language|SAC]], [[Nial programming language|Nial]] and [[ZPL (programming language)|ZPL]].
===Scalar languages===
In scalar languages such as [[C (programming language)|C]] and [[Pascal (programming language)|Pascal]], operations apply only to single values, so ''a''+''b'' expresses the addition of two numbers. In such languages, adding one array to another requires indexing and looping, the coding of which is tedious.
<syntaxhighlight lang="c">
for (i = 0; i < n; i++)
for (j = 0; j < n; j++)
a[i][j] += b[i][j];
</syntaxhighlight>
===Array languages===
In array languages, operations are generalized to apply to both scalars and arrays. Thus, ''a''+''b'' expresses the sum of two scalars if ''a'' and ''b'' are scalars, or the sum of two arrays if they are arrays.
An array language simplifies programming but possibly at a cost known as the ''abstraction penalty''.<ref>{{cite journal|author=Surana P |title=Meta-Compilation of Language Abstractions. |year=2006 |url=ftp://lispnyc.org/meeting-assets/2007-02-13_pinku/SuranaThesis.pdf |format=[[PDF]] |accessdate=2008-03-17 |deadurl=yes |archiveurl=https://web.archive.org/web/20150217154926/http://lispnyc.org/meeting-assets/2007-02-13_pinku/SuranaThesis.pdf |archivedate=2015-02-17 |df= }}</ref><ref>{{cite web |last= Kuketayev |title= The Data Abstraction Penalty (DAP) Benchmark for Small Objects in Java. |url= http://www.adtmag.com/joop/article.aspx?id=4597 |accessdate= 2008-03-17}}</ref><ref>{{Cite book |last= Chatzigeorgiou |last2= Stephanides |editor-last= Blieberger |editor2-last= Strohmeier |contribution= Evaluating Performance and Power Of Object-Oriented Vs. Procedural Programming Languages |title= Proceedings - 7th International Conference on Reliable Software Technologies - Ada-Europe'2002 |year= 2002 |pages= 367 |publisher= Springer |url= https://books.google.com/?id=QMalP1P2kAMC&dq=%22abstraction+penalty%22 |isbn= 978-3-540-43784-0 }}</ref> Because the additions are performed in isolation from the rest of the coding, they may not produce the optimally most [[algorithmic efficiency|efficient]] code. (For example, additions of other elements of the same array may be subsequently encountered during the same execution, causing unnecessary repeated lookups.) Even the most sophisticated [[optimizing compiler]] would have an extremely hard time amalgamating two or more apparently disparate functions which might appear in different program sections or sub-routines, even though a programmer could do this easily, aggregating sums on the same pass over the array to minimize [[Computational overhead|overhead]]).
====Ada====
The previous C code would become the following in the [[Ada (programming language)|Ada]] language,<ref>[http://www.adaic.org/standards/05rm/html/RM-TTL.html Ada Reference Manual]: [http://www.adaic.org/resources/add_content/standards/05rm/html/RM-G-3-1.html G.3.1 Real Vectors and Matrices]</ref> which supports array-programming syntax.
<pre>
A := A + B;
</pre>
====APL====
[[APL_(programming_language)|APL]] uses single character Unicode symbols with no syntactic sugar.
<pre>
A ← A + B
</pre>
This operation works on arrays of any rank (including rank 0). Dyalog APL extends the original language with [[augmented assignment]]s:
<pre>
A +← B
</pre>
====Analytica====
[[Analytica (software)|Analytica]] provides the same economy of expression as Ada.
<pre>
A := A + B;
</pre>
====BASIC====
[[Dartmouth BASIC]] had MAT statements for matrix and array manipulation in its third edition (1966).
<syntaxhighlight lang="basic">
DIM A(4),B(4),C(4)
MAT A = 1
MAT B = 2*A
MAT C = A + B
MAT PRINT A,B,C
</syntaxhighlight>
====Mata====
[[Stata]]'s matrix programming language Mata supports array programming. Below, we illustrate addition, multiplication, addition of a matrix and a scalar, element by element multiplication, subscripting, and one of Mata's many inverse matrix functions.
<source lang="stata">
. mata:
: A = (1,2,3) \(4,5,6)
: A
1 2 3
+-------------+
1 | 1 2 3 |
2 | 4 5 6 |
+-------------+
: B = (2..4) \(1..3)
: B
1 2 3
+-------------+
1 | 2 3 4 |
2 | 1 2 3 |
+-------------+
: C = J(3,2,1) // A 3 by 2 matrix of ones
: C
1 2
+---------+
1 | 1 1 |
2 | 1 1 |
3 | 1 1 |
+---------+
: D = A + B
: D
1 2 3
+-------------+
1 | 3 5 7 |
2 | 5 7 9 |
+-------------+
: E = A*C
: E
1 2
+-----------+
1 | 6 6 |
2 | 15 15 |
+-----------+
: F = A:*B
: F
1 2 3
+----------------+
1 | 2 6 12 |
2 | 4 10 18 |
+----------------+
: G = E :+ 3
: G
1 2
+-----------+
1 | 9 9 |
2 | 18 18 |
+-----------+
: H = F[(2\1), (1, 2)] // Subscripting to get a submatrix of F and
: // switch row 1 and 2
: H
1 2
+-----------+
1 | 4 10 |
2 | 2 6 |
+-----------+
: I = invsym(F'*F) // Generalized inverse (F*F^(-1)F=F) of a
: // symmetric positive semi-definite matrix
: I
[symmetric]
1 2 3
+-------------------------------------------+
1 | 0 |
2 | 0 3.25 |
3 | 0 -1.75 .9444444444 |
+-------------------------------------------+
: end
</source>
====MATLAB====
The implementation in [[MATLAB]] allows the same economy allowed by using the Ada language.
<pre>
A = A + B;
</pre>
A variant of the MATLAB language is the [[GNU Octave]] language, which extends the original language with [[augmented assignment]]s:
<pre>
A += B;
</pre>
Both MATLAB and GNU Octave natively support linear algebra operations such as matrix multiplication, [[matrix inversion]], and the numerical solution of [[system of linear equations]], even using the [[Moore–Penrose pseudoinverse]].<ref>{{cite web |title= GNU Octave Manual. Arithmetic Operators. |url= https://www.gnu.org/software/octave/doc/interpreter/Arithmetic-Ops.html |accessdate= 2011-03-19}}</ref><ref>{{cite web |title= MATLAB documentation. Arithmetic Operators. |url= http://www.mathworks.com/help/techdoc/ref/arithmeticoperators.html |accessdate= 2011-03-19}}</ref>
The [[Nial]] example of the inner product of two arrays can be implemented using the native matrix multiplication operator. If <code>a</code> is a row vector of size [1 n] and <code>b</code> is a corresponding column vector of size [n 1].
a * b;
The inner product between two matrices having the same number of elements can be implemented with the auxiliary operator <code>(:)</code>, which reshapes a given matrix into a column vector, and the [[transpose]] operator <code>'</code>:
A(:)' * B(:);
====rasql====
The [[Rasdaman#Raster Query Language|rasdaman query language]] is a database-oriented array-programming language. For example, two arrays could be added with the following query:
<pre>
SELECT A + B
FROM A, B
</pre>
====R====
The [[R (programming language)|R]] language supports array paradigm by default. The following example illustrates a process of multiplication of two matrices followed by an addition of a scalar (which is, in fact, a one-element vector) and a vector:
<syntaxhighlight lang="r">
> A <- matrix(1:6, nrow=2) !!this has nrow=2 ... and A has 2 rows
> A
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
> B <- t( matrix(6:1, nrow=2) ) # t() is a transpose operator !!this has nrow=2 ... and B has 3 rows --- a clear contradiction to the definition of A
> B
[,1] [,2]
[1,] 6 5
[2,] 4 3
[3,] 2 1
> C <- A %*% B
> C
[,1] [,2]
[1,] 28 19
[2,] 40 28
> D <- C + 1
> D
[,1] [,2]
[1,] 29 20
[2,] 41 29
> D + c(1, 1) # c() creates a vector
[,1] [,2]
[1,] 30 21
[2,] 42 30
</syntaxhighlight>
==Mathematical reasoning and language notation==
The matrix left-division operator concisely expresses some semantic properties of matrices. As in the scalar equivalent, if the ([[determinant]] of the) coefficient (matrix) <code>A</code> is not null then it is possible to solve the (vectorial) equation <code>A * x = b</code> by left-multiplying both sides by the [[matrix inversion|inverse]] of <code>A</code>: <code>A<sup>−1</sup></code> (in both MATLAB and GNU Octave languages: <code>A^-1</code>). The following mathematical statements hold when <code>A</code> is a [[matrix rank#Properties|full rank]] [[square matrix#Square matrices|square matrix]]:
:<code>A^-1 *(A * x)==A^-1 * (b)</code>
:<code>(A^-1 * A)* x ==A^-1 * b </code> (matrix-multiplication [[associativity]])
:<code>x = A^-1 * b</code>
where <code>==</code> is the equivalence [[relational operator]].
The previous statements are also valid MATLAB expressions if the third one is executed before the others (numerical comparisons may be false because of round-off errors).
If the system is overdetermined - so that <code>A</code> has more rows than columns - the pseudoinverse <code>A<sup>+</sup></code> (in MATLAB and GNU Octave languages: <code>pinv(A)</code>) can replace the inverse <code>A<sup>−1</sup></code>, as follows:
:<code>pinv(A) *(A * x)==pinv(A) * (b)</code>
:<code>(pinv(A) * A)* x ==pinv(A) * b</code> (matrix-multiplication [[associativity]])
:<code>x = pinv(A) * b</code>
However, these solutions are neither the most concise ones (e.g. still remains the need to notationally differentiate overdetermined systems) nor the most computationally efficient. The latter point is easy to understand when considering again the scalar equivalent <code>a * x = b</code>, for which the solution <code>x = a^-1 * b</code> would require two operations instead of the more efficient <code>x = b / a</code>.
The problem is that generally matrix multiplications are not [[commutativity|commutative]] as the extension of the scalar solution to the matrix case would require:
:<code>(a * x)/ a ==b / a</code>
:<code>(x * a)/ a ==b / a</code> (commutativity does not hold for matrices!)
:<code>x * (a / a)==b / a</code> (associativity also holds for matrices)
:<code>x = b / a</code>
The MATLAB language introduces the left-division operator <code>\</code> to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness:
:<code>A \ (A * x)==A \ b</code>
:<code>(A \ A)* x ==A \ b</code> (associativity also holds for matrices, commutativity is no more required)
:<code>x = A \ b</code>
This is not only an example of terse array programming from the coding point of view but also from the computational efficiency perspective, which in several array programming languages benefits from quite efficient linear algebra libraries such as [[Automatically Tuned Linear Algebra Software|ATLAS]] or [[LAPACK]].<ref>{{cite web |title= GNU Octave Manual. Appendix G Installing Octave. |url= https://www.gnu.org/software/octave/doc/interpreter/Installation.html |accessdate= 2011-03-19}}</ref><ref>{{cite web |title= Mathematica 5.2 Documentation. Software References. |url= http://reference.wolfram.com/legacy/v5_2/Built-inFunctions/AdvancedDocumentation/LinearAlgebra/LinearAlgebraInMathematica/Appendix/AdvancedDocumentationLinearAlgebra7.0.html |accessdate= 2011-03-19}}</ref>
Returning to the previous quotation of Iverson, the rationale behind it should now be evident: {{quote|it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.}}
==Third-party libraries==
The use of specialized and efficient libraries to provide more terse abstractions is also common in other programming languages. In [[C++]] several linear algebra libraries exploit the language ability to [[operator overloading|overload operators]]. In some cases a very terse abstraction in those languages is explicitly influenced by the array programming paradigm, as the [[Armadillo (C++ library)|Armadillo]] and [[Blitz++]] libraries do.<ref>{{cite web |title= Reference for Armadillo 1.1.8. Examples of Matlab/Octave syntax and conceptually corresponding Armadillo syntax. |url= http://arma.sourceforge.net/docs.html#syntax |accessdate= 2011-03-19}}</ref><ref>{{cite web |title= Blitz++ User's Guide. 3. Array Expressions. |url= http://www.oonumerics.org/blitz/docs/blitz_3.html#SEC80 |accessdate= 2011-03-19}}</ref>
==See also==
* [[Array slicing]]
* [[List of programming languages by type#Array languages|List of array programming languages]]
==References==
{{reflist|30em}}
==External links==
*[http://www.nsl.com/ "No stinking loops" programming]
*[http://www.vector.org.uk/archive/v223/smill222.htm Discovering Array Languages]
{{Programming language}}
[[Category:Array programming languages| ]]
[[Category:Programming paradigms]]
[[Category:Articles with example MATLAB/Octave code]]
[[Category:Articles with example BASIC code]]
[[Category:Articles with example Ada code]]' |
Unified diff of changes made by edit (edit_diff ) | '@@ -4,5 +4,5 @@
Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon to find array programming language [[one-liner program|one-liners]] that require more than a couple of pages of Java code.<ref>{{cite web |url=http://www.cs.nyu.edu/~michaels/screencasts/Java_vs_K/Java_vs_K.html |title=Java and K |accessdate=2008-01-23 |author=Michael Schidlowsky}}</ref>
-Modern programming languages that support array programming are commonly used in [[computational science|scientific]] and engineering settings; these include [[Fortran 90]], Mata, [[MATLAB]], [[Analytica (software)|Analytica]], [[TK Solver]] (as lists), [[GNU Octave|Octave]], [[R (programming language)|R]], [[Cilk Plus]], [[Julia (programming language)|Julia]], [[Perl_Data_Language|Perl Data Language (PDL)]] and the [[NumPy]] extension to [[Python (programming language)|Python]]. In these languages, an operation that operates on entire arrays can be called a '''vectorized''' operation,<ref>{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |last-author-amp=yes |journal=Computing in Science and Engineering |publisher=IEEE |year=2011}}</ref> regardless of whether it is executed on a [[vector processor]] or not.
+Modern programming languages that support array programming are commonly used in [[computational science|scientific]] and engineering settings; these include [[Fortran 90]], Mata, [[MATLAB]], [[Analytica (software)|Analytica]], [[TK Solver]] (as lists), [[GNU Octave|Octave]], [[R (programming language)|R]], [[Cilk Plus]], [[Julia (programming language)|Julia]], [[Perl_Data_Language|Perl Data Language (PDL)]] and the [[NumPy]] extension to [[Python (programming language)|Python]]. In these languages, an operation that operates on entire arrays can be called a '''vectorized''' operation,<ref>{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |last-author-amp=yes |journal=Computing in Science and Engineering |publisher=IEEE |year=2011}}</ref> regardless of whether it is executed on a [[vector processor]] or not. https://youtu.be/fkCeOwuH-2A
==Concepts==
' |
New page size (new_size ) | 20082 |
Old page size (old_size ) | 20053 |
Size change in edit (edit_delta ) | 29 |
Lines added in edit (added_lines ) | [
0 => 'Modern programming languages that support array programming are commonly used in [[computational science|scientific]] and engineering settings; these include [[Fortran 90]], Mata, [[MATLAB]], [[Analytica (software)|Analytica]], [[TK Solver]] (as lists), [[GNU Octave|Octave]], [[R (programming language)|R]], [[Cilk Plus]], [[Julia (programming language)|Julia]], [[Perl_Data_Language|Perl Data Language (PDL)]] and the [[NumPy]] extension to [[Python (programming language)|Python]]. In these languages, an operation that operates on entire arrays can be called a '''vectorized''' operation,<ref>{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |last-author-amp=yes |journal=Computing in Science and Engineering |publisher=IEEE |year=2011}}</ref> regardless of whether it is executed on a [[vector processor]] or not. https://youtu.be/fkCeOwuH-2A'
] |
Lines removed in edit (removed_lines ) | [
0 => 'Modern programming languages that support array programming are commonly used in [[computational science|scientific]] and engineering settings; these include [[Fortran 90]], Mata, [[MATLAB]], [[Analytica (software)|Analytica]], [[TK Solver]] (as lists), [[GNU Octave|Octave]], [[R (programming language)|R]], [[Cilk Plus]], [[Julia (programming language)|Julia]], [[Perl_Data_Language|Perl Data Language (PDL)]] and the [[NumPy]] extension to [[Python (programming language)|Python]]. In these languages, an operation that operates on entire arrays can be called a '''vectorized''' operation,<ref>{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |last-author-amp=yes |journal=Computing in Science and Engineering |publisher=IEEE |year=2011}}</ref> regardless of whether it is executed on a [[vector processor]] or not.'
] |
Whether or not the change was made through a Tor exit node (tor_exit_node ) | 0 |
Unix timestamp of change (timestamp ) | 1515435009 |