In [[computer science]], '''array programming''' refers to solutions that allow the application of operations to an entire set of values at once. Such solutions are commonly used in [[computational science|scientific]] and engineering settings.
Modern programming languages that support array programming (also known as [[vector (data structure)|vector]] or [[multidimensional analysis|multidimensional]] languages) have been engineered specifically to generalize operations on [[scalar (computing)|scalar]]s to apply transparently to [[vector (geometric)|vector]]s, [[matrix (mathematics)|matrices]], and higher-dimensional arrays. These include [[APL (programming language)|APL]], [[J (programming language)|J]], [[Fortran]], [[MATLAB]], [[Analytica (software)|Analytica]], [[GNU Octave|Octave]], [[R (programming language)|R]], [[Cilk Plus]], [[Julia (programming language)|Julia]], and [[Perl Data Language|Perl Data Language (PDL)]].
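As an illustrative sketch of this generalization (written here with the [[NumPy]] library discussed under Third-party libraries below; the array languages listed above express the same idea natively), arithmetic written as if for scalars applies directly to whole vectors and matrices, with no explicit loops:

<syntaxhighlight lang="python">
import numpy as np

# Scalar-style operations generalize transparently to whole arrays.
v = np.array([1, 2, 3, 4])          # a vector
m = np.array([[1, 2], [3, 4]])      # a 2x2 matrix

v2 = v * 2      # element-wise scaling:  [2, 4, 6, 8]
s  = v + v      # element-wise addition: [2, 4, 6, 8]
m2 = m + m      # matrix addition, no explicit loop
p  = m @ m      # matrix multiplication
</syntaxhighlight>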
==Concepts of array==
==Uses==
Array programming is very well suited to [[implicit parallelization]], a topic of ongoing research. Further, [[Intel]] and compatible CPUs developed and produced after 1997 have included various instruction set extensions, starting with [[MMX (instruction set)|MMX]] and continuing through [[SSSE3]] and [[3DNow!]], which provide rudimentary [[Single instruction, multiple data|SIMD]] array capabilities. This has continued into the 2020s with instruction sets such as [[AVX-512]], making modern CPUs sophisticated vector processors. Array processing is distinct from [[parallel computing|parallel processing]] in that one physical processor performs operations on a group of items simultaneously, while parallel processing aims to split a larger problem into smaller ones ([[Multiple instruction, multiple data|MIMD]]) to be solved piecemeal by numerous processors. Processors with [[Multi-core processor|multiple cores]] and [[Graphics processing unit|GPU]]s with thousands of [[General-purpose computing on graphics processing units (software)|general computing cores]] are common as of 2023.
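The sketch below (again using NumPy purely for illustration, not as a statement about any particular CPU or library internals) contrasts a whole-array expression with the equivalent element-by-element loop; the array form states that the same operation applies independently to every element, which is exactly the shape of work that SIMD units and vectorizing compilers can exploit:

<syntaxhighlight lang="python">
import numpy as np

a = np.arange(1_000_000, dtype=np.float32)
b = np.arange(1_000_000, dtype=np.float32)

# One array expression instead of a loop; the underlying compiled
# routine may process several elements per SIMD instruction.
c = a * b + 1.0

# Equivalent scalar loop, shown only for comparison.
d = np.empty_like(a)
for i in range(len(a)):
    d[i] = a[i] * b[i] + 1.0
</syntaxhighlight>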
==Languages==
==Third-party libraries==
The use of specialized and efficient libraries to provide more terse abstractions is also common in other programming languages. In [[C++]] several linear algebra libraries exploit the language's ability to [[operator overloading|overload operators]]. In some cases a very terse abstraction in those languages is explicitly influenced by the array programming paradigm, as in the [[NumPy]] extension library for [[Python (programming language)|Python]] and the [[Armadillo (C++ library)|Armadillo]] and [[Blitz++]] libraries for C++.<ref>{{cite web |title= Reference for Armadillo 1.1.8. Examples of Matlab/Octave syntax and conceptually corresponding Armadillo syntax. |url=
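For example, assuming NumPy is available, solving a linear system and forming its residual can each be written as a single whole-array expression; Armadillo and Blitz++ offer comparably terse C++ syntax through overloaded operators:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)   # solve A x = b without writing any loops
r = A @ x - b               # residual, again expressed on whole arrays
</syntaxhighlight>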
==See also==