{{Short description|Type of parallel processing}}
{{Redirect|SIMD|the cryptographic hash function|SIMD (hash function)|the Scottish statistical tool|Scottish index of multiple deprivation}}
{{See also|SIMD within a register|Single instruction, multiple threads}}
{{Flynn's Taxonomy}}
{{Update|inaccurate=yes|date=March 2017}}
 
[[File:SIMD2.svg|thumb|Single instruction, multiple data]]
 
'''Single instruction, multiple data''' ('''SIMD''') is a type of [[parallel computer|parallel computing]] (processing) in [[Flynn's taxonomy]]. SIMD describes computers with [[multiple processing elements]] that perform the same operation on multiple data points simultaneously. SIMD can be internal (part of the hardware design) and it can be directly accessible through an [[instruction set architecture]] (ISA), but it should not be confused with an ISA.
 
Such machines exploit [[Data parallelism|data level parallelism]], but not [[Concurrent computing|concurrency]]: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). A simple example is adding many pairs of numbers together: all of the SIMD units perform an addition, but each one has a different pair of values to add. SIMD is especially applicable to common tasks such as adjusting the contrast in a [[digital image]] or adjusting the volume of [[digital audio]]. Most modern [[central processing unit]] (CPU) designs include SIMD instructions to improve the performance of [[multimedia]] use. In recent CPUs, SIMD units are tightly coupled with cache hierarchies and prefetch mechanisms, which minimize latency during large block operations. For instance, AVX-512-enabled processors combine hardware prefetching of cache lines with fused multiply-add (FMA) instructions that operate on a full 512-bit register at a time.
 
== Confusion between SIMT and SIMD ==
{{See also|SIMD within a register|Single instruction, multiple threads|Vector processor}}
 
[[Image:ILLIAC_IV.jpg|thumb|[[ILLIAC IV]] Array overview, from ARPA-funded Introductory description by Steward Denenberg, July 15, 1971<ref>{{Cite web | title=Archived copy | url=https://apps.dtic.mil/sti/tr/pdf/ADA954882.pdf | archive-url=https://web.archive.org/web/20240427173522/https://apps.dtic.mil/sti/tr/pdf/ADA954882.pdf | archive-date=2024-04-27}}</ref>]]
 
SIMD has three different subcategories in [[Flynn's taxonomy#Single instruction stream, multiple data streams (SIMD)|Flynn's 1972 Taxonomy]], one of which is [[single instruction, multiple threads]] (SIMT). SIMT should not be confused with [[Thread (computing)|software threads]] or [[Multithreading (computer architecture)|hardware threads]], both of which are task time-sharing (time-slicing). SIMT is true simultaneous parallel hardware-level execution, such as in [[ILLIAC IV]].
 
SIMD should not be confused with [[Vector processing]], characterized by the [[Cray 1]] and clarified in [[Duncan's taxonomy]]. The [[Vector processor#Difference between SIMD and vector processors|difference between SIMD and vector processors]] is primarily the presence of a Cray-style {{code|SET VECTOR LENGTH}} instruction.
 
One key distinction between SIMT and SIMD is that the SIMD unit will not have its own memory.
Another key distinction in SIMT is the presence of control flow mechanisms like warps ([[Nvidia]] terminology) or wavefronts (Advanced Micro Devices ([[AMD]]) terminology). [[ILLIAC IV]] simply called them "Control Signals". These signals ensure that each Processing Element in the entire parallel array is synchronized in its simultaneous execution of the (one, current) broadcast instruction.
 
Each hardware element (PU, or PE in [[ILLIAC IV]] terminology) working on an individual data item is sometimes also referred to as a [[SIMD lane]] or channel. The ILLIAC IV PE was a scalar 64-bit unit that could do 2x32-bit [[Predication_(computer_architecture)#SIMD,_SIMT_and_vector_predication|predication]]. Modern [[graphics processing unit]]s (GPUs) are invariably wide [[SIMD within a register]] (SWAR) and typically have more than 16 channels of such Processing Elements.{{cn|date=July 2025}} SWAR performs concurrent sub-word [[8-bit computing|8-bit]], [[16-bit computing|16-bit]], and [[32-bit computing|32-bit]] operations.
Additionally, SIMD can exist in both fixed and scalable vector forms. Fixed-width SIMD units operate on a constant number of data points per instruction, while scalable designs, like RISC-V Vector or ARM's SVE, allow the number of data elements to vary depending on the hardware implementation. This improves forward compatibility across generations of processors.
 
==History==
The first known operational use of [[SIMD within a register]] was in the [[TX-2]], in 1958. It was capable of 36-bit operations, and of two 18-bit or four 9-bit sub-word operations.
 
The first commercial use of SIMD instructions was in the [[ILLIAC IV]], which was completed in 1972. This included 64 (of an original design of 256) processors that had local memory to hold different values while performing the same instruction. Separate hardware quickly sent out the values to be processed and gathered up the results.
 
SIMD was the basis for [[vector processor|Vector supercomputers]] of the early 1970s such as the [[CDC STAR-100|CDC Star-100]] and the [[TI Advanced Scientific Computer|Texas Instruments ASC]], which could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by [[Cray]] in the 1970s and 1980s. Vector processing architectures are now considered separate from SIMD computers: [[Duncan's Taxonomy]] includes them whereas [[Flynn's Taxonomy]] does not, due to Flynn's work (1966, 1972) pre-dating the [[Cray-1]] (1977). The complexity of vector processors, however, inspired a simpler arrangement known as [[SIMD within a register]].
 
The first era of modern SIMD computers was characterized by [[Massive parallel processing|massively parallel processing]]-style [[supercomputer]]s such as the [[Thinking Machines Corporation|Thinking Machines]] [[Connection Machine]] CM-1 and CM-2. These computers had many limited-functionality processors that would work in parallel. For example, each of 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing it, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalar [[multiple instruction, multiple data]] (MIMD) approaches based on commodity processors such as the [[Intel i860|Intel i860 XP]] became more powerful, and interest in SIMD waned.<ref>{{cite web|url=http://www.cs.kent.edu/~walker/classes/pdc.f01/lectures/MIMD-1.pdf|title=MIMD1 - XP/S, CM-5}}</ref>
 
The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this particular type of computing power, and microprocessor vendors turned to SIMD to meet the demand.<ref name="conte">{{cite conference |title=The long and winding road to high-performance image processing with MMX/SSE |first1=G. |last1=Conte |first2=S. |last2=Tommesani |first3=F. |last3=Zanichelli |book-title=Proc. Fifth IEEE Int'l Workshop on Computer Architectures for Machine Perception |year=2000 |doi=10.1109/CAMP.2000.875989 |s2cid=13180531 |hdl=11381/2297671}}</ref> This resurgence also coincided with the rise of [[DirectX]] and OpenGL shader models, which heavily leveraged SIMD under the hood. The graphics APIs encouraged programmers to adopt data-parallel programming styles, indirectly accelerating SIMD adoption in desktop software. Hewlett-Packard introduced [[Multimedia Acceleration eXtensions]] (MAX) instructions into [[PA-RISC]] 1.1 desktops in 1994 to accelerate MPEG decoding.<ref>{{cite book |first=R.B. |last=Lee |chapter=Realtime MPEG video via software decompression on a PA-RISC processor |title=digest of papers Compcon '95. Technologies for the Information Superhighway |year=1995 |pages=186–192 |doi=10.1109/CMPCON.1995.512384 |isbn=0-8186-7029-0|s2cid=2262046 }}</ref> Sun Microsystems introduced SIMD integer instructions in its "[[Visual Instruction Set|VIS]]" instruction set extensions in 1995, in its [[UltraSPARC|UltraSPARC I]] microprocessor. MIPS followed suit with their similar [[MDMX]] system.
 
The first widely deployed desktop SIMD was with Intel's [[MMX (instruction set)|MMX]] extensions to the [[x86]] architecture in 1996. This sparked the introduction of the much more powerful [[AltiVec]] system in the [[Motorola]] [[PowerPC]] and IBM's [[IBM Power microprocessors|POWER]] systems. Intel responded in 1999 by introducing the all-new [[Streaming SIMD Extensions|SSE]] system. Since then, there have been several extensions to the SIMD instruction sets for both architectures. [[Advanced Vector Extensions]] (AVX), [[AVX2]] and [[AVX-512]] were developed by Intel. AMD supports AVX, AVX2, and AVX-512 in their current products.<ref>{{Cite web |title=AMD Zen 4 AVX-512 Performance Analysis On The Ryzen 9 7950X Review |url=https://www.phoronix.com/review/amd-zen4-avx512 |access-date=2023-07-13 |website=www.phoronix.com |language=en}}</ref>
 
==Advantages==
An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many [[multimedia]] applications. One example would be changing the brightness of an image. Each [[pixel]] of an image consists of three values for the brightness of the red (R), green (G) and blue (B) portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory. Audio [[digital signal processor]]s (DSPs) would likewise, for volume control, multiply both Left and Right channels simultaneously.
 
With a SIMD processor there are two improvements to this process. For one, the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "retrieve this pixel, now retrieve the next pixel", a SIMD processor will have a single instruction that effectively says "retrieve n pixels" (where n is a number that varies from design to design). For a variety of reasons, this can take much less time than retrieving each pixel individually, as with a traditional CPU design. Moreover, SIMD instructions can exploit data reuse, where the same operand is used across multiple calculations, via broadcasting features. For example, multiplying several pixels by a constant scalar value can be done more efficiently by loading the scalar once and broadcasting it across a SIMD register.
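A minimal sketch of the brightness adjustment in C, using Intel's SSE2 [[intrinsic function|intrinsics]] (the function name, and the assumptions that the buffer is 16-byte aligned and a multiple of 16 bytes long, are illustrative):

<syntaxhighlight lang="c">
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* Illustrative sketch: brighten 8-bit pixel components in place,
   16 at a time. Assumes n is a multiple of 16 and that pixels is
   16-byte aligned. */
void brighten(unsigned char *pixels, size_t n, unsigned char amount)
{
    /* Broadcast the scalar adjustment into every byte lane once. */
    __m128i delta = _mm_set1_epi8((char)amount);
    for (size_t i = 0; i < n; i += 16) {
        __m128i px = _mm_load_si128((__m128i *)(pixels + i));
        /* Saturating add: components clamp at 255 instead of wrapping. */
        px = _mm_adds_epu8(px, delta);
        _mm_store_si128((__m128i *)(pixels + i), px);
    }
}
</syntaxhighlight>

The single broadcast ({{code|_mm_set1_epi8}}) and the block loads and stores correspond directly to the data reuse and block transfers described above.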
* Not all algorithms can be vectorized easily. For example, a flow-control-heavy task like code [[parsing]] may not easily benefit from SIMD; however, it is theoretically possible to vectorize comparisons and process them in batches to target maximal cache optimality, though this technique requires more intermediate state. Note: batch-pipeline systems (for example, GPUs or software rasterization pipelines) are most advantageous for cache control when implemented with SIMD intrinsics, but they are not exclusive to SIMD features. Vectorization also requires that iterations be independent of one another, so data with element-to-element dependences, such as a string of code, needs extra care. Additionally, divergent control flow, where different data lanes would follow different execution paths, can lead to underutilization of SIMD hardware. To handle such divergence, techniques like masking and predication are often employed (see the sketch after this list), but they introduce performance overhead and complexity.
* Large register files, which increase power consumption and required chip area.
* Currently, implementing an algorithm with SIMD instructions usually requires human labor; most [[compiler]]s do not generate SIMD instructions from a typical [[C (programming language)|C]] program, for instance. [[Automatic vectorization]] in compilers is an active area of computer science research. (Compare [[Vector processor|vector processing]].)
* Programming with a given SIMD instruction set can involve many low-level challenges.
*# SIMD may have restrictions on [[Data structure alignment|data alignment]]; programmers familiar with a given architecture may not expect this. Worse: the alignment may change from one revision or "compatible" processor to another.
*# Gathering data into SIMD registers and scattering it to the correct destination locations is tricky (sometimes requiring permute operations) and can be inefficient.
*# Specific instructions like rotations or three-operand addition are not available in some SIMD instruction sets.
*# Instruction sets are architecture-specific: some processors lack SIMD instructions entirely, so programmers must provide non-vectorized implementations (or different vectorized implementations) for them.
*# The early [[MMX (instruction set)|MMX]] instruction set shared a register file with the floating-point stack, which caused inefficiencies when mixing floating-point and MMX code. However, [[SSE2]] corrects this.
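The masking technique mentioned above can be sketched in C with SSE intrinsics (the function name is illustrative); both sides of the branch are effectively evaluated before the mask selects per lane:

<syntaxhighlight lang="c">
#include <xmmintrin.h>  /* SSE intrinsics */

/* Illustrative sketch: clamp negative values to zero without branching.
   The scalar loop "if (v[i] < 0) v[i] = 0;" would diverge per element;
   the SIMD version computes a per-lane mask and keeps only lanes where
   the condition holds. Assumes n is a multiple of 4. */
void clamp_negatives(float *v, int n)
{
    __m128 zero = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4) {
        __m128 x    = _mm_loadu_ps(v + i);
        __m128 keep = _mm_cmpgt_ps(x, zero); /* all-ones where x > 0 */
        x = _mm_and_ps(x, keep);             /* zero the other lanes */
        _mm_storeu_ps(v + i, x);
    }
}
</syntaxhighlight>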
 
To remedy problems 1 and 5, Cray-style [[vector processor]]s use an alternative approach: instead of exposing the sub-register-level details directly to the programmer, the instruction set abstracts at least the length (number of elements) into a runtime control register, usually named "VL" (Vector Length). The hardware then handles all alignment issues and "strip-mining" of loops. Machines with different vector sizes would be able to run the same code. LLVM calls this vector type "{{not a typo|vscale}}".{{citation needed|date=June 2021}}
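The [[RISC-V]] "V" vector extension follows this model. As a sketch using its ratified v1.0 C intrinsics (the function name is illustrative), a strip-mined loop asks the hardware on each iteration how many elements it will process:

<syntaxhighlight lang="c">
#include <riscv_vector.h>  /* RVV v1.0 intrinsics */
#include <stddef.h>

/* Illustrative sketch: c = a + b, strip-mined by the hardware.
   vsetvl returns how many elements this iteration will handle
   (at most n), so the same binary runs unchanged on machines
   with different vector register widths. */
void vec_add(float *c, const float *a, const float *b, size_t n)
{
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);  /* set vector length */
        vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl);
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl);
        __riscv_vse32_v_f32m1(c, __riscv_vfadd_vv_f32m1(va, vb, vl), vl);
        a += vl; b += vl; c += vl; n -= vl;
    }
}
</syntaxhighlight>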
 
With SIMD, an order of magnitude increase in code size is not uncommon when compared to equivalent scalar or equivalent vector code, and an order of magnitude ''or greater'' effectiveness (work done per instruction) is achievable with Vector ISAs.<ref>{{cite web |last1=Patterson |first1=David |last2=Waterman |first2=Andrew |title=SIMD Instructions Considered Harmful |url=https://www.sigarch.org/simd-instructions-considered-harmful/ |website=SIGARCH |date=18 September 2017}}</ref>
 
ARM's [[Scalable Vector Extension]] takes another approach, known in [[Flynn's taxonomy#Single instruction stream, multiple data streams (SIMD)|Flynn's Taxonomy]] as "Associative Processing", more commonly known today as [[Predication (computer architecture)#SIMD, SIMT and vector predication|"predicated" (masked)]] SIMD. This approach is not as compact as [[vector processor|vector processing]] but is still far better than non-predicated SIMD. Detailed comparative examples are given at {{section link|Vector processor|Vector instruction example}}. In addition, all versions of the ARM architecture have offered Load and Store multiple instructions, to Load or Store a block of data from a continuous block of memory into a range or non-continuous set of registers.<ref>{{Cite web |title=ARM LDR/STR, LDM/STM instructions - Programmer All |url=https://programmerall.com/article/2483661565/ |access-date=2025-04-19 |website=programmerall.com}}</ref>
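A predicated counterpart of the strip-mined loop above, sketched with ARM's SVE C intrinsics (ACLE; the function name is illustrative): no scalar tail loop is needed, because the predicate simply switches off the lanes past the end of the array.

<syntaxhighlight lang="c">
#include <arm_sve.h>  /* SVE ACLE intrinsics */
#include <stdint.h>

/* Illustrative sketch: c = a + b with per-lane predication. The
   vector width is unknown at compile time; svcntw() reports the
   number of 32-bit lanes at run time, and svwhilelt builds a
   predicate that is true only for lanes still inside the array. */
void vec_add_sve(float *c, const float *a, const float *b, int64_t n)
{
    for (int64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32(i, n);      /* per-lane mask */
        svfloat32_t va = svld1(pg, &a[i]);      /* masked loads  */
        svfloat32_t vb = svld1(pg, &b[i]);
        svst1(pg, &c[i], svadd_x(pg, va, vb));  /* masked store  */
    }
}
</syntaxhighlight>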
 
==Chronology==
{| class="wikitable"
|+ SIMD supercomputer examples excluding [[vector processor]]s
|-
! Year !! Example
|-
| 1974 || [[ILLIAC IV]] - an Array Processor comprising scalar 64-bit PEs
|-
| 1974 || [[ICL Distributed Array Processor]] (DAP)
| 1981 || [[Geometric-Arithmetic Parallel Processor]] from [[Martin Marietta]] (continued at [[Lockheed Martin]], then at [http://www.teranex.com Teranex] and [[Silicon Optix]])
|-
| 1983–1991 || [[Goodyear MPP|Massively Parallel Processor]] (MPP), from [[NASA]]/[[Goddard Space Flight Center]]
|-
| 1985 || [[Connection Machine]], models 1 and 2 (CM-1 and CM-2), from [[Thinking Machines Corporation]]
|-
| 1987–1996 || [[MasPar]] MP-1 and MP-2
|-
| 1991 || [[Zephyr DC]] from [[Wavetracer]]
|-
| 2001 || [[Xplor (Pyxsys)|Xplor]] from [[Pyxsys, Inc.]]
|}
 
==Hardware==
Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) for [[DEC Alpha|Alpha]]. SIMD instructions can be found, to one degree or another, on most CPUs, including [[IBM]]'s [[AltiVec]] and [[Signal Processing Engine]] (SPE) for [[PowerPC]], [[Hewlett-Packard]]'s (HP) [[PA-RISC]] [[Multimedia Acceleration eXtensions]] (MAX), [[Intel Corporation|Intel]]'s [[MMX (instruction set)|MMX and iwMMXt]], [[Streaming SIMD Extensions]] (SSE), [[SSE2]], [[SSE3]], [[SSSE3]] and [[SSE4|SSE4.x]], [[Advanced Micro Devices|AMD]]'s [[3DNow!]], [[ARC (processor)|ARC]]'s ARC Video subsystem, [[SPARC]]'s [[Visual Instruction Set|VIS]] and VIS2, [[Sun Microsystems|Sun]]'s [[MAJC]], [[ARM Holdings|ARM]]'s [[ARM architecture#Advanced SIMD (Neon)|Neon]] technology, [[MIPS architecture|MIPS]]' [[MDMX]] (MaDMaX) and [[MIPS-3D]]. The IBM, Sony and Toshiba co-developed [[Cell (processor)|Cell processor]]'s [[Cell (processor)#Synergistic Processing Element (SPE)|Synergistic Processing Element]] (SPE) instruction set is heavily SIMD based. [[Philips]], now [[NXP Semiconductors|NXP]], developed several SIMD processors named [[Xetal]]. The Xetal has 320 16-bit processor elements especially designed for vision tasks. Apple's M1 and M2 chips also incorporate SIMD units (ARM Neon), and their unified memory architecture lets SIMD code on the CPU share data with the GPU and Neural Engine.
 
Intel's [[AVX-512]] SIMD instructions process 512 bits of data at once.
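As a minimal illustration in C with the AVX-512F intrinsics (the function names are illustrative), a single instruction operates on sixteen single-precision floats, optionally under a per-lane mask:

<syntaxhighlight lang="c">
#include <immintrin.h>  /* AVX-512F intrinsics */

/* One instruction adds sixteen floats (16 x 32 = 512 bits). */
__m512 add16(__m512 a, __m512 b)
{
    return _mm512_add_ps(a, b);
}

/* AVX-512 also has built-in per-lane masking: lanes whose mask bit
   is clear keep the corresponding value from src. */
__m512 add16_masked(__m512 src, __mmask16 k, __m512 a, __m512 b)
{
    return _mm512_mask_add_ps(src, k, a, b);
}
</syntaxhighlight>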
[[File:SIMD cpu diagram1.svg|right|thumb|280px| The SIMD tripling of four 8-bit numbers. The CPU loads 4 numbers at once, multiplies them all in one SIMD-multiplication, and saves them all at once back to RAM. In theory, the speed can be multiplied by 4.]]
 
SIMD instructions are widely used to process 3D graphics, although modern [[Video card|graphics card]]s with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them especially useful for data processing and compression. They are also used in cryptography.<ref>[http://marc.info/?l=openssl-dev&m=108530261323715&w=2 RE: SSE2 speed], showing how SSE2 is used to implement SHA hash algorithms</ref><ref>[http://cr.yp.to/snuffle.html#speed Salsa20 speed; Salsa20 software], showing a stream cipher implemented using SSE2</ref><ref>[http://markmail.org/message/tygo74tyjagwwnp4 Subject: up to 1.4x RSA throughput using SSE2], showing RSA implemented using a non-SIMD SSE2 integer multiply instruction.</ref> The trend of general-purpose computing on GPUs ([[GPGPU]]) may lead to wider use of SIMD in the future. Recent compilers such as [[LLVM]], [[GNU Compiler Collection]] (GCC), and Intel's ICC offer aggressive auto-vectorization options. Developers can often enable these with flags like <code>-O3</code> or <code>-ftree-vectorize</code>, which guide the compiler to restructure loops for SIMD compatibility.
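For example, a loop like the following C sketch is typically auto-vectorized at <code>-O3</code> (the function name is illustrative; the {{code|restrict}} qualifiers assert that the arrays do not overlap, removing the data-dependence doubt that would otherwise block vectorization):

<syntaxhighlight lang="c">
/* Build with, e.g.:  gcc -O3 -fopt-info-vec -c saxpy.c
   (-O3 already implies -ftree-vectorize in GCC; -fopt-info-vec
   prints which loops were vectorized.) */
void saxpy(int n, float a, float *restrict y, const float *restrict x)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
</syntaxhighlight>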
 
Adoption of SIMD systems in [[personal computer]] software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Other systems, like [[MMX (instruction set)|MMX]] and [[3DNow!]], offered support for data types that were not interesting to a wide audience and had expensive context switching instructions to switch between using the [[Floating-point unit|FPU]] and MMX [[Processor register|registers]]. Compilers also often lacked support, requiring programmers to resort to [[assembly language]] coding.
 
SIMD on [[x86]] had a slow start. The introduction of [[3DNow!]] by [[Advanced Micro Devices|AMD]] and [[Streaming SIMD Extensions|SSE]] by [[Intel Corporation|Intel]] confused matters somewhat, but today the system seems to have settled down (after AMD adopted SSE) and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math [[Library (computing)|libraries]] that use SIMD instructions, and open source alternatives like [[libSIMD]], [[SIMDx86]] and [[SLEEF]] have started to appear (see also [[libm]]).<ref>{{cite web |title=SIMD library math functions |url=https://stackoverflow.com/a/36637424 |website=Stack Overflow |access-date=16 January 2020}}</ref>
 
[[Apple Inc.|Apple Computer]] had somewhat more success, even though they entered the SIMD market later than the rest. [[AltiVec]] offered a rich system and can be programmed using increasingly sophisticated compilers from [[Motorola]], [[IBM]] and [[GNU]]; therefore, assembly language programming is rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example [[iTunes]] and [[QuickTime]]. However, in 2006, Apple computers moved to Intel x86 processors. Apple's [[Application programming interface|API]]s and [[Integrated development environment|development tools]] ([[Xcode]]) were modified to support [[SSE2]] and [[SSE3]] as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and [[Freescale Semiconductor]]. Even though Apple has stopped using PowerPC processors in their products, further development of AltiVec is continued in several PowerPC and [[Power ISA]] designs from Freescale and IBM.
 
===Programmer interface===
It is common for publishers of the SIMD instruction sets to make their own [[C (programming language)|C]] and [[C++]] language extensions with [[intrinsic function]]s or special datatypes (with [[operator overloading]]) guaranteeing the generation of vector code. Intel, AltiVec, and ARM NEON provide extensions widely adopted by the compilers targeting their CPUs. (More complex operations are the task of vector math libraries.)
 
The [[GNU C Compiler]] takes the extensions a step further by abstracting them into a universal interface that can be used on any platform by providing a way of defining SIMD datatypes.<ref>{{cite web |title=Vector Extensions |url=https://gcc.gnu.org/onlinedocs/gcc/Vector-Extensions.html |website=Using the GNU Compiler Collection (GCC) |access-date=16 January 2020}}</ref> The [[LLVM]] Clang compiler also implements the feature, with an analogous interface defined in the IR.<ref>{{cite web |title=Clang Language Extensions |url=https://clang.llvm.org/docs/LanguageExtensions.html |website=Clang 11 documentation |access-date=16 January 2020}}</ref> Rust's {{code|packed_simd}} crate (and the experimental {{code|std::simd}}) uses this interface, and so does [[Swift (programming language)|Swift]] 2.0+.
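A short C sketch of the GCC vector extension (the type and function names are illustrative):

<syntaxhighlight lang="c">
/* The compiler lowers operations on this type to whatever SIMD
   instructions the target offers (SSE, NEON, AltiVec, ...), or to
   scalar code as a fallback. */
typedef float v4sf __attribute__ ((vector_size (16)));  /* 4 x float */

v4sf axpy4(v4sf x, v4sf y, float a)
{
    v4sf va = {a, a, a, a};   /* broadcast the scalar to all lanes */
    return va * x + y;        /* ordinary operators work lane-wise */
}
</syntaxhighlight>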
C++ has an experimental interface {{code|std::experimental::simd}} that works similarly to the GCC extension. LLVM's libcxx seems to implement it.{{Citation needed|date=March 2023}} For GCC and libstdc++, a wrapper library that builds on top of the GCC extension is available.<ref>{{cite web |title=VcDevel/std-simd |url=https://github.com/VcDevel/std-simd |publisher=VcDevel |date=6 August 2020}}</ref>
 
[[Microsoft Corporation|Microsoft]] added SIMD to [[.NET Core|.NET]] in RyuJIT.<ref>{{cite web|url=https://devblogs.microsoft.com/dotnet/ryujit-the-next-generation-jit-compiler-for-net|title=RyuJIT: The next-generation JIT compiler for .NET|date=30 September 2013 }}</ref> The {{code|System.Numerics.Vector}} package, available on NuGet, implements SIMD datatypes.<ref>{{cite web|url=https://devblogs.microsoft.com/dotnet/the-jit-finally-proposed-jit-and-simd-are-getting-married|title=The JIT finally proposed. JIT and SIMD are getting married|date=7 April 2014 }}</ref> Java also has a new proposed API for SIMD instructions available in [[OpenJDK]] 17 in an incubator module.<ref>{{cite web|url=https://openjdk.java.net/jeps/338|title=JEP 338: Vector API}}</ref> It also falls back safely to simple loops on unsupported CPUs.
 
Instead of providing an SIMD datatype, compilers can also be hinted to auto-vectorize some loops, potentially taking some assertions about the lack of data dependency. This is not as flexible as manipulating SIMD variables directly, but is easier to use. [[OpenMP]] 4.0+ has a {{code|#pragma omp simd}} hint.<ref>{{cite web |title=SIMD Directives |url=https://www.openmp.org/spec-html/5.0/openmpsu42.html |website=www.openmp.org}}</ref> This OpenMP interface has replaced a wide set of nonstandard extensions, including [[Cilk]]'s {{code|#pragma simd}},<ref>{{cite web |title=Tutorial pragma simd |url=https://www.cilkplus.org/tutorial-pragma-simd |website=CilkPlus |date=18 July 2012 |access-date=9 August 2020 |archive-date=4 December 2020 |archive-url=https://web.archive.org/web/20201204055745/https://www.cilkplus.org/tutorial-pragma-simd |url-status=dead }}</ref> GCC's {{code|#pragma GCC ivdep}}, and many more.<ref>{{cite web|url=https://www.openmp.org/wp-content/uploads/OpenMP_SC20_Loop_Transformations.pdf|title=OMP5.1: Loop Transformations|first=Michael|last=Kruse}}</ref>
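A minimal C example of the OpenMP hint (the function name is illustrative):

<syntaxhighlight lang="c">
#include <stddef.h>

/* The simd directive asserts that iterations are safe to run in SIMD
   lanes. GCC and Clang honor the pragma without the OpenMP runtime
   when compiling with -fopenmp-simd. */
void scale(float *x, float s, size_t n)
{
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        x[i] *= s;
}
</syntaxhighlight>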
 
===SIMD multi-versioning===
* Library multi-versioning (LMV): the entire [[Library (computing)|programming library]] is duplicated for many instruction set extensions, and the operating system or the program decides which one to load at run-time.
 
FMV, manually coded in assembly language, is quite commonly used in a number of performance-critical libraries such as glibc and libjpeg-turbo. [[Intel C++ Compiler]], [[GNU Compiler Collection]] since GCC 6, and [[Clang]] since clang 7 allow for a simplified approach, with the compiler taking care of function duplication and selection. GCC and clang require explicit {{code|target_clones}} labels in the code to "clone" functions,<ref>{{cite web |title=Function multi-versioning in GCC 6 |url=https://lwn.net/Articles/691932/ |website=lwn.net |date=22 June 2016 }}</ref> while ICC does so automatically (under the command-line option {{code|/Qax}}). The [[Rust programming language]] also supports FMV. The setup is similar to GCC and Clang in that the code defines what instruction sets to compile for, but cloning is manually done via inlining.<ref>{{cite web |title=2045-target-feature |url= https://rust-lang.github.io/rfcs/2045-target-feature.html |website=The Rust RFC Book}}</ref>
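A sketch of FMV in C with the {{code|target_clones}} attribute (the listed targets and the function name are illustrative):

<syntaxhighlight lang="c">
/* GCC/Clang emit one clone of this function per listed target plus a
   resolver that picks the best clone for the running CPU. */
__attribute__((target_clones("avx2", "sse4.2", "default")))
void mul_add(float *restrict d, const float *restrict a,
             const float *restrict b, int n)
{
    for (int i = 0; i < n; i++)
        d[i] += a[i] * b[i];
}
</syntaxhighlight>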
 
As using FMV requires code modification on GCC and Clang, vendors more commonly use library multi-versioning: this is easier to achieve as only compiler switches need to be changed. [[Glibc]] supports LMV and this functionality is adopted by the Intel-backed Clear Linux project.<ref name=clear>{{cite web |title=Transparent use of library packages optimized for Intel® architecture |url=https://clearlinux.org/news-blogs/transparent-use-library-packages-optimized-intel-architecture |website=Clear Linux* Project |access-date=8 September 2019 |language=en}}</ref>
Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction without any overhead. This is similar to C and C++ intrinsics. Benchmarks for [[4×4 matrix|4×4]] [[matrix multiplication]], [[3D vertex transformation]], and [[Mandelbrot set]] visualization show near 400% speedup compared to scalar code written in Dart.
 
McCutchan's work on Dart, now called SIMD.js, has been adopted by [[ECMAScript]] and Intel announced at IDF 2013 that they were implementing McCutchan's specification for both [[V8 (JavaScript engine)|V8]] and [[SpiderMonkey (JavaScript engine)|SpiderMonkey]].<ref>{{cite web |title=SIMD in JavaScript |url=https://01.org/node/1495 |website=01.org |date=8 May 2014}}</ref> However, by 2017, SIMD.js was taken out of the [[ECMAScript]] standard queue in favor of pursuing a similar interface in [[WebAssembly]].<ref>{{cite web |title=tc39/ecmascript_simd: SIMD numeric type for EcmaScript. |url=https://github.com/tc39/ecmascript_simd/ |website=GitHub |publisher=Ecma TC39 |access-date=8 September 2019 |date=22 August 2019}}</ref> Support for SIMD was added to the WebAssembly 2.0 specification, which was finished in 2022 and became official in December 2024.<ref>{{cite web |url=https://webassembly.org/news/2025-03-20-wasm-2.0/ |title=Wasm 2.0 Completed - WebAssembly}}</ref> LLVM's auto-vectorization, when compiling C or C++ to WebAssembly, can target WebAssembly SIMD automatically, while SIMD intrinsics are also available.<ref>{{cite web |title=Using SIMD with WebAssembly |url=https://emscripten.org/docs/porting/simd.html |website=Emscripten 4.0.11-git (dev) documentation}}</ref>
 
Emscripten, Mozilla's C/C++-to-JavaScript compiler, with extensions can enable compilation of C++ programs that make use of SIMD intrinsics or GCC-style vector code to the SIMD API of JavaScript, resulting in equivalent speedups compared to scalar code.<ref>{{cite web |title=SIMD in JavaScript via C++ and Emscripten |first1=Peter |last1=Jensen |first2=Ivan |last2=Jibaja |first3=Ningxin |last3=Hu |first4=Dan |last4=Gohman |first5=John |last5=McCutchan |year=2015 |format=PDF |url=https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx3cG12cDIwMTV8Z3g6NTkzYWE2OGNlNDAyMTRjOQ}}</ref> It also supports (and now prefers) the WebAssembly 128-bit SIMD proposal.<ref>{{cite web |title=Porting SIMD code targeting WebAssembly |url=https://emscripten.org/docs/porting/simd.html |website=Emscripten 1.40.1 documentation}}</ref>
 
==Commercial applications==
It has generally proven difficult to find sustainable commercial applications for SIMD-only processors.
 
One that has had some measure of success is the [[Geometric-Arithmetic Parallel Processor|GAPP]], which was developed by [[Lockheed Martin]] and taken to the commercial sector by their spin-off [[Teranex]]. The GAPP's recent incarnations have become a powerful tool in real-time [[digital image processing|video processing]] applications like conversion between various video standards and frame rates ([[NTSC]] to/from [[PAL]], NTSC to/from [[high-definition television]] (HDTV) formats, etc.), [[deinterlacing]], image [[noise reduction]], adaptive [[video compression]], and image enhancement.
 
A more ubiquitous application for SIMD is found in [[video game]]s: nearly every modern [[video game console]] since [[History of video game consoles (sixth generation)|1998]] has incorporated a SIMD processor somewhere in its architecture. The [[PlayStation 2]] was unusual in that one of its vector-float units could function as an autonomous [[digital signal processor]] (DSP) executing its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. [[Microsoft]]'s [[Direct3D]] 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.
 
A later processor that used vector processing is the [[Cell (processor)|Cell processor]] used in the PlayStation 3, which was developed by [[IBM]] in cooperation with [[Toshiba]] and [[Sony]]. It uses a number of SIMD processors (a [[non-uniform memory access]] (NUMA) architecture, each with independent [[cache memory|local store]] and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.
 
Ziilabs produced an SIMD type processor for use on mobile devices, such as media players and mobile phones.<ref>{{cite web |url=https://secure.ziilabs.com/products/processors/zms05.aspx |title=ZiiLABS ZMS-05 ARM 9 Media Processor |website=ZiiLabs |access-date=2010-05-24 |url-status=dead |archive-url=https://web.archive.org/web/20110718153716/https://secure.ziilabs.com/products/processors/zms05.aspx |archive-date=2011-07-18 }}</ref>
 
Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. [[ClearSpeed]]'s CSX600 (2004) has 96 cores each with two double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect [[Bill Dally]]. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a [[MIPS architecture|MIPS]] CPU.
 
==See also==
* [[Instruction set architecture]]
* [[Flynn's taxonomy]]
* [[SIMD within a register]] (SWAR)
* [[SPMD|Single program, multiple data]] (SPMD)
* [[OpenCL]]
 
* [http://software.intel.com/en-us/articles/optimizing-the-rendering-pipeline-of-animated-models-using-the-intel-streaming-simd-extensions Article about Optimizing the Rendering Pipeline of Animated Models Using the Intel Streaming SIMD Extensions]
* [https://web.archive.org/web/20130921070044/http://www.yeppp.info/ "Yeppp!": cross-platform, open-source SIMD library from Georgia Tech]
* [https://computing.llnl.gov/tutorials/parallel_comp/ Introduction to Parallel Computing from Lawrence Livermore National Laboratory] {{Webarchive|url=https://web.archive.org/web/20130610122229/https://computing.llnl.gov/tutorials/parallel_comp/ |date=2013-06-10 }}
* {{GitHub|simd-everywhere/simde}}: A portable implementation of platform-specific intrinsics for other platforms (e.g. SSE intrinsics for ARM NEON), using C/C++ headers