{{Short description|Property of an intermediate representation in a compiler}}
In [[compiler]] design, '''static single assignment form''' (often abbreviated as '''SSA form''' or simply '''SSA''') is a property of an [[intermediate representation]] (IR) that requires each variable to be assigned exactly once and defined before it is used.
There are efficient algorithms for converting programs into SSA form. To convert to SSA, existing variables in the original IR are split into versions, with the new variables typically indicated by the original name with a subscript, so that every definition gets its own version. Additional statements that assign to new versions of variables may also need to be introduced at the join points of two control flow paths. Converting from SSA form to machine code is also efficient.
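For illustration, the following straight-line fragment (written as Python, with arbitrary variable names and numeric suffixes standing in for subscripts) shows a variable assigned twice and the result of renaming it into SSA form; no Φ function is needed because there is no join of control-flow paths:

<syntaxhighlight lang="python">
# A straight-line fragment in which x is assigned twice (not in SSA form).
x = 1
x = x + 2
y = x * 3

# The same fragment after SSA renaming: every definition gets its own
# version (numeric suffixes stand in for the usual subscripts), so each
# name is assigned exactly once.
x1 = 1
x2 = x1 + 2
y1 = x2 * 3
</syntaxhighlight>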
SSA makes numerous analyses needed for optimizations easier to perform, such as determining [[use-define chain]]s, because when looking at a use of a variable there is only one place where that variable may have received a value. Most optimizations can be adapted to preserve SSA form, so that one optimization can be performed after another with no additional analysis. SSA-based optimizations are usually more efficient and more powerful than their non-SSA counterparts.
In [[functional language]] compilers, such as those for [[Scheme (programming language)|Scheme]] and [[ML programming language|ML]], [[continuation-passing style]] (CPS) is generally used. SSA is formally equivalent to a well-behaved subset of CPS excluding non-local control flow, so optimizations and transformations formulated in terms of one generally apply to the other. Using CPS as the intermediate representation is more natural for higher-order functions and interprocedural analysis. CPS also easily encodes [[call/cc]], whereas SSA does not.<ref name="Kelsey">{{cite book
|first1=Richard A. |last1=Kelsey
|title=Papers from the 1995 ACM SIGPLAN workshop on Intermediate representations |chapter=A correspondence between continuation passing style and static single assignment form |year=1995 |pages=13–22
|isbn=0897917545 |doi=10.1145/202529.202532 |s2cid=6207179 |chapter-url=https://www.cs.purdue.edu/homes/suresh/502-Fall2008/papers/kelsey-ssa-cps.pdf}}</ref>
== History ==
SSA was developed in the 1980s by several researchers at [[International Business Machines|IBM]]. Kenneth Zadeck, a key member of the team, moved to Brown University as development continued.{{sfn|Rastello|Tichadou|2022|loc=sec. 1.4}}<ref name=Zadeck>{{cite conference|title=The Development of Static Single Assignment Form|url=https://compilers.cs.uni-saarland.de/ssasem/talks/Kenneth.Zadeck.pdf|conference=Static Single-Assignment Form Seminar|first=Kenneth|last=Zadeck|conference-url=https://compilers.cs.uni-saarland.de/ssasem/|___location=Autrans, France|date=April 2009}}</ref> A 1986 paper introduced birthpoints, identity assignments, and variable renaming such that variables had a single static assignment.<ref>{{cite book |last1=Cytron |first1=Ron |last2=Lowry |first2=Andy |last3=Zadeck |first3=F. Kenneth |title=Proceedings of the 13th ACM SIGACT-SIGPLAN symposium on Principles of programming languages - POPL '86 |chapter=Code motion of control structures in high-level languages |date=1986 |pages=70–85 |doi=10.1145/512644.512651|s2cid=9099471 }}</ref> A subsequent 1987 paper by [[Jeanne Ferrante]] and Ronald Cytron<ref>{{cite conference |last1=Cytron |first1=Ronald Kaplan |first2=Jeanne |last2=Ferrante |title=What's in a name? Or, the value of renaming for parallelism detection and storage allocation |conference=International Conference on Parallel Processing, ICPP'87 1987 |pages=19–27}}</ref> proved that the renaming done in the previous paper removes all false dependencies for scalars.<ref name=Zadeck/> In 1988, Barry Rosen, [[Mark N. Wegman]], and Kenneth Zadeck replaced the identity assignments with Φ-functions, introduced the name "static single-assignment form", and demonstrated a now-common SSA optimization.<ref name="original">{{cite book|author1 = Barry Rosen| author2 = Mark N. Wegman|author3 = F. Kenneth Zadeck| title = Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88| chapter = Global value numbers and redundant computations| chapter-url=https://www.cs.wustl.edu/~cytron/cs531/Resources/Papers/valnum.pdf|year=1988| pages = 12–27| doi = 10.1145/73560.73562| isbn = 0-89791-252-7}}</ref> The name Φ-function was chosen by Rosen to be a more publishable version of "phony function".<ref name=Zadeck/> Alpern, Wegman, and Zadeck presented another optimization, but using the name "static single assignment".<ref>{{cite book |last1=Alpern |first1=B. |last2=Wegman |first2=M. N. |last3=Zadeck |first3=F. K. |title=Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88 |chapter=Detecting equality of variables in programs |date=1988 |pages=1–11 |doi=10.1145/73560.73561|isbn=0897912527 |s2cid=18384941 }}</ref> Finally, in 1989, Rosen, Wegman, Zadeck, Cytron, and Ferrante found an efficient means of converting programs to SSA form.<ref name="Cytron_1991">{{cite journal
|title=Efficiently computing static single assignment form and the control dependence graph
|author1=Cytron, Ron |author2=Ferrante, Jeanne |author3=Rosen, Barry K. |author4=Wegman, Mark N. |author5=Zadeck, F. Kenneth |name-list-style=amp |journal=ACM Transactions on Programming Languages and Systems |volume=13 |year=1991 |pages=451–490 |url=http://www.cs.utexas.edu/~pingali/CS380C/2010/papers/ssaCytron.pdf |issue=4
|doi=10.1145/115372.115320 |citeseerx=10.1.1.100.6361 |s2cid=13243943 }}</ref>
==Benefits==
[[Compiler optimization]] algorithms that are either enabled or strongly enhanced by the use of SSA include:
* [[Constant propagation]] – conversion of computations from runtime to compile time, e.g. treating <code>a = 3 * 4 + 5;</code> as if it were <code>a = 17</code>
* [[Value range propagation]]<ref>[http://llvm.org/devmtg/2007-05/05-Lewycky-Predsimplify.pdf value range propagation]</ref> – precompute the potential range of values a calculation could produce, allowing branch predictions to be created in advance
* [[Sparse conditional constant propagation]] – range-check some values, allowing tests to predict the most likely branch
Φ functions are not implemented as machine operations on most machines. A compiler can implement a Φ function by inserting "move" operations at the end of every predecessor block. In the example above, the compiler might insert a move from <var>y</var><sub>1</sub> to <var>y</var><sub>3</sub> at the end of the middle-left block and a move from <var>y</var><sub>2</sub> to <var>y</var><sub>3</sub> at the end of the middle-right block. These move operations might not end up in the final code based on the compiler's [[register allocation]] procedure. However, this approach may not work when simultaneous operations are speculatively producing inputs to a Φ function, as can happen on [[wide-issue]] machines. Typically, a wide-issue machine has a selection instruction used in such situations by the compiler to implement the Φ function.
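The following Python sketch illustrates this lowering for the example's <var>y</var><sub>3</sub> = Φ(<var>y</var><sub>1</sub>, <var>y</var><sub>2</sub>); the function, condition, and input names are placeholders rather than part of any particular compiler:

<syntaxhighlight lang="python">
def merge_example(a, b, take_left):
    # Lowering of  y3 = Φ(y1, y2)  by inserting "move" operations at the
    # end of each predecessor block; a, b and take_left are placeholders.
    if take_left:      # middle-left block
        y1 = a + b
        y3 = y1        # move inserted for the Φ function
    else:              # middle-right block
        y2 = a - b
        y3 = y2        # move inserted for the Φ function
    return y3          # join block: later uses read the single name y3
</syntaxhighlight>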
===Computing minimal SSA using dominance frontiers===
Dominance frontiers define the points at which Φ functions are needed. In the above example, when control is passed to node 4, the definition of <code>result</code> used depends on whether control was passed from node 2 or 3. Φ functions are not needed for variables defined in a dominator, as there is only one possible definition that can apply.
There is an efficient algorithm for finding the dominance frontier of each node. This algorithm was originally described in Cytron et al. (1991).<ref name="Cytron_1991"/>
Keith D. Cooper, Timothy J. Harvey, and Ken Kennedy of [[Rice University]] describe an algorithm in their paper titled ''A Simple, Fast Dominance Algorithm'':<ref name="Cooper_2001">{{cite tech report
|title=A Simple, Fast Dominance Algorithm |id=Rice University, CS Technical Report 06-33870
|author1=Cooper, Keith D. |author2=Harvey, Timothy J. |author3=Kennedy, Ken |year=2001}}</ref>
The algorithm visits each node ''b'' that has two or more predecessors; for each such predecessor ''p'', it walks up the dominator tree from ''p'' until it reaches the immediate dominator of ''b'', adding ''b'' to the dominance frontier of every node visited along the way.
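Assuming the immediate dominators and predecessor lists have already been computed, the computation can be sketched in Python as follows (the data-structure names are illustrative rather than part of the published algorithm):

<syntaxhighlight lang="python">
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers in the manner of Cooper, Harvey and Kennedy.

    preds maps every node to the list of its immediate predecessors, and
    idom maps every node except the entry to its immediate dominator;
    both are assumed to have been computed already.
    """
    frontiers = {node: set() for node in preds}
    for b in preds:
        if len(preds[b]) >= 2:  # only join points contribute frontier entries
            for p in preds[b]:
                runner = p
                while runner != idom[b]:
                    frontiers[runner].add(b)
                    runner = idom[runner]  # walk up the dominator tree
    return frontiers
</syntaxhighlight>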
==Variations that reduce the number of Φ functions==
"Minimal" SSA inserts the minimal number of Φ functions required to ensure that each name is assigned a value exactly once and that each reference (use) of a name in the original program can still refer to a unique name.
However, some of these Φ functions could be ''[[dead code elimination|dead]]''.
===Pruned SSA===
Pruned SSA form is based on a simple observation: Φ functions are only needed for variables that are "live" after the Φ function. (Here, "live" means that the value is used along some path that begins at the Φ function in question.) If a variable is not live, the result of the Φ function cannot be used and the assignment by the Φ function is dead.
Construction of pruned SSA form uses [[live-variable analysis|live-variable information]] in the Φ function insertion phase to decide whether a given Φ function is needed.
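The insertion phase can be sketched as a worklist over iterated dominance frontiers with an added liveness test; in the following Python sketch, the data structures (<code>defsites</code>, <code>dominance_frontiers</code>, <code>live_in</code>) are assumptions of the illustration rather than part of any particular compiler:

<syntaxhighlight lang="python">
def place_phis_pruned(defsites, dominance_frontiers, live_in):
    """Place Φ functions only where the variable is live (pruned SSA).

    defsites maps each variable to the set of blocks that define it,
    dominance_frontiers maps a block to its dominance frontier, and
    live_in maps a block to the set of variables live on entry to it;
    all three are assumed to have been computed beforehand.
    """
    phis = {}  # block -> set of variables that need a Φ function there
    for var, def_blocks in defsites.items():
        worklist = list(def_blocks)
        while worklist:
            block = worklist.pop()
            for frontier_block in dominance_frontiers[block]:
                if var in phis.get(frontier_block, set()):
                    continue  # a Φ for var is already placed here
                if var not in live_in[frontier_block]:
                    continue  # pruned SSA: the Φ would be dead, skip it
                phis.setdefault(frontier_block, set()).add(var)
                # the Φ itself defines var, so its block may require
                # further Φ functions downstream
                if frontier_block not in def_blocks:
                    worklist.append(frontier_block)
    return phis
</syntaxhighlight>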
Another possibility is to treat pruning as a [[dead-code elimination]] problem.
===Semi-pruned SSA===
Semi-pruned SSA form<ref>Briggs, Preston; Cooper, Keith D.; Harvey, Timothy J.; Simpson, L. Taylor (1998). "Practical Improvements to the Construction and Destruction of Static Single Assignment Form".</ref> is based on the observation that Φ functions are only needed for variables that are live across basic-block boundaries; during SSA construction, Φ functions for "block-local" variables (those never live across a block boundary) are omitted.
Computing the set of block-local variables is a simpler and faster procedure than full live-variable analysis, making semi-pruned SSA form more efficient to compute than pruned SSA form.
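The computation amounts to one pass over each block that collects every name read before it is defined locally, i.e. the names that are ''not'' block-local and therefore remain Φ candidates; the instruction representation in the following Python sketch is an assumption of the illustration:

<syntaxhighlight lang="python">
def non_block_local_names(blocks):
    """Return the names that are live across some basic-block boundary.

    Semi-pruned SSA construction considers only these names as Φ
    candidates.  blocks maps a block label to an ordered list of
    (uses, definition) pairs, where uses is the list of names an
    instruction reads and definition is the name it writes (or None).
    """
    non_local = set()
    for instructions in blocks.values():
        killed = set()  # names already defined earlier in this block
        for uses, definition in instructions:
            for name in uses:
                if name not in killed:
                    non_local.add(name)  # read before any local definition
            if definition is not None:
                killed.add(definition)
    return non_local
</syntaxhighlight>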
===Block arguments===
Block arguments are an alternative to Φ functions that is representationally identical but in practice can be more convenient during optimization. Blocks are named and take a list of block arguments, notated as function parameters. When calling a block, the block arguments are bound to specified values. [[MLton]], [[Swift (programming language)|Swift]] SIL, and LLVM [[MLIR (software)|MLIR]] use block arguments.
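The idea can be illustrated by modelling basic blocks as Python functions whose parameters play the role of block arguments; a branch to a block becomes a call that binds those arguments, taking the place of a Φ function (all names below are arbitrary):

<syntaxhighlight lang="python">
# Basic blocks modelled as functions; their parameters act as block arguments.
def join_block(y3):        # the block argument y3 replaces y3 = Φ(y1, y2)
    return y3 * 2

def left_block(a, b):
    y1 = a + b
    return join_block(y1)  # branch to join_block, binding y3 to y1

def right_block(a, b):
    y2 = a - b
    return join_block(y2)  # branch to join_block, binding y3 to y2
</syntaxhighlight>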
==Converting out of SSA form==
SSA form is not normally used for direct execution (although it is possible to interpret SSA<ref>{{cite book
|chapter=Interpreting programs in static single assignment form |year=2004 |last=von Ronne |first=Jeffery |author2=Ning Wang |author3=Michael Franz |title=Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators - IVME '04 |page=23 |doi=10.1145/1059579.1059585 |isbn=1581139098 |s2cid=451410 |url=https://escholarship.org/uc/item/98n3s5r5 |chapter-url=http://dl.acm.org/citation.cfm?doid=1059579.1059585 }}</ref>), and it is frequently used "on top of" another IR with which it remains in direct correspondence.
Performing optimizations on SSA form usually leads to entangled SSA-webs, meaning there are Φ instructions whose operands do not all have the same root operand. In such cases [[graph coloring|color-out]] algorithms are used to come out of SSA. Naive algorithms introduce a copy along each predecessor path that caused a Φ source to have a different root symbol than the Φ destination. There are multiple algorithms for coming out of SSA with fewer copies; most use interference graphs, or some approximation of them, to do copy coalescing.<ref>{{cite journal |last1=Boissinot |first1=Benoit |last2=Darte |first2=Alain |last3=Rastello |first3=Fabrice |last4=Dinechin |first4=Benoît Dupont de |last5=Guillon |first5=Christophe |title=Revisiting Out-of-SSA Translation for Correctness, Code Quality, and Efficiency |journal=HAL-Inria Cs.DS |date=2008 |pages=14 |url=https://hal.inria.fr/inria-00349925 |language=en}}</ref>
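A naive color-out pass can be sketched as follows in Python; the representation of blocks, Φ records, and predecessor lists is an assumption of the illustration, and a real compiler would place each copy before the predecessor's terminating branch rather than simply appending it:

<syntaxhighlight lang="python">
def naive_out_of_ssa(blocks, phis, preds):
    """Replace every Φ with copies in its predecessor blocks (naive color-out).

    blocks maps a label to its list of instructions, phis maps a label to
    (dest, sources) records whose i-th source flows in from the i-th
    predecessor, and preds maps a label to its ordered predecessor labels.
    """
    for label, phi_records in phis.items():
        for dest, sources in phi_records:
            for pred_label, source in zip(preds[label], sources):
                if source != dest:  # copy only where the names differ
                    # Appended for simplicity; real compilers insert the copy
                    # before the predecessor block's terminating branch.
                    blocks[pred_label].append(("copy", dest, source))
    return blocks
</syntaxhighlight>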
==Compilers using SSA form==
=== Open-source ===
* The [[LLVM]] Compiler Infrastructure uses SSA form for all scalar register values (everything except memory) in its primary code representation. SSA form is only eliminated once register allocation occurs, late in the compile process (often at link time).
* Since version 4 (released in April 2005), [[GNU Compiler Collection|GCC]], the GNU Compiler Collection, makes extensive use of SSA. The [[front and back ends|frontends]] generate "[[GIMPLE#GENERIC and GIMPLE|GENERIC]]" code that is then converted into "[[GIMPLE#GENERIC and GIMPLE|GIMPLE]]" code by the "gimplifier". High-level optimizations are then applied on the SSA form of "GIMPLE". The resulting optimized intermediate code is then translated into [[Register Transfer Language|RTL]], on which low-level optimizations are applied. The architecture-specific [[Front and back ends|backend]]s finally turn RTL into [[assembly language]].
* [[IBM]]'s open source adaptive [[Java virtual machine]], [[Jikes RVM]], uses extended Array SSA, an extension of SSA that allows analysis of scalars, arrays, and object fields in a unified framework. Extended Array SSA analysis is only enabled at the maximum optimization level, which is applied to the most frequently executed portions of code.
* [[Mono (software)|Mono]] uses SSA in its JIT compiler called Mini.
* [[Portable.NET]] uses SSA in its JIT compiler.
* The [[Mozilla]] [[Firefox]] [[SpiderMonkey]] JavaScript engine uses SSA-based IR.<ref>{{cite web|url=https://wiki.mozilla.org/IonMonkey/Overview|title=IonMonkey Overview}}</ref>
* The [[Chromium (web browser)|Chromium]] [[V8 JavaScript engine]] implements SSA in its Crankshaft compiler infrastructure as [https://blog.chromium.org/2010/12/new-crankshaft-for-v8.html announced in December 2010]
* [[PyPy]] uses a linear SSA representation for traces in its JIT compiler.
* The [[Android Runtime]]<ref>{{cite video |title=The Evolution of ART - Google I/O 2016 |time=3m47s |url=https://www.youtube.com/watch?v=fwMM6g7wpQ8 |date=25 May 2016 |work=Google}}</ref> and the [[Dalvik (software)|Dalvik Virtual Machine]] use SSA.<ref>{{cite web |title=JIT through the ages |url=http://www.cs.columbia.edu/~aho/cs6998/reports/12-12-11_Ramanan_JIT.pdf |last=Ramanan |first=Neeraja | date=12 Dec 2011 }}</ref>
* The Standard ML compiler [[MLton]] uses SSA in one of its intermediate languages.
* [[LuaJIT]] makes heavy use of SSA-based optimizations.<ref>{{cite web|url=http://wiki.luajit.org/Optimizations|title=Bytecode Optimizations|publisher=the LuaJIT project}}</ref>
* The [[PHP]] and [[Hack (programming language)|Hack]] compiler [[HHVM]] uses SSA in its IR.<ref>{{cite web|url=https://github.com/facebook/hhvm/blob/master/hphp/doc/ir.specification|title=HipHop Intermediate Representation (HHIR)|website=[[GitHub]]|date=30 October 2021}}</ref>
* libFirm, a library for use as the [[Compiler#Three-stage compiler structure|middle and back ends of a compiler]], uses SSA form for all scalar register values until code generation by use of an SSA-aware register allocator.<ref>{{cite web|url=http://pp.ipd.kit.edu/firm/|title=Firm - Optimization and Machine Code Generation}}</ref>
* [[Go (programming language)|Go]] uses SSA in its compiler (1.7: for x86-64 architecture only; 1.8: for all supported architectures).<ref>{{Cite web|url=https://golang.org/doc/go1.7#compiler|title=Go 1.7 Release Notes - The Go Programming Language|website=golang.org|access-date=2016-08-17}}</ref><ref>{{Cite web|url=https://golang.org/doc/go1.8#compiler|title=Go 1.8 Release Notes - The Go Programming Language|website=golang.org|access-date=2017-02-17}}</ref>
=== Commercial ===
* [[Oracle Corporation|Oracle]]'s [[HotSpot (virtual machine)|HotSpot Java Virtual Machine]] uses an SSA-based intermediate language in its JIT compiler.<ref>{{cite web|url=http://www.oracle.com/technetwork/java/whitepaper-135217.html|publisher=Oracle Corporation|title=The Java HotSpot Performance Engine Architecture}}</ref>
* The Microsoft [[Visual C++]] compiler backend, available in [[Microsoft Visual Studio]] 2015 Update 3, uses SSA.<ref>{{cite web|url=https://blogs.msdn.microsoft.com/vcblog/2016/05/04/new-code-optimizer|title=Introducing a new, advanced Visual C++ code optimizer|date=4 May 2016}}</ref>
* [[SPIR-V]], the shading language standard for the [[Vulkan (API)|Vulkan graphics API]] and [[compute kernel|kernel language]] for [[OpenCL]] compute API, is an SSA representation.<ref>{{cite web|url=https://www.khronos.org/registry/spir-v/specs/1.0/SPIRV.pdf|title=SPIR-V spec}}</ref>
* The IBM family of XL compilers, which include [[IBM XL C/C++ Compilers|C, C++]] and Fortran.<ref>{{cite journal|url=https://www.cs.rice.edu/~vs3/PDF/ibmjrd97.pdf|title=Automatic selection of high-order transformations in the IBM XL FORTRAN compilers|first=V.|last=Sarkar|journal=[[IBM Journal of Research and Development]]|volume=41|issue=3|pages=233–264|date=May 1997|publisher=IBM|doi=10.1147/rd.413.0233 }}</ref>
* Various [[Mesa (computer graphics)|Mesa]] drivers via NIR, an SSA representation for shading languages.<ref>{{Cite web|url=https://lists.freedesktop.org/archives/mesa-dev/2014-December/072761.html|title=Reintroducing NIR, a new IR for mesa|last=Ekstrand|first=Jason|date=16 December 2014}}</ref>
* NVIDIA [[CUDA]]<ref>{{cite journal|url=https://www.researchgate.net/publication/235605681|title=CUDA: Compiling and optimizing for a GPU platform|date=2012 |doi=10.1016/j.procs.2012.04.209 |last1=Chakrabarti |first1=Gautam |last2=Grover |first2=Vinod |last3=Aarts |first3=Bastiaan |last4=Kong |first4=Xiangyun |last5=Kudlur |first5=Manjunath |last6=Lin |first6=Yuan |last7=Marathe |first7=Jaydeep |last8=Murphy |first8=Mike |last9=Wang |first9=Jian-Zhong |journal=Procedia Computer Science |volume=9 |pages=1910–1919 |doi-access=free }}</ref>
* [[WebKit]] uses SSA in its JIT compilers.<ref>{{Cite web|url=https://webkit.org/blog/3362/introducing-the-webkit-ftl-jit/|title=Introducing the WebKit FTL JIT|date=13 May 2014}}</ref><ref>{{Cite web|url=https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/|title=Introducing the B3 JIT Compiler|date=15 February 2016}}</ref>
* [[Swift (programming language)|Swift]] defines its own SSA form above LLVM IR, called SIL (Swift Intermediate Language).<ref>{{Cite web|url=https://github.com/apple/swift/blob/master/docs/SIL.rst|title=Swift Intermediate Language (GitHub)|website=[[GitHub]]|date=30 October 2021}}</ref><ref>{{Cite web|url=https://www.youtube.com/watch?v=Ntj8ab-5cvE |archive-url=https://ghostarchive.org/varchive/youtube/20211221/Ntj8ab-5cvE |archive-date=2021-12-21 |url-status=live|title=Swift's High-Level IR: A Case Study of Complementing LLVM IR with Language-Specific Optimization, LLVM Developers Meetup 10/2015|website=[[YouTube]]}}{{cbignore}}</ref>
=== Research and abandoned ===
* The [[Erlang (programming language)|Erlang]] compiler was rewritten in OTP 22.0 to "internally use an intermediate representation based on Static Single Assignment (SSA)", with plans for further optimizations built on top of SSA in future releases.<ref>{{Cite web|url=http://www.erlang.org/news/132|title=OTP 22.0 Release Notes}}</ref>
* The ETH [[Oberon-2]] compiler was one of the first public projects to incorporate "GSA", a variant of SSA.
* The [[Open64]] compiler uses SSA form in its global scalar optimizer, though the code is brought into SSA form before and taken out of SSA form afterwards. Open64 uses extensions to SSA form to represent memory in SSA form as well as scalar values.
* In 2002, [http://citeseer.ist.psu.edu/721276.html researchers modified] IBM's JikesRVM (named Jalapeño at the time) to run both standard Java [[bytecode]] and typesafe SSA ([[SafeTSA]]) bytecode class files, and demonstrated significant performance benefits of using the SSA bytecode.
* [http://jackcc.sf.net jackcc] is an open-source compiler for the academic instruction set Jackal 3.0. It uses a simple 3-operand code with SSA for its intermediate representation. As an interesting variant, it replaces Φ functions with a so-called SAME instruction, which instructs the register allocator to place the two live ranges into the same physical register.
* The Illinois Concert Compiler circa 1994<ref>{{cite web|url=http://www-csag.ucsd.edu/projects/concert.html|title=Illinois Concert Project|archive-url=https://web.archive.org/web/20140313140417/http://www-csag.ucsd.edu/projects/concert.html|archive-date=2014-03-13|url-status=dead}}</ref> used a variant of SSA called SSU (Static Single Use) which renames each variable when it is assigned a value, and in each conditional context in which that variable is used; essentially the static single information form mentioned above. The SSU form is documented in [http://www-csag.ucsd.edu/papers/jplevyak-thesis.ps John Plevyak's Ph.D Thesis].
* The COINS compiler uses SSA form optimizations as explained [https://web.archive.org/web/20040531024854/http://www.is.titech.ac.jp/~sassa/coins-www-ssa/english/ here].
* Reservoir Labs' R-Stream compiler supports non-SSA (quad list), SSA and SSI (Static Single Information<ref>{{cite tech report |url=https://cscott.net/Publications/ssi.pdf |title=Static Single Information Form |last1=Ananian |first1=C. Scott |last2=Rinard |first2=Martin |year=1999|citeseerx = 10.1.1.1.9976}}</ref>) forms.<ref>{{cite book|url=https://www.springer.com/us/book/9780387097657|title=Encyclopedia of Parallel Computing}}</ref>
* Although not a compiler, the [https://boomerang.sourceforge.net/ Boomerang] decompiler uses SSA form in its internal representation. SSA is used to simplify expression propagation, identifying parameters and returns, preservation analysis, and more.
==References==
===General references===
* {{cite book |title=SSA-based compiler design |date=2022 |___location=Cham |isbn=978-3-030-80515-9 |doi=10.1007/978-3-030-80515-9 |s2cid=63274602 |language=en|editor-first1=Fabrice|editor-last1=Rastello|editor-first2=Florent Bouchez|editor-last2=Tichadou|url=https://pfalcon.github.io/ssabook/latest/book-full.pdf}}
* {{cite book |author=Appel, Andrew W.
|title=Modern Compiler Implementation in ML
|publisher=[[Cambridge University Press]] |year=1998}}
* {{cite journal |author=Appel, Andrew W.
|title=SSA is Functional Programming
|journal=ACM SIGPLAN Notices |date=April 1998 |volume=33 |issue=4 |pages=17–20 |doi=10.1145/278283.278285 |s2cid=207227209
|doi-access=free }}
* {{cite journal |author=Pop, Sebastian
|title=The SSA Representation Framework: Semantics, Analyses and GCC Implementation}}

==External links==
*[http://www.dcs.gla.ac.uk/~jsinger/ssa.html The SSA Bibliography]. Extensive catalogue of SSA research papers.
* Zadeck, F. Kenneth. [http://webcast.rice.edu/webcast.php?action=details&event=1346 "The Development of Static Single Assignment Form"], December 2007 talk on the origins of SSA.
[[Category:Compiler optimizations]]