|first1=Richard A. |last1=Kelsey
|title=Papers from the 1995 ACM SIGPLAN workshop on Intermediate representations |chapter=A correspondence between continuation passing style and static single assignment form |year=1995 |pages=13–22
|isbn=0897917545 |doi=10.1145/202529.202532 |s2cid=6207179 |chapter-url=https://bernsteinbear.com/assets/img/kelsey-ssa-cps.pdf}}</ref>
 
== History ==
 
SSA was developed in the 1980s by several researchers at [[International Business Machines|IBM]]. Kenneth Zadeck, a key member of the team, moved to Brown University as development continued.{{sfn|Rastello|Tichadou|2022|loc=sec. 1.4}}<ref name=Zadeck>{{cite conference|title=The Development of Static Single Assignment Form|url=https://compilers.cs.uni-saarland.de/ssasem/talks/Kenneth.Zadeck.pdf|conference=Static Single-Assignment Form Seminar|first=Kenneth|last=Zadeck|conference-url=https://compilers.cs.uni-saarland.de/ssasem/|___location=Autrans, France|date=April 2009}}</ref> A 1986 paper introduced birthpoints, identity assignments, and variable renaming such that variables had a single static assignment.<ref>{{cite book |last1=Cytron |first1=Ron |last2=Lowry |first2=Andy |last3=Zadeck |first3=F. Kenneth |title=Proceedings of the 13th ACM SIGACT-SIGPLAN symposium on Principles of programming languages - POPL '86 |chapter=Code motion of control structures in high-level languages |date=1986 |pages=70–85 |doi=10.1145/512644.512651 |s2cid=9099471}}</ref> A subsequent 1987 paper by [[Jeanne Ferrante]] and Ronald Cytron<ref>{{cite conference |last1=Cytron |first1=Ronald Kaplan |first2=Jeanne |last2=Ferrante |title=What's in a name? Or, the value of renaming for parallelism detection and storage allocation |conference=International Conference on Parallel Processing, ICPP'87 |year=1987 |pages=19–27}}</ref> proved that the renaming done in the previous paper removes all false dependencies for scalars.<ref name=Zadeck/> In 1988, Barry Rosen, [[Mark N. Wegman]], and Kenneth Zadeck replaced the identity assignments with Φ-functions, introduced the name "static single-assignment form", and demonstrated a now-common SSA optimization.<ref name="original">{{cite book |author1=Barry Rosen |author2=Mark N. Wegman |author3=F. Kenneth Zadeck |title=Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88 |chapter=Global value numbers and redundant computations |chapter-url=https://www.cs.wustl.edu/~cytron/cs531/Resources/Papers/valnum.pdf |year=1988 |pages=12–27 |doi=10.1145/73560.73562 |isbn=0-89791-252-7}}</ref> The name Φ-function was chosen by Rosen to be a more publishable version of "phony function".<ref name=Zadeck/> Alpern, Wegman, and Zadeck presented another optimization, but using the name "single static assignment".<ref>{{cite book |last1=Alpern |first1=B. |last2=Wegman |first2=M. N. |last3=Zadeck |first3=F. K. |title=Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88 |chapter=Detecting equality of variables in programs |date=1988 |pages=1–11 |doi=10.1145/73560.73561 |isbn=0897912527 |s2cid=18384941}}</ref> Finally, in 1989, Rosen, Wegman, Zadeck, Cytron, and Ferrante found an efficient means of converting programs to SSA form.<ref name="Cytron_1991">{{cite journal
|title=Efficiently computing static single assignment form and the control dependence graph
|author1=Cytron, Ron |author2=Ferrante, Jeanne |author3=Rosen, Barry K. |author4=Wegman, Mark N. |author5=Zadeck, F. Kenneth |name-list-style=amp |journal=ACM Transactions on Programming Languages and Systems |volume=13 |year=1991 |pages=451&ndash;490 |url=http://www.cs.utexas.edu/~pingali/CS380C/2010/papers/ssaCytron.pdf |issue=4
 
[[Compiler optimization]] algorithms that are either enabled or strongly enhanced by the use of SSA include:
* [[Constant folding]] – conversion of computations from runtime to compile time, e.g. treat the instruction <code>a=3*4+5;</code> as if it were <code>a=17;</code> (a short sketch of how this works on SSA form follows the list)
* [[Value range propagation]]<ref>[http://llvm.org/devmtg/2007-05/05-Lewycky-Predsimplify.pdf value range propagation]</ref> – precompute the potential range of values a calculation could produce, allowing branch conditions to be evaluated or simplified in advance
* [[Sparse conditional constant propagation]] – propagate constants and simultaneously discover conditional branches that can never be taken, allowing unreachable code to be removed
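
Constant folding illustrates why SSA helps: each name has exactly one definition, so a name whose single defining expression evaluates to a constant can be replaced everywhere it is used, without any analysis of control flow. The following Python sketch is illustrative only; the <code>defs</code> table, the <code>fold</code> helper, and the two supported operators are invented for this example and are not taken from any cited compiler.

<syntaxhighlight lang="python">
# Toy SSA program: each name maps to its single definition, which is either
# a literal constant or (operator, operand, operand).
defs = {
    "x1": 3,
    "y1": ("mul", "x1", 4),   # y1 := x1 * 4
    "z1": ("add", "y1", 5),   # z1 := y1 + 5
}

def fold(name, env):
    """Return the constant value of an SSA name if its definition folds, else None."""
    if name in env:
        return env[name]
    d = defs[name]
    if isinstance(d, int):                      # literal definition
        env[name] = d
        return d
    op, a, b = d
    av = fold(a, env) if isinstance(a, str) else a
    bv = fold(b, env) if isinstance(b, str) else b
    if av is None or bv is None:                # some operand did not fold
        return None
    env[name] = av * bv if op == "mul" else av + bv
    return env[name]

print(fold("z1", {}))   # 17, so the definition of z1 can be rewritten to "z1 := 17"
</syntaxhighlight>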
</pre>
 
Node 1 strictly dominates nodes 2, 3, and 4, and the immediate predecessors of node 4 are nodes 2 and 3.
 
Dominance frontiers define the points at which Φ functions are needed. In the above example, when control is passed to node 4, the definition of <code>result</code> used depends on whether control was passed from node 2 or 3. Φ functions are not needed for variables defined in a dominator, as there is only one possible definition that can apply.
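
The placement rule can be made concrete with a short sketch. The following Python code is illustrative only: it hard-codes the predecessor and immediate-dominator relations of the four-node example above, computes dominance frontiers by walking each join node's predecessors up the dominator tree, and then iterates the frontiers to find where Φ functions for <code>result</code> are required (the helper name <code>phi_blocks</code> is invented for this example).

<syntaxhighlight lang="python">
preds = {1: [], 2: [1], 3: [1], 4: [2, 3]}   # CFG predecessors of each node
idom  = {2: 1, 3: 1, 4: 1}                   # immediate dominators (node 1 is the entry)

# Dominance frontier: for each join node, walk each predecessor's dominator
# chain up to (but not including) the join node's immediate dominator.
df = {n: set() for n in preds}
for n, ps in preds.items():
    if len(ps) >= 2:
        for p in ps:
            runner = p
            while runner != idom[n]:
                df[runner].add(n)
                runner = idom[runner]

# Φ placement: iterate the dominance frontiers of every block that defines
# the variable until no new blocks are added.
def phi_blocks(defining_blocks):
    work, placed = list(defining_blocks), set()
    while work:
        b = work.pop()
        for d in df[b]:
            if d not in placed:
                placed.add(d)
                work.append(d)   # a Φ function is itself a new definition
    return placed

print(df)                   # {1: set(), 2: {4}, 3: {4}, 4: set()}
print(phi_blocks({2, 3}))   # {4}: result needs a Φ function at node 4
</syntaxhighlight>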
===Block arguments===
 
Block arguments are an alternative to Φ functions that is representationally identical but in practice can be more convenient during optimization. Blocks are named and take a list of block arguments, notated as function parameters. When calling a block, the block arguments are bound to specified values. [[MLton]], [[Swift (programming language)|Swift]] SIL, and LLVM [[MLIR (software)|MLIR]] use block arguments.<ref>{{cite web |title=Block Arguments vs PHI nodes - MLIR Rationale |url=https://mlir.llvm.org/docs/Rationale/Rationale/#block-arguments-vs-phi-nodes |website=mlir.llvm.org |access-date=4 March 2022}}</ref>
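
As an illustration (not drawn from MLton, SIL, or MLIR), the join at node 4 of the earlier example can be written in block-argument style by treating each block as a function whose parameter stands in for the Φ function; the block bodies in this Python sketch are made up for the example.

<syntaxhighlight lang="python">
def block2(x):
    result2 = x * 2
    return block4(result2)       # branch to block4, passing the value as a block argument

def block3(x):
    result3 = x - 3
    return block4(result3)

def block4(result):              # the parameter plays the role of result := Φ(result2, result3)
    return result

def block1(x):                   # entry block: branch to block2 or block3
    return block2(x) if x < 5 else block3(x)

print(block1(3))    # 6, via block2
print(block1(10))   # 7, via block3
</syntaxhighlight>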
 
==Converting out of SSA form==
* [[Mono (software)|Mono]] uses SSA in its JIT compiler called Mini
* [[WebKit]] uses SSA in its JIT compilers.<ref>{{Cite web|url=https://webkit.org/blog/3362/introducing-the-webkit-ftl-jit/|title=Introducing the WebKit FTL JIT|date=13 May 2014}}</ref><ref>{{Cite web|url=https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/|title=Introducing the B3 JIT Compiler|date=15 February 2016}}</ref>
* [[Swift (programming language)|Swift]] defines its own SSA form above LLVM IR, called SIL (Swift Intermediate Language).<ref>{{Cite web|url=https://github.com/swiftlang/swift/blob/main/docs/SIL/SIL.md|title=Swift Intermediate Language (GitHub)|website=[[GitHub]]|date=30 October 2021}}</ref><ref>{{Cite web|url=https://www.youtube.com/watch?v=Ntj8ab-5cvE |archive-url=https://ghostarchive.org/varchive/youtube/20211221/Ntj8ab-5cvE |archive-date=2021-12-21 |url-status=live|title=Swift's High-Level IR: A Case Study of Complementing LLVM IR with Language-Specific Optimization, LLVM Developers Meetup 10/2015|website=[[YouTube]]|date=9 November 2015 }}{{cbignore}}</ref>
* The [[Erlang (programming language)|Erlang]] compiler was rewritten in OTP 22.0 to "internally use an intermediate representation based on Static Single Assignment (SSA)", with plans for further optimizations built on top of SSA in future releases.<ref>{{Cite web|url=http://www.erlang.org/news/132|title=OTP 22.0 Release Notes}}</ref>
* The [[LLVM]] Compiler Infrastructure uses SSA form for all scalar register values (everything except memory) in its primary code representation. SSA form is only eliminated once register allocation occurs, late in the compile process (often at link time).
* The [[GNU Compiler Collection]] (GCC) makes extensive use of SSA since version 4 (released in April 2005). The [[front and back ends|frontends]] generate "[[GIMPLE#GENERIC and GIMPLE|GENERIC]]" code that is then converted into "[[GIMPLE#GENERIC and GIMPLE|GIMPLE]]" code by the "gimplifier". High-level optimizations are then applied on the SSA form of "GIMPLE". The resulting optimized intermediate code is then translated into [[Register Transfer Language|RTL]], on which low-level optimizations are applied. The architecture-specific [[Front and back ends|backend]]s finally turn RTL into [[assembly language]].
* [[Go (programming language)|Go]] (1.7: for x86-64 architecture only; 1.8: for all supported architectures).<ref>{{Cite web|url=https://golang.org/doc/go1.7#compiler|title=Go 1.7 Release Notes - The Go Programming Language|website=golang.org|access-date=2016-08-17}}</ref><ref>{{Cite web|url=https://golang.org/doc/go1.8#compiler|title=Go 1.8 Release Notes - The Go Programming Language|website=golang.org|access-date=2017-02-17}}</ref>
* [[IBM]]'s open source adaptive [[Java virtual machine]], [[Jikes RVM]], uses extended Array SSA, an extension of SSA that allows analysis of scalars, arrays, and object fields in a unified framework. Extended Array SSA analysis is only enabled at the maximum optimization level, which is applied to the most frequently executed portions of code.
* The [[Chromium (web browser)|Chromium]] [[V8 JavaScript engine]] implements SSA in its Crankshaft compiler infrastructure as [https://blog.chromium.org/2010/12/new-crankshaft-for-v8.html announced in December 2010]
* [[PyPy]] uses a linear SSA representation for traces in its JIT compiler.
* The [[Android Runtime]]<ref>{{cite video |title=The Evolution of ART - Google I/O 2016 |time=3m47s |url=https://www.youtube.com/watch?v=fwMM6g7wpQ8 |date=25 May 2016 |work=Google}}</ref> and the [[Dalvik (software)|Dalvik Virtual Machine]] use SSA.<ref>{{cite web |title=JIT through the ages |url=http://www.cs.columbia.edu/~aho/cs6998/reports/12-12-11_Ramanan_JIT.pdf |last=Ramanan |first=Neeraja | date=12 Dec 2011 }}</ref>
* The Standard ML compiler [[MLton]] uses SSA in one of its intermediate languages.
* [[LuaJIT]] makes heavy use of SSA-based optimizations.<ref>{{cite web|url=http://wiki.luajit.org/Optimizations|title=Bytecode Optimizations|publisher=the LuaJIT project}}</ref>
* The [[PHP]] and [[Hack (programming language)|Hack]] compiler [[HHVM]] uses SSA in its IR.<ref>{{cite web|url=https://github.com/facebook/hhvm/blob/master/hphp/doc/ir.specification|title=HipHop Intermediate Representation (HHIR)|website=[[GitHub]]|date=30 October 2021}}</ref>
* The [[OCaml]] compiler uses SSA in its CMM IR (which stands for C--).<ref>{{Cite web |last1=Chambart |first1=Pierre |last2=Laviron |first2=Vincent |last3=Pinto |first3=Dario |date=2024-03-18 |title=Behind the Scenes of the OCaml Optimising Compiler |url=https://ocamlpro.com/blog/2024_03_18_the_flambda2_snippets_0/ |website=OCaml Pro}}</ref>
* libFirm, a library for use as the [[Compiler#Three-stage compiler structure|middle and back ends of a compiler]], uses SSA form for all scalar register values until code generation by use of an SSA-aware register allocator.<ref>{{cite web|url=http://pp.ipd.kit.edu/firm/|title=Firm - Optimization and Machine Code Generation}}</ref>
* Various [[Mesa (computer graphics)|Mesa]] drivers via NIR, an SSA representation for shading languages.<ref>{{Cite web|url=https://lists.freedesktop.org/archives/mesa-dev/2014-December/072761.html|title=Reintroducing NIR, a new IR for mesa|last=Ekstrand|first=Jason|date=16 December 2014 }}</ref>
* The COINS compiler uses SSA form optimizations as explained [https://web.archive.org/web/20040531024854/http://www.is.titech.ac.jp/~sassa/coins-www-ssa/english/ here].
* Reservoir Labs' R-Stream compiler supports non-SSA (quad list), SSA and SSI (Static Single Information<ref>{{cite tech report |url=https://cscott.net/Publications/ssi.pdf |title=Static Single Information Form |last1=Ananian |first1=C. Scott |last2=Rinard |first2=Martin |year=1999|citeseerx = 10.1.1.1.9976}}</ref>) forms.<ref>{{cite book|url=https://www.springer.com/us/book/9780387097657|title=Encyclopedia of Parallel Computing}}</ref>
* Although not a compiler, the [https://boomerang.sourceforge.net/ Boomerang] decompiler uses SSA form in its internal representation. SSA is used to simplify expression propagation, identification of parameters and returns, preservation analysis, and more.
* [[DotGNU]] Portable.NET used SSA in its JIT compiler.