Out-of-order execution
{{Short description|Computing paradigm to improve computational efficiency}}
{{Redirect|OOE||Ooe (disambiguation)}}
{{Use American English|date=January 2025}}
 
In [[computer engineering]], '''out-of-order execution''' (or more formally '''dynamic execution''') is an [[instruction scheduling]] paradigm used in high-performance [[central processing unit]]s to make use of [[instruction cycle]]s that would otherwise be wasted. In this paradigm, a processor executes [[Instruction (computing)|instructions]] in an order governed by the availability of input data and execution units,<ref>{{cite book |author-last=Kukunas |author-first=Jim |date=2015 |title=Power and Performance: Software Analysis and Optimization |url=https://books.google.com/books?id=X-WcBAAAQBAJ&pg=PA37 |publisher=Morgan Kaufman |page=37 |isbn=9780128008140}}</ref> rather than by their original order in a program.<ref>{{cite web |url=http://courses.cs.washington.edu/courses/csep548/06au/lectures/introOOO.pdf |title=Out-of-order execution |date=2006 |quote=don't wait for previous instructions to execute if this instruction does not depend on them |access-date=2014-01-17 |publisher=cs.washington.edu}}</ref><ref name="Regis High School 2011">{{cite web | title=The Centennial Celebration | website=Regis High School | date=2011-03-14 | url=https://www.regis.org/2014/multimedia/tomasulo.cfm | access-date=2022-06-25|quote=The algorithm "allows sequential instructions that would normally be stalled due to certain dependencies to execute non-sequentially" (also known as out of order execution).}}</ref> In doing so, the processor can avoid being idle while waiting for the preceding instruction to complete and can, in the meantime, process the next instructions that are able to run immediately and independently.<ref>{{cite web |url=http://www.pcguide.com/ref/cpu/arch/int/featOOE-c.html |url-status=dead |archive-url=https://web.archive.org/web/20190218182056/http://www.pcguide.com/ref/cpu/arch/int/featOOE-c.html |archive-date=2019-02-18 |quote=This flexibility improves performance since it allows execution with less 'waiting' time. |title=Out-of-order Execution |publisher=The PC Guide |first=Charles M. |last=Kozierok |date=April 17, 2001 |access-date=2014-01-17}}</ref>
 
== History ==
 
=== Early use in supercomputers ===
Arguably the first machine to use out-of-order execution is the [[CDC 6600]] (1964), designed by [[James E. Thornton]], which used a [[Scoreboarding|scoreboard]] to resolve conflicts. The 6600, however, lacked handling of [[Hazard_(computer_architecture)#Write_after_write_(WAW)|WAW conflicts]], choosing instead to stall; this situation was termed a "First Order Conflict" by Thornton.<ref>{{harvnb|Thornton|1970|p=125}}</ref> While it had both [[Hazard_(computer_architecture)#Read_after_write_(RAW)|RAW conflict]] resolution (termed "Second Order Conflict"<ref>{{harvnb|Thornton|1970|p=126}}</ref>) and [[Hazard_(computer_architecture)#Write_after_read_(WAR)|WAR conflict]] resolution (termed "Third Order Conflict"<ref>{{harvnb|Thornton|1970|p=127}}</ref>), which together are sufficient to declare it capable of full out-of-order execution, it did not have precise exception handling. An early and limited form of [[branch prediction]] was possible as long as the branch targeted locations on what was termed the "Instruction Stack", which was limited to a depth of seven words from the [[program counter]].<ref>{{harvnb|Thornton|1970|p=112,123}}</ref>
 
About two years later, the [[IBM System/360 Model 91]] (1966) introduced [[register renaming]] with [[Tomasulo's algorithm]],<ref>{{citation |title=An Efficient Algorithm for Exploiting Multiple Arithmetic Units |journal=[[IBM Journal of Research and Development]] |volume=11 |issue=1 |pages=25–33 |date=1967 |author-first=Robert Marco |author-last=Tomasulo |author-link=Robert Marco Tomasulo |doi=10.1147/rd.111.0025 |url=https://pdfs.semanticscholar.org/8299/94a1340e5ecdb7fb24dad2332ccf8de0bb8b.pdf |archive-url=https://web.archive.org/web/20180612141530/https://pdfs.semanticscholar.org/8299/94a1340e5ecdb7fb24dad2332ccf8de0bb8b.pdf |url-status=dead |archive-date=2018-06-12 |citeseerx=10.1.1.639.7540|s2cid=8445049 }}</ref> which dissolves false dependencies (WAW and WAR), making full out-of-order execution possible. An instruction that writes into a register ''r<sub>n</sub>'' can be executed before an earlier instruction using the register ''r<sub>n</sub>'' by actually writing into an alternative (renamed) register ''alt-r<sub>n</sub>''. The renamed register is turned into the normal register ''r<sub>n</sub>'' only when all the earlier instructions addressing ''r<sub>n</sub>'' have been executed; until then, ''r<sub>n</sub>'' is given to the earlier instructions and ''alt-r<sub>n</sub>'' to the later ones addressing ''r<sub>n</sub>''.
 
In the Model 91 the register renaming is implemented by a [[Operand forwarding|bypass]] termed ''Common Data Bus'' (CDB) and memory source operand buffers, leaving the physical architectural registers unused for many cycles as the oldest state of registers addressed by any unexecuted instruction is found on the CDB. Another advantage the Model 91 has over the 6600 is the ability to execute instructions out-of-order in the same [[execution unit]], not just between the units like the 6600{{Disputed inline|date=July 2025}}. This is accomplished by [[reservation station]]s, from which instructions go to the execution unit when ready, as opposed to the FIFO queue{{Disputed inline|date=July 2025}} of each execution unit of the 6600. The Model 91 is also capable of reordering loads and stores to execute before the preceding loads and stores,<ref name=zs1/> unlike the 6600, which only has a limited ability to move loads past loads, and stores past stores, but not loads past stores and stores past loads.<ref>{{harvtxt|Thornton|1970|p=48-50}}</ref> Only the floating-point registers of the Model 91 are renamed, making it subject to the same WAW and WAR limitations as the CDC 6600 when running fixed-point calculations. The 91 and 6600 both also suffer from [[imprecise exception]]s, which needed to be solved before out-of-order execution could be applied generally and made practical outside supercomputers.
 
=== Precise exceptions ===
To have [[precise exception]]s, the proper in-order state of the program's execution must be available upon an exception. By 1985 various approaches were developed as described by [[James E. Smith (engineer)|James E. Smith]] and Andrew R. Pleszkun.<ref name="smith">{{cite journal |last1=Smith |first1=James E. |last2=Pleszkun |first2=Andrew R. |author1-link=James E. Smith (engineer) |title=Implementation of precise interrupts in pipelined processors |journal=12th ISCA|date=June 1985 |url=https://dl.acm.org/doi/epdf/10.5555/327010.327125}}<br/>(Expanded version published in May 1988 as [https://www.cs.virginia.edu/~evans/greatworks/smith.pdf ''Implementing Precise Interrupts in Pipelined Processors''].)</ref> The [[CDC Cyber 205]] was a precursor, as upon a virtual memory interrupt the entire state of the processor (including the information on the partially executed instructions) is saved into an ''invisible exchange package'', so that it can resume at the same state of execution.<ref>{{cite web |last1=Moudgill |first1=Mayan |last2=Vassiliadis |first2=Stamatis |title=On Precise Interrupts |page=18 |date=January 1996 |citeseerx=10.1.1.33.3304 |url=https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.3304&rep=rep1&type=pdf |archive-url=https://web.archive.org/web/20221013035408/https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.3304&rep=rep1&type=pdf |archive-date=13 October 2022 |format=pdf}}</ref> However to make all exceptions precise, there has to be a way to cancel the effects of instructions. 
The CDC Cyber 990 (1984) implements precise interrupts by using a history buffer, which holds the old (overwritten) values of registers that are restored when an exception necessitates the reverting of instructions.<ref name="smith"/> Through simulation, Smith determined that adding a reorder buffer (or history buffer or equivalent) to the [[Cray-1S]] would reduce the performance of executing the first 14 [[Livermore loops]] (unvectorized) by only 3%.<ref name="smith"/> Important academic research in this subject was led by [[Yale Patt]] with his [[HPSm]] simulator.<ref>{{cite book |url=http://dl.acm.org/citation.cfm?id=17391 |title=HPSm, a high performance restricted data flow architecture having minimal functionality |work=ISCA '86 Proceedings of the 13th annual international symposium on Computer architecture |isbn=978-0-8186-0719-6 |pages=297–306 |date=1986 |access-date=2013-12-06 |author-first1=W. |author-last1=Hwu |author-first2=Yale N. |author-last2=Patt |author-link2=Yale Patt |publisher=[[Association for Computing Machinery|ACM]]}}</ref>
 
In the 1980s many early [[RISC]] microprocessors had out-of-order writeback to the registers, invariably resulting in imprecise exceptions. The [[Motorola 88100]] was one of the few early microprocessors that did not suffer from imprecise exceptions despite out-of-order writes, although it did allow both precise and imprecise floating-point exceptions.<ref>http://www.bitsavers.org/components/motorola/88000/MC88100_RISC_Microprocessor_Users_Manual_2ed_1990.pdf {{Bare URL PDF|date=August 2025}}</ref> Instructions started execution in order, but some (e.g. floating-point) took more cycles to complete execution. However, the single-cycle execution of the most basic instructions greatly reduced the scope of the problem compared to the CDC 6600.
 
=== Decoupling ===
Smith also researched how to make different execution units operate more independently of each other and of the memory, front-end, and branching.<ref>{{cite journal |last1=Smith |first1=James E. |author1-link=James E. Smith (engineer) |title=Decoupled Access/Execute Computer Architectures |journal=ACM Transactions on Computer Systems |date=November 1984 |volume=2 |issue=4 |pages=289–308 |doi=10.1145/357401.357403 |s2cid=13903321 |url=https://course.ece.cmu.edu/~ece447/s15/lib/exe/fetch.php?media=p289-smith.pdf}}</ref> He implemented those ideas in the [[Astronautics Corporation of America|Astronautics]] ZS-1 (1988), featuring a decoupling of the integer/load/store [[Instruction pipelining|pipeline]] from the floating-point pipeline, allowing inter-pipeline reordering. The ZS-1 was also capable of executing loads ahead of preceding stores. In his 1984 paper, he opined that enforcing the precise exceptions only on the integer/memory pipeline should be sufficient for many use cases, as it even permits [[virtual memory]]. Each pipeline had an instruction buffer to decouple it from the instruction decoder, to prevent the stalling of the front end. To further decouple the memory access from execution, each of the two pipelines was associated with two addressable [[FIFO (computing and electronics)|queues]] that effectively performed limited register renaming.<ref name=zs1>{{cite journal |last1=Smith |first1=James E. |author1-link=James E. 
Smith (engineer) |title=Dynamic Instruction Scheduling and the Astronautics ZS-1 |journal=Computer |url=https://course.ece.cmu.edu/~ece740/f13/lib/exe/fetch.php?media=00030730.pdf |pages=21–35 |doi=10.1109/2.30730 |date=July 1989 |volume=22 |issue=7 |s2cid=329170 }}</ref> A similar decoupled architecture had been used a bit earlier in the Culler 7.<ref>{{cite web |last1=Smotherman |first1=Mark |title=Culler-7 |url=https://people.computing.clemson.edu/~mark/culler.html |website=[[Clemson University]]}}</ref> The ZS-1's ISA, like IBM's subsequent POWER, aided the early execution of branches.
 
=== Research comes to fruition ===
 
=== Wide adoption ===
The first [[superscalar]] [[Microprocessor|single-chip processors]] ([[Intel i960CA]] in 1989) used simple scoreboard scheduling like the CDC 6600 had a quarter of a century earlier. In 1992–1996, rapid advancement of these techniques, enabled by [[Moore's law|increasing transistor counts]], saw them proliferate down to [[personal computer]]s. The [[Motorola 88110]] (1992) used a history buffer to revert instructions.<ref>{{cite journal |last1=Ullah |first1=Nasr |last2=Holle |first2=Matt |title=The MC88110 Implementation of Precise Exceptions in a Superscalar Architecture |journal=ACM SIGARCH Computer Architecture News |url=https://dl.acm.org/doi/pdf/10.1145/152479.152482 |publisher=Motorola Inc. |format=pdf |date=March 1993|volume=21 |pages=15–25 |doi=10.1145/152479.152482 |s2cid=7036627 |url-access=subscription }}</ref> Loads could be executed ahead of preceding stores. While stores and branches were waiting to start execution, subsequent instructions of other types could keep flowing through all the pipeline stages, including writeback. 
The 12-entry capacity of the history buffer placed a limit on the reorder distance.<ref>{{cite web |last1=Smotherman |first1=Mark |title=Motorola MC88110 Overview |url=http://www.m88k.com/orig/misc/msmotherman-88110.txt |date=29 April 1994}}</ref><ref>{{cite journal |last1=Diefendorff |first1=Keith |author1-link=Keith Diefendorff |last2=Allen |first2=Michael |title=Organization of the Motorola 88110 superscalar RISC microprocessor |journal=IEEE Micro |date=April 1992 |volume=12 |issue=2 |pages=40–63 |doi=10.1109/40.127582 |s2cid=25668727 |url=http://cjat.ir/images/PDF_English/20143.pdf |archive-url=https://web.archive.org/web/20221021015941/http://cjat.ir/images/PDF_English/20143.pdf |archive-date=2022-10-21 }}</ref><ref>{{cite book |last1=Smotherman |first1=Mark |last2=Chawla |first2=Shuchi |last3=Cox |first3=Stan |last4=Malloy |first4=Brian |title=Proceedings of the 26th Annual International Symposium on Microarchitecture |chapter=Instruction scheduling for the Motorola 88110 |date=December 1993 |pages=257–262 |doi=10.1109/MICRO.1993.282761 |isbn=0-8186-5280-2 |s2cid=52806289 |chapter-url=https://dl.acm.org/doi/epdf/10.5555/255235.255299}}</ref> The [[PowerPC 601]] (1993) was an evolution of the [[RISC Single Chip]], itself a simplification of POWER1. The 601 permitted branch and floating-point instructions to overtake the integer instructions already in the fetched instruction queue, the lowest four entries of which were scanned for dispatchability. In the case of a cache miss, loads and stores could be reordered. Only the link and count registers could be renamed.{{Refn|<ref>{{cite web |title=PowerPC™ 601 RISC Microprocessor Technical Summary |url=https://www.nxp.com/docs/en/data-sheet/MPC601.pdf |access-date=23 October 2022}}</ref><ref>[[Charles R. Moore (computer engineer)|Moore, Charles R.]]; Becker, Michael C. et al. 
{{cite journal |title=The PowerPC 601 microprocessor |journal=IEEE Micro |date=September 1993 |volume=13 |issue=5 |url=https://www.researchgate.net/publication/3214696}}</ref><ref>{{cite web |last1=Diefendorff |first1=Keith |author1-link=Keith Diefendorff |title=PowerPC 601 Microprocessor |url=https://old.hotchips.org/wp-content/uploads/hc_archives/hc05/3_Tue/HC05.S8/HC05.8.2-Diefendorff-Motorola-PowerPC601.pdf |publisher=[[Hot Chips]] |date=August 1993}}</ref><ref>{{cite journal |last1=Smith |first1=James E. |last2=Weiss |first2=Shlomo |author1-link=James E. Smith (engineer) |title=PowerPC 601 and Alpha 21064: A Tale of Two RISCs |journal=IEEE Computer |date=June 1994 |volume=27 |issue=6 |pages=46–58 |doi=10.1109/2.294853 |s2cid=1114841 |url=https://www.eecg.utoronto.ca/~moshovos/ACA05/read/ppc601and21064.pdf}}</ref><ref>{{cite journal |last1=Sima |first1=Dezsö |title=The design space of register renaming techniques |url=https://www.researchgate.net/publication/3215151 |journal=IEEE Micro |date=September–October 2000 |volume=20 |issue=5 |pages=70–83 |doi=10.1109/40.877952 |citeseerx=10.1.1.387.6460 |s2cid=11012472 }}</ref>}} In the fall of 1994 [[NexGen]] and [[AIM alliance|IBM with Motorola]] brought the renaming of general-purpose registers to single-chip CPUs. NexGen's Nx586 was the first [[x86]] processor capable of out-of-order execution and featured a reordering distance of up to 14 [[micro-operation]]s.<ref>{{cite web |last1=Gwennap |first1=Linley |title=NexGen Enters Market with 66-MHz Nx586 |url=https://www.ardent-tool.com/CPU/docs/MPR/080403.pdf |website=[[Microprocessor Report]] |archive-url=https://web.archive.org/web/20211202223054/https://www.ardent-tool.com/CPU/docs/MPR/080403.pdf |archive-date=2 December 2021 |date=28 March 1994}}</ref> The [[PowerPC 603]] renamed both the general-purpose and FP registers. 
Each of the four non-branch execution units can have one instruction waiting in front of it without blocking the instruction flow to the other units. A five-entry [[reorder buffer]] lets no more than four instructions overtake an unexecuted instruction. Due to a store buffer, a load can access cache ahead of a preceding store.<ref>{{cite journal |last1=Burgess |first1=Brad |last2=Ullah |first2=Nasr |last3=Van Overen |first3=Peter |last4=Ogden |first4=Deene |title=The PowerPC 603 microprocessor |journal=Communications of the ACM |date=June 1994 |volume=37 |issue=6 |pages=34–42 |doi=10.1145/175208.175212 |s2cid=34385975 |doi-access=free }}</ref><ref>{{cite web |title=PowerPC™ 603 RISC Microprocessor Technical Summary |url=https://www.nxp.com/docs/en/data-sheet/MPC603.pdf |access-date=27 October 2022}}</ref>
 
[[PowerPC 604]] (1995) was the first single-chip processor with [[execution unit]]-level reordering, as three out of its six units each had a two-entry reservation station permitting the newer entry to execute before the older. The reorder buffer capacity is 16 instructions. A four-entry load queue and a six-entry store queue track the reordering of loads and stores upon cache misses.<ref>{{cite journal |last1=Song |first1=S. Peter |last2=Denman |first2=Marvin |last3=Chang |first3=Joe |title=The PowerPC 604 RISC microprocessor |journal=IEEE Micro |date=October 1994 |volume=14 |issue=5 |page=8 |doi=10.1109/MM.1994.363071 |s2cid=11603864 |url=https://www.complang.tuwien.ac.at/andi/tuonly/SkriptPPC604.pdf}}</ref> [[HAL SPARC64]] (1995) exceeded the reordering capacity of the [[ES/9000]] model 900 by having three 8-entry reservation stations for integer, floating-point, and [[address generation unit]], and a 12-entry reservation station for load/store, which permits greater reordering of cache/memory access than preceding processors. Up to 64 instructions can be in a reordered state at a time.<ref>{{cite web |title=SPARC64+: HAL's Second Generation 64-bit SPARC Processor |url=https://old.hotchips.org/wp-content/uploads/hc_archives/hc07/2_Mon/HC7.S3/HC7.3.2.pdf |website=[[Hot Chips]]}}</ref><ref>{{cite web |url=https://www.irisa.fr/caps/projects/TechnologicalSurvey/micro/PI-957-html/section2_8_7.html |website=[[Research Institute of Computer Science and Random Systems]] |title=Le Sparc64 |language=French}}</ref> [[Pentium Pro]] (1995) introduced a ''[[reservation station|unified reservation station]]'', which at the 20 micro-OP capacity permitted very flexible reordering, backed by a 40-entry reorder buffer. 
Loads can be reordered ahead of both loads and stores.<ref>{{cite web |last1=Gwennap |first1=Linley |title=Intel's P6 Uses Decoupled Superscalar Design |url=http://www.cs.cmu.edu/afs/cs/academic/class/15213-f01/docs/mpr-p6.pdf |website=[[Microprocessor Report]] |date=16 February 1995}}</ref>
The practically attainable [[instructions per cycle|per-cycle rate of execution]] rose further as full out-of-order execution was further adopted by [[Silicon Graphics|SGI]]/[[MIPS Technologies|MIPS]] ([[R10000]]) and [[Hewlett-Packard|HP]] [[PA-RISC]] ([[PA-8000]]) in 1996. The same year [[Cyrix 6x86]] and [[AMD K5]] brought advanced reordering techniques into mainstream personal computers. Since [[DEC Alpha]] gained out-of-order execution in 1998 ([[Alpha 21264]]), the top-performing out-of-order processor cores have been unmatched by in-order cores other than [[Hewlett-Packard|HP]]/[[Intel]] [[Itanium 2]] and [[IBM POWER6]], though the latter had an out-of-order [[floating-point unit]].<ref>Le, Hung Q. et al. {{cite journal |title=IBM POWER6 microarchitecture |journal=IBM Journal of Research and Development |date=November 2007 |volume=51 |issue=6 |url=https://course.ece.cmu.edu/~ece742/f12/lib/exe/fetch.php?media=le_power6.pdf}}</ref> The other high-end in-order processors fell far behind, namely [[Sun Microsystems|Sun]]'s [[UltraSPARC III]]/[[UltraSPARC IV|IV]], and IBM's [[mainframe]]s which had lost the out-of-order execution capability for the second time, remaining in-order into the [[IBM z10|z10]] generation. Later big in-order processors were focused on multithreaded performance, but eventually the [[SPARC T series]] and [[Xeon Phi]] changed to out-of-order execution in 2011 and 2016 respectively.{{cn|reason=No mention of out-of-order execution in either linked article.|date=July 2024}}
 
Almost all processors for phones and other lower-end applications remained in-order until {{circa|2010}}. First, [[Qualcomm]]'s [[Scorpion (processor)|Scorpion]] (reordering distance of 32) shipped in [[Qualcomm Snapdragon|Snapdragon]],<ref>{{cite web |last1=Mallia |first1=Lou |title=Qualcomm High Performance Processor Core and Platform for Mobile Applications |url=http://rtcgroup.com/arm/2007/presentations/253%20-%20ARM_DevCon_2007_Snapdragon_FINAL_20071004.pdf |archive-url=https://web.archive.org/web/20131029193001/http://rtcgroup.com/arm/2007/presentations/253%20-%20ARM_DevCon_2007_Snapdragon_FINAL_20071004.pdf |archive-date=29 October 2013}}</ref> and a bit later [[Arm (company)|Arm]]'s [[ARM Cortex-A9|A9]] succeeded [[ARM Cortex-A8|A8]]. For low-end [[x86]] [[personal computer]]s, the in-order [[Bonnell microarchitecture]] in early [[Intel Atom]] processors was first challenged by [[AMD]]'s [[Bobcat microarchitecture]], and in 2013 it was succeeded by the out-of-order [[Silvermont microarchitecture]].<ref>{{cite web |url=http://www.anandtech.com/show/6936/intels-silvermont-architecture-revealed-getting-serious-about-mobile/2 |archive-url=https://archive.today/20161222023104/http://www.anandtech.com/show/6936/intels-silvermont-architecture-revealed-getting-serious-about-mobile/2 |url-status=dead |archive-date=December 22, 2016 |website=AnandTech |title=Intel's Silvermont Architecture Revealed: Getting Serious About Mobile |author=Anand Lal Shimpi |date=2013-05-06}}</ref> Because the complexity of out-of-order execution precludes achieving the lowest power consumption, cost, and size, in-order execution is still prevalent in [[microcontroller]]s and [[embedded system]]s, as well as in phone-class cores such as Arm's [[ARM Cortex-A55|A55]] and [[ARM Cortex-A510|A510]] in [[big.LITTLE]] configurations.
 
== Basic concept ==
 
=== Out-of-order processors ===
This new paradigm breaks up the processing of instructions into these steps:<ref>{{Cite journal |last1=González |first1=Antonio |last2=Latorre |first2=Fernando |last3=Magklis |first3=Grigorios |date=2011 |title=Processor Microarchitecture |url=https://link.springer.com/book/10.1007/978-3-031-01729-2 |journal=Synthesis Lectures on Computer Architecture |language=en |doi=10.1007/978-3-031-01729-2 |isbn=978-3-031-00601-2 |issn=1935-3235|url-access=subscription }}</ref>
# Instruction fetch.
# Instruction decoding.
The key concept of out-of-order processing is to allow the processor to avoid a class of stalls that occur when the data needed to perform an operation are unavailable. In the outline above, the processor avoids the stall that occurs in step 2 of the in-order processor when the instruction is not completely ready to be processed due to missing data.
 
Out-of-order processors fill these ''slots'' in time with other instructions that ''are'' ready, then reorder the results at the end so that it appears the instructions were processed in the original program order.
The order in which the instructions appear in the original computer code is known as ''program order''; in the processor they are handled in ''data order'', the order in which the data become available in the processor's registers. Fairly complex circuitry is needed to convert from one ordering to the other and maintain a logical ordering of the output.
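The distinction can be illustrated with a toy simulation (a hypothetical sketch, not a model of any real processor): each instruction may begin once its source operands are available, so completion follows ''data order'', while retirement restores the original ''program order''.

```python
# Toy illustration (not a real microarchitecture): instructions execute in
# "data order" -- as soon as their source operands are available -- while
# results are reported back in the original "program order".

def run(program):
    """program: list of (dest, sources, latency) tuples in program order."""
    ready_at = {}           # register -> cycle its value becomes available
    completion = []         # (finish_cycle, program_index)
    for idx, (dest, sources, latency) in enumerate(program):
        start = max([ready_at.get(s, 0) for s in sources], default=0)
        finish = start + latency
        ready_at[dest] = finish
        completion.append((finish, idx))
    data_order = [idx for _, idx in sorted(completion)]
    program_order = list(range(len(program)))  # retirement restores this
    return data_order, program_order

# i0: slow load into r1; i1 depends on r1; i2 is independent and finishes first.
prog = [("r1", [], 10),      # i0: load r1      (10 cycles)
        ("r2", ["r1"], 1),   # i1: r2 = f(r1)   (must wait for i0)
        ("r3", [], 1)]       # i2: r3 = const   (independent, 1 cycle)
data_order, program_order = run(prog)
print(data_order)     # prints [2, 0, 1] -- i2 completes before i0 and i1
print(program_order)  # prints [0, 1, 2]
```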
 
The benefit of out-of-order processing grows as the [[instruction pipeline]] deepens and the speed difference between [[main memory]] (or [[cache memory]]) and the processor widens. On modern machines, the processor runs many times faster than the memory, so during the time an in-order processor spends waiting for data to arrive, it could have theoretically processed a large number of instructions.<!--[[User:Kvng/RTH]]-->
 
== Dispatch and issue decoupling allows out-of-order issue ==
One of the differences created by the new paradigm is the creation of queues that allow the dispatch step to be decoupled from the issue step and the graduation stage to be decoupled from the execute stage. An early name for the paradigm was ''decoupled architecture''. In the earlier ''in-order'' processors, these stages operated in a fairly [[Lockstep (computing)|lock-step]], pipelined fashion.
 
The [[Instruction cycle|fetch and decode stages]] are separated from the execute stage in a [[Pipeline (computing)|pipelined]] processor by using a [[Data buffer|buffer]]. The buffer's purpose is to partition the [[Memory access pattern|memory access]] and execute functions in a computer program and achieve high performance by exploiting the fine-grain [[parallel computing|parallelism]] between the two.<ref>{{cite journal |author-last=Smith |author-first1=J. E. |title=Decoupled access/execute computer architectures |journal= ACM Transactions on Computer Systems|date=1984 |volume=2 |issue=4 |pages=289–308 |citeseerx=10.1.1.127.4475 |doi=10.1145/357401.357403|s2cid=13903321 }}</ref> In doing so, it effectively hides all [[memory latency]] from the processor's perspective.
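A rough sketch of why such a buffer hides memory latency (invented numbers and function names, not any specific machine): an access side that issues one load per cycle into the buffer keeps the execute side supplied, so only the first load's latency is visible, whereas a coupled design pays the latency on every load.

```python
# Toy model of a decoupled access/execute pipeline pair. The access side
# issues one load per cycle into a FIFO; the execute side consumes one
# datum per cycle. MEM_LATENCY is an assumed (invented) parameter.

MEM_LATENCY = 8  # cycles per load (assumption for illustration)

def coupled(n_loads):
    """Total cycles when execute requests each load and stalls for it."""
    t = 0
    for _ in range(n_loads):
        t += MEM_LATENCY   # stall waiting for the load
        t += 1             # one cycle to execute on the datum
    return t

def decoupled(n_loads):
    """Total cycles when the access side runs ahead through the buffer."""
    arrivals = [issue + MEM_LATENCY for issue in range(n_loads)]
    t = 0
    for a in arrivals:
        t = max(t, a) + 1  # execute as soon as the datum is buffered
    return t

print(coupled(100))    # prints 900 -- latency paid on every load
print(decoupled(100))  # prints 108 -- latency paid effectively once
```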
 
A larger buffer can, in theory, increase throughput. However, if the processor has a [[branch misprediction]] then the entire buffer may need to be flushed, wasting a lot of [[clock cycle]]s and reducing the effectiveness. Furthermore, larger buffers create more heat and use more [[Die (integrated circuit)|die]] space. For this reason processor designers today favor a [[multi-threaded]] design approach.
 
Decoupled architectures are generally thought of as not useful for general-purpose computing as they do not handle control-intensive code well.<ref>{{cite journal |author-last1=Kurian |author-first1=L. |author-last2=Hulina |author-first2=P. T. |author-last3=Coraor |author-first3=L. D. |title=Memory latency effects in decoupled architectures |journal=[[IEEE Transactions on Computers]] |volume=43 |issue=10 |date=1994 |pages=1129–1139 |doi=10.1109/12.324539 |s2cid=6913858 |url=https://pdfs.semanticscholar.org/6aa3/18cce633e3c2d86d970d6d50104d818d9407.pdf |archive-url=https://web.archive.org/web/20180612141055/https://pdfs.semanticscholar.org/6aa3/18cce633e3c2d86d970d6d50104d818d9407.pdf |url-status=dead |archive-date=2018-06-12 }}</ref> Control-intensive code includes such things as nested branches, which occur frequently in [[operating system]] [[kernel (operating system)|kernels]]. Decoupled architectures play an important role in scheduling in [[very long instruction word]] (VLIW) architectures.<ref>{{cite journal |author-first1=M. N. |author-last1=Dorojevets |author-first2=V. |author-last2=Oklobdzija |title=Multithreaded decoupled architecture |journal=International Journal of High Speed Computing |volume=7 |issue=3 |pages=465–480 |date=1995 |doi=10.1142/S0129053395000257 |url=https://www.researchgate.net/publication/220171480}}</ref>
 
 
To avoid false operand dependencies, which would reduce how often instructions could be issued out of order, a technique called [[register renaming]] is used. In this scheme, there are more physical registers than are defined by the architecture. The physical registers are tagged so that multiple versions of the same architectural register can exist at the same time.
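A minimal sketch of the idea (a hypothetical Python model; a real renamer also maintains a free list and reclaims physical registers at retirement): every write allocates a fresh physical register, so WAW and WAR hazards between instructions disappear.

```python
# Toy register renamer: each write to an architectural register allocates a
# fresh physical register, eliminating false (WAR/WAW) dependencies.
class Renamer:
    def __init__(self):
        self.map = {}        # architectural reg -> current physical reg
        self.next_phys = 0

    def read(self, arch):
        # Reads use whichever physical register currently holds the value.
        if arch not in self.map:
            self.map[arch] = self._alloc()
        return self.map[arch]

    def write(self, arch):
        # Writes always get a new physical register (no WAW/WAR hazard).
        self.map[arch] = self._alloc()
        return self.map[arch]

    def _alloc(self):
        p = f"p{self.next_phys}"
        self.next_phys += 1
        return p

r = Renamer()
# r1 = r2 + r3 ; r2 = r1 * 2 ; r1 = r4 - r5   (WAW on r1, WAR on r2)
i0 = (r.read("r2"), r.read("r3"), r.write("r1"))
i1 = (r.read("r1"), r.write("r2"))
i2 = (r.read("r4"), r.read("r5"), r.write("r1"))
print(i0, i1, i2)  # the two writes to r1 land in distinct physical registers
```

Because `i2`'s write to `r1` goes to a different physical register than `i0`'s, `i2` may execute before `i1` has read the old `r1` value.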
 
== Execute and writeback decoupling allows program restart ==
The queue for results is necessary to resolve issues such as branch mispredictions and exceptions/traps. The results queue allows programs to be restarted after an exception, which requires the instructions to be completed in program order. The queue allows results to be discarded due to mispredictions on older branch instructions and exceptions taken on older instructions. The ability to issue instructions past branches that have yet to be resolved is known as [[speculative execution]].
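A minimal sketch of this behavior, in Python as a toy model rather than real hardware: results may arrive in any order, but the queue retires them only in program order, and everything not yet retired can be discarded on a misprediction.

```python
from collections import deque

# Toy reorder buffer: entries are appended in program order; results may
# complete out of order, but retirement (commit) happens strictly in order.
class ReorderBuffer:
    def __init__(self):
        self.entries = deque()   # each entry: {"op": ..., "done": bool}

    def issue(self, op):
        entry = {"op": op, "done": False}
        self.entries.append(entry)
        return entry

    def complete(self, entry):
        entry["done"] = True     # may happen in any order

    def retire(self):
        # Commit in program order; stop at the first unfinished entry.
        retired = []
        while self.entries and self.entries[0]["done"]:
            retired.append(self.entries.popleft()["op"])
        return retired

    def flush(self):
        # On a branch misprediction, discard all un-retired results.
        discarded = len(self.entries)
        self.entries.clear()
        return discarded

rob = ReorderBuffer()
a = rob.issue("add")
b = rob.issue("load")
c = rob.issue("mul")
rob.complete(c)                  # "mul" finishes first, out of order...
assert rob.retire() == []        # ...but cannot retire past the pending "add"
rob.complete(a)
assert rob.retire() == ["add"]   # in-order commit resumes up to "load"
```

Because nothing younger than an unfinished instruction ever commits, the architectural state at any retired point matches what strictly sequential execution would have produced, which is what makes precise restart after an exception possible.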
 
== Micro-architectural choices ==
* Are the instructions dispatched to a centralized queue or to multiple distributed queues?
:[[IBM]] [[PowerPC]] processors use queues that are distributed among the different functional units while other out-of-order processors use a centralized queue. IBM uses the term ''reservation stations'' for their distributed queues.
 
* Is there an actual results queue or are the results written directly into a register file? For the latter, the queueing function is handled by register maps that hold the register renaming information for each instruction in flight.
:Early Intel out-of-order processors use a results queue called a [[reorder buffer]],{{efn|Intel [[P6 (microarchitecture)|P6]] family microprocessors have both a reorder buffer (ROB) and a [[register renaming|register alias table]] (RAT). The ROB was motivated mainly by branch misprediction recovery. The Intel [[P6 (microarchitecture)|P6]] family is among the earliest out-of-order microprocessors but was supplanted by the [[NetBurst]] architecture. Years later, NetBurst proved to be a dead end due to its long pipeline that assumed the possibility of much higher operating frequencies. Materials were not able to match the design's ambitious clock targets due to thermal issues, and later designs based on NetBurst, namely Tejas and Jayhawk, were cancelled. Intel reverted to the P6 design as the basis of the [[Intel Core (microarchitecture)|Core]] and [[Nehalem (microarchitecture)|Nehalem]] microarchitectures.}} while most later out-of-order processors use register maps.{{efn|The succeeding [[Sandy Bridge]], [[Ivy Bridge (microarchitecture)|Ivy Bridge]], and [[Haswell (microarchitecture)|Haswell]] microarchitectures are a departure from the reordering techniques used in P6 and employ reordering techniques from the [[Alpha 21264|EV6]] and the [[Pentium 4|P4]] but with a somewhat shorter pipeline.<ref>{{cite web |author-last=Kanter |author-first=David |date=2010-09-25 |title=Intel's Sandy Bridge Microarchitecture |url=http://www.realworldtech.com/sandy-bridge/10/}}</ref><ref name="urlThe Haswell Front End - Intels Haswell Architecture Analyzed: Building a New PC and a New Intel">{{cite web |url=https://www.anandtech.com/show/6355/intels-haswell-architecture/6 |archive-url=https://web.archive.org/web/20121007163104/http://www.anandtech.com/show/6355/intels-haswell-architecture/6 |url-status=dead |archive-date=October 7, 2012 |title=The Haswell Front End - Intel's Haswell Architecture Analyzed: Building a New PC and a New Intel }}</ref>}}
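The first choice above, centralized versus distributed queues, can be contrasted with a toy Python sketch. The unit names and the instruction-to-unit mapping here are made up for illustration and do not correspond to any real processor.

```python
# Toy dispatch: one shared queue vs. per-functional-unit "reservation
# stations". Unit names and the op -> unit mapping are illustrative only.
OP_TO_UNIT = {"add": "alu", "mul": "alu", "load": "lsu", "fadd": "fpu"}

def dispatch_centralized(ops):
    # One queue: every functional unit picks ready work from the same pool.
    return list(ops)

def dispatch_distributed(ops):
    # A small queue in front of each unit, as in the reservation-station
    # approach: each instruction is routed at dispatch time.
    stations = {"alu": [], "lsu": [], "fpu": []}
    for op in ops:
        stations[OP_TO_UNIT[op]].append(op)
    return stations

program = ["add", "load", "mul", "fadd"]
assert dispatch_centralized(program) == ["add", "load", "mul", "fadd"]
assert dispatch_distributed(program) == {
    "alu": ["add", "mul"], "lsu": ["load"], "fpu": ["fadd"]}
```

The trade-off the sketch hints at: a centralized queue can issue any ready instruction to any free unit but needs wide selection logic, while distributed stations keep each selection small at the cost of committing an instruction to one unit early.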
 
== See also ==
{{Wikibooks | Microprocessor Design | Out Of Order Execution }}
* [[Memory barrier]]
* [[Replay system]]
* [[Shelving buffer]]
 
== Notes ==
{{Notelist}}
 
== References ==