{{Short description|Instruction pipeline}}
{{Use American English|date = March 2019}}
In the [[history of computing hardware|history of computer hardware]], some early [[reduced instruction set computer]] [[central processing unit]]s (RISC CPUs) used a very similar architectural solution, now called a '''classic RISC pipeline'''. Those CPUs were: [[MIPS architecture|MIPS]], [[SPARC]], Motorola [[Motorola 88000|88000]], and later the notional CPU [[DLX]] invented for education.
Each of these classic scalar RISC designs fetches and tries to execute one [[Instructions per cycle|instruction per cycle]]. The main common concept of each design is a five-stage execution instruction pipeline; during operation, each pipeline stage works on one instruction at a time.
==The classic five stage RISC pipeline==
===Instruction fetch===
The instructions reside in memory that takes one cycle to read. This memory can be dedicated SRAM, or an Instruction [[Cache (computing)|Cache]]. The term "latency" is used often in computer science and means the time from when an operation starts until it completes. Thus, instruction fetch has a latency of one clock cycle (if using single-cycle SRAM or if the instruction was in the cache).
The [[program counter]], or PC, is a register that holds the address presented to the instruction memory. At the start of a cycle, the address is presented to instruction memory; during the cycle the instruction is read out, and at the same time the next PC is computed by incrementing the PC by 4 and choosing between that value and the target of a branch or jump.
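A minimal C sketch of this fetch-stage behavior, assuming a word-addressed instruction array; the names (<code>imem</code>, <code>fetch</code>) are illustrative and not from any particular implementation:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

#define IMEM_WORDS 1024

static uint32_t imem[IMEM_WORDS];   /* instruction memory (SRAM or I-cache), one word per instruction */

/* One fetch cycle: read the instruction at PC and compute the next PC.
 * If a branch/jump was resolved as taken, its target wins; otherwise
 * the PC simply advances by 4 (one 32-bit instruction). */
uint32_t fetch(uint32_t *pc, bool branch_taken, uint32_t branch_target)
{
    uint32_t instruction = imem[(*pc / 4) % IMEM_WORDS];
    *pc = branch_taken ? branch_target : *pc + 4;
    return instruction;
}
</syntaxhighlight>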
===Instruction decode===
Another thing that separates the first RISC machines from earlier CISC machines is that RISC has no [[microcode]]. Once fetched from the instruction cache, the instruction bits are shifted down the pipeline so that simple combinational logic in each pipeline stage can produce the control signals for the datapath directly from the instruction bits. As a result, very little decoding is done in the stage traditionally called the decode stage.
All MIPS, SPARC, and DLX instructions have at most two register inputs. During the decode stage, the indexes of these two registers are identified within the instruction, and the indexes are presented to the register memory, as the address. Thus the two registers named are read from the [[register file]]. In the MIPS design, the register file had 32 entries.
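Because the register fields sit at fixed bit positions, extracting the two indexes is just shifts and masks. A minimal C sketch for the MIPS R-type encoding (the function and variable names are illustrative):

<syntaxhighlight lang="c">
#include <stdint.h>

/* MIPS R-type layout: op[31:26] rs[25:21] rt[20:16] rd[15:11] shamt[10:6] funct[5:0] */
static uint32_t regfile[32];        /* 32-entry register file, as in MIPS */

void decode_and_read(uint32_t instruction, uint32_t *src_a, uint32_t *src_b)
{
    uint32_t rs = (instruction >> 21) & 0x1F;   /* first source register index  */
    uint32_t rt = (instruction >> 16) & 0x1F;   /* second source register index */
    *src_a = regfile[rs];                       /* on real hardware both reads   */
    *src_b = regfile[rt];                       /* happen in parallel (two ports) */
}
</syntaxhighlight>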
===Execute===
The Execute stage is where the actual computation occurs. Typically this stage consists of an ALU and a bit shifter; it may also include a multiple-cycle multiplier and divider.
The ALU is responsible for performing Boolean operations (AND, OR, NOT, NAND, NOR, XOR, XNOR) and for performing integer addition and subtraction.
The bit shifter is responsible for shifts and rotations.
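A sketch of the execute stage as one single-cycle combinational function; the <code>alu_op</code> encoding here is invented for illustration and does not correspond to any real ISA's opcode values:

<syntaxhighlight lang="c">
#include <stdint.h>

enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_XOR, ALU_NOR, ALU_SLL, ALU_SRL };

/* One execute-stage evaluation: ALU plus bit shifter, all single-cycle. */
uint32_t execute(enum alu_op op, uint32_t a, uint32_t b)
{
    switch (op) {
    case ALU_ADD: return a + b;
    case ALU_SUB: return a - b;
    case ALU_AND: return a & b;
    case ALU_OR:  return a | b;
    case ALU_XOR: return a ^ b;
    case ALU_NOR: return ~(a | b);
    case ALU_SLL: return a << (b & 31);  /* shifter: shift amount is 5 bits */
    case ALU_SRL: return a >> (b & 31);
    }
    return 0;
}
</syntaxhighlight>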
===Memory access===
If data memory needs to be accessed, it is done in this stage.
During this stage, single cycle latency instructions simply have their results forwarded to the next stage. This forwarding ensures that both one and two cycle instructions always write their results in the same stage of the pipeline so that just one write port to the register file can be used, and it is always available.
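The pass-through behavior described above can be sketched in C; the names (<code>mem_stage</code>, <code>dmem</code>) are illustrative, and a real data cache is reduced here to a plain array:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

#define DMEM_WORDS 1024
static uint32_t dmem[DMEM_WORDS];   /* data memory (or D-cache) as a word array */

/* Memory-access stage: only loads and stores touch data memory.
 * An ALU result rides through this stage unchanged, so every
 * instruction writes the register file from the same (writeback) stage. */
uint32_t mem_stage(bool is_load, bool is_store, uint32_t addr,
                   uint32_t alu_result, uint32_t store_data)
{
    uint32_t index = (addr / 4) % DMEM_WORDS;
    if (is_store)
        dmem[index] = store_data;               /* store data carried down the pipe */
    return is_load ? dmem[index] : alu_result;  /* forward ALU result unchanged     */
}
</syntaxhighlight>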
===Writeback===
During this stage, both single cycle and two cycle instructions write their results into the register file.
Note that two different stages are accessing the register file at the same time: the decode stage is reading two source registers at the same time that the writeback stage is writing a previous instruction's destination register.
On real silicon, this can be a hazard (see below for more on hazards). That is because one of the source registers being read in decode might be the same as the destination register being written in writeback. When that happens, the same memory cells in the register file are being read and written at the same time. On silicon, many implementations of memory cells will not operate correctly when read and written at the same time.
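A common fix is to make the register file behave as write-before-read within a cycle, internally bypassing the writeback value to a simultaneous read of the same register. A sketch of that policy, assuming MIPS-style register 0 hardwired to zero:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

static uint32_t regfile[32];

/* "Write-before-read" register file: if writeback is writing the very
 * register that decode is reading this cycle, return the new value,
 * emulating an internal bypass so read and write never conflict. */
uint32_t regfile_read(uint32_t index,
                      bool wb_enable, uint32_t wb_index, uint32_t wb_value)
{
    if (wb_enable && wb_index == index && index != 0)  /* $0 is always zero in MIPS */
        return wb_value;
    return regfile[index];
}
</syntaxhighlight>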
The data read from the address <code>adr</code> is not present in the data cache until after the Memory Access stage of the <code>LD</code> instruction. By this time, the <code>AND</code> instruction is already through the ALU. To resolve this would require the data from memory to be passed backwards in time to the input to the ALU. This is not possible. The solution is to delay the <code>AND</code> instruction by one cycle. The data hazard is detected in the decode stage, and the fetch and decode stages are '''stalled''' - they are prevented from flopping their inputs and so stay in the same state for a cycle. The execute, access, and write-back stages downstream see an extra no-operation instruction (NOP) inserted between the <code>LD</code> and <code>AND</code> instructions.
This NOP is termed a pipeline ''[[bubble (computing)|bubble]]'' since it floats in the pipeline, like an air bubble in a water pipe, occupying resources but not producing useful results. The hardware to detect a data hazard and stall the pipeline until the hazard is cleared is called a '''pipeline interlock'''.
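The interlock itself is a small comparison performed in the decode stage. A sketch in C, with hypothetical structures standing in for the real pipeline latches:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

struct decode_regs  { uint32_t src1, src2; };         /* source indexes read in decode  */
struct execute_regs { bool is_load; uint32_t dest; }; /* the instruction now in execute */

/* Load-use hazard check: true means "stall fetch/decode for one cycle
 * and send a bubble (NOP) down to execute instead". */
bool must_stall(const struct decode_regs *d, const struct execute_regs *e)
{
    return e->is_load && (e->dest == d->src1 || e->dest == d->src2);
}
</syntaxhighlight>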
* Predict Not Taken: Always fetch the instruction after the branch from the instruction cache, but only execute it if the branch is not taken. If the branch is not taken, the pipeline stays full. If the branch is taken, the instruction is flushed (marked as if it were a NOP), and one cycle's opportunity to finish an instruction is lost.
* Branch Likely: Always fetch the instruction after the branch from the instruction cache, but only execute it if the branch was taken. The compiler can always fill the branch delay slot on such a branch, and since branches are more often taken than not, such branches have a smaller IPC penalty than the previous kind.
* [[Branch delay slot|Branch Delay Slot]]: Always fetch the instruction after the branch from the instruction cache, and always execute it, even if the branch is taken. The branch takes effect only after this ''delay slot'', so no fetched instruction is ever discarded, but the compiler must fill the slot with a useful instruction (or a NOP).
* [[Branch Prediction]]: In parallel with fetching each instruction, guess if the instruction is a branch or jump, and if so, guess the target. On the cycle after a branch or jump, fetch the instruction at the guessed target. When the guess is wrong, flush the incorrectly fetched target.
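As one concrete illustration of the last scheme, a direct-mapped branch target buffer can be consulted in parallel with each fetch. The sketch below is a generic example in C, not any specific machine's predictor:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

#define BTB_ENTRIES 64

/* Direct-mapped branch target buffer: indexed by low PC bits,
 * remembers the last taken target seen for a fetch address. */
struct btb_entry { uint32_t tag; uint32_t target; bool valid; };
static struct btb_entry btb[BTB_ENTRIES];

/* Predict during fetch: on a BTB hit, fetch from the stored target
 * next cycle; otherwise fall through to PC + 4. */
uint32_t predict_next_pc(uint32_t pc)
{
    struct btb_entry *e = &btb[(pc / 4) % BTB_ENTRIES];
    return (e->valid && e->tag == pc) ? e->target : pc + 4;
}

/* Update once the branch resolves; a wrong guess also flushes the
 * incorrectly fetched instructions (not shown here). */
void btb_update(uint32_t pc, bool taken, uint32_t target)
{
    struct btb_entry *e = &btb[(pc / 4) % BTB_ENTRIES];
    if (taken) { e->tag = pc; e->target = target; e->valid = true; }
    else if (e->tag == pc) { e->valid = false; }
}
</syntaxhighlight>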
Delayed branches were controversial, first, because their semantics are complicated. A delayed branch specifies that the jump to a new ___location happens after the next instruction; that next instruction is the one unavoidably loaded by the instruction cache after the branch.
Delayed branches have been criticized{{By whom|date=May 2012}} as a poor short-term choice in ISA design:
* Compilers typically have some difficulty finding logically independent instructions to place after the branch, so they must often fill the delay slot with a NOP.
* [[Superscalar processor]]s, which fetch multiple instructions per cycle and must have some form of branch prediction, do not benefit from delayed branches.
==Exceptions==
Suppose a 32-bit RISC processes an ADD instruction that adds two large numbers, and the result does not fit in 32 bits.
The simplest solution, provided by most architectures, is wrapping arithmetic. Numbers greater than the maximum possible encoded value have their most significant bits chopped off until they fit. In the usual integer number system, 3000000000+3000000000=6000000000. With unsigned 32-bit wrapping arithmetic, 3000000000+3000000000=1705032704 (6000000000 mod 2^32). This may not seem terribly useful. The largest benefit of wrapping arithmetic is that every operation has a well-defined result.
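This is directly observable in C, where unsigned arithmetic is defined to wrap modulo 2<sup>N</sup>:

<syntaxhighlight lang="c">
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 3000000000u;
    uint32_t sum = a + a;            /* wraps: 6000000000 mod 2^32 */
    printf("%" PRIu32 "\n", sum);    /* prints 1705032704 */
    return 0;
}
</syntaxhighlight>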
But the programmer, especially if programming in a language supporting [[large numbers|large integers]] (e.g. [[Lisp (programming language)|Lisp]] or [[Scheme (programming language)|Scheme]]), may not want wrapping arithmetic. Some architectures (e.g. MIPS) define special addition operations that branch to special locations on overflow, rather than wrapping the result. Software at the target ___location is responsible for fixing the problem. This special branch is called an exception. Exceptions differ from regular branches in that the target address is not specified by the instruction itself, and the branch decision is dependent on the outcome of the instruction.
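The overflow condition such a trapping add must detect can be expressed in software: signed overflow occurs exactly when both operands have the same sign and the result's sign differs. A sketch in C (the <code>overflow_exception</code> handler is a placeholder for the hardware trap):

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

/* Placeholder for the "branch to a special ___location" described above:
 * on real MIPS, the trapping `add` raises an overflow exception instead. */
static void overflow_exception(void)
{
    fprintf(stderr, "integer overflow\n");
    exit(EXIT_FAILURE);
}

/* Signed add that traps on overflow, in the spirit of MIPS `add`
 * (as opposed to the wrapping `addu`). Overflow occurred iff both
 * inputs share a sign bit that the result lacks. */
int32_t trapping_add(int32_t a, int32_t b)
{
    uint32_t ua = (uint32_t)a, ub = (uint32_t)b;
    uint32_t sum = ua + ub;                      /* well-defined wrap on unsigned */
    if (~(ua ^ ub) & (ua ^ sum) & 0x80000000u)   /* same input signs, different result sign */
        overflow_exception();
    return (int32_t)sum;
}
</syntaxhighlight>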
The most common kind of software-visible exception on one of the classic RISC machines is a [[translation lookaside buffer|TLB]] miss.
Exceptions are different from branches and jumps, because those other control flow changes are resolved in the decode stage. Exceptions are resolved in the writeback stage. When an exception is detected, the following instructions (earlier in the pipeline) are marked as invalid, and as they flow to the end of the pipe their results are discarded.
To make it easy (and fast) for the software to fix the problem and restart the program, the CPU must take a precise exception. A precise exception means that all instructions up to the excepting instruction have been executed, and the excepting instruction and everything afterwards have not been executed.
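A toy illustration of that rule, using a hypothetical array of in-flight pipeline slots ordered oldest to youngest:

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stddef.h>

struct pipe_slot { bool valid; bool excepted; };

/* Writeback-time squash for a precise exception: the excepting
 * instruction at slot `i` and every younger in-flight instruction
 * are invalidated, so their results are discarded and architectural
 * state reflects only the instructions before the exception. */
void take_precise_exception(struct pipe_slot *pipe, size_t n, size_t i)
{
    for (size_t j = i; j < n; j++)   /* slot i itself and everything after it */
        pipe[j].valid = false;
    pipe[i].excepted = true;
}
</syntaxhighlight>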
Another strategy to handle suspend/resume is to reuse the exception logic. The machine takes an exception on the offending instruction, and all further instructions are invalidated. When the cache has been filled with the necessary data, the instruction that caused the cache miss restarts. To expedite data cache miss handling, the instruction can be restarted so that its access cycle happens one cycle after the data cache is filled.
== See also ==
* [[Iron law of processor performance]]
== References ==
{{Reflist}}