Classic RISC pipeline
{{Short description|Instruction pipeline}}
{{Use American English|date = March 2019}}
{{More footnotes|date=December 2012}}
 
In the [[history of computing hardware|history of computer hardware]], some early [[reduced instruction set computer]] [[central processing unit]]s (RISC CPUs) used a very similar architectural solution, now called a '''classic RISC pipeline'''. Those CPUs were: [[MIPS architecture|MIPS]], [[SPARC]], Motorola [[Motorola 88000|88000]], and later the notional CPU [[DLX]] invented for education.
The instructions reside in memory that takes one cycle to read. This memory can be dedicated SRAM or an instruction [[Cache (computing)|cache]]. The term "latency", used often in computer science, means the time from when an operation starts until it completes. Instruction fetch thus has a latency of one [[clock cycle]] (if using single-cycle SRAM or if the instruction was in the cache). During the [[Instruction fetch|Instruction Fetch]] stage, a 32-bit instruction is fetched from the instruction memory.
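As a rough illustration, instruction memory with single-cycle latency can be modeled as an array that returns one 32-bit word per simulated cycle. This is a minimal sketch in C; the memory size and names are illustrative, not taken from any particular design:

<syntaxhighlight lang="c">
#include <stdint.h>

#define IMEM_WORDS 1024              /* illustrative memory size */

/* Instruction memory modeled as an array of 32-bit words; one read
   completes per clock cycle (single-cycle latency). */
static uint32_t imem[IMEM_WORDS];

/* Fetch the 32-bit instruction at byte address pc. Classic RISC
   instructions are 4 bytes and word-aligned, hence pc >> 2. */
uint32_t instruction_fetch(uint32_t pc) {
    return imem[(pc >> 2) % IMEM_WORDS];
}
</syntaxhighlight>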
 
The [[program counter]] (PC) is a register that holds the address presented to the instruction memory. The address is presented to instruction memory at the start of a cycle. Then, during the cycle, the instruction is read out of instruction memory and, at the same time, a calculation is done to determine the next PC. The next PC is calculated by incrementing the PC by 4, and by choosing whether to take that as the next PC or to take the result of a branch/jump calculation as the next PC. Note that in classic RISC, all instructions have the same length. (This is one thing that separates RISC from CISC.<ref>{{cite web |first=David |last=Patterson |title=RISC I: A Reduced Instruction Set VLSI Computer |series=Isca '81 |date=12 May 1981 |pages=443–457 |url=https://dl.acm.org/doi/10.5555/800052.801895}}</ref>) In the original RISC designs, the size of an instruction is 4 bytes, so 4 is always added to the instruction address; PC + 4 is not used for the case of a taken branch, jump, or exception (see '''delayed branches''', below). (Note that some modern machines use more complicated algorithms ([[branch prediction]] and [[branch target predictor|branch target prediction]]) to guess the next instruction address.)
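The next-PC selection described above amounts to a two-way choice between the incremented PC and a computed branch/jump target. A minimal C sketch, with illustrative names and ignoring delayed branches for the moment:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

/* Next-PC mux: the default is PC + 4 (every instruction is 4 bytes);
   a taken branch or jump overrides it with the computed target. */
uint32_t next_pc(uint32_t pc, bool take_branch, uint32_t branch_target) {
    return take_branch ? branch_target : pc + 4;
}
</syntaxhighlight>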
 
===Instruction decode===
The Execute stage is where the actual computation occurs. Typically this stage consists of an ALU and a bit shifter. It may also include a multicycle multiplier and divider.
 
The ALU is responsible for performing Boolean operations (and, or, not, nand, nor, xor, xnor) and for performing integer addition and subtraction. Besides the result, the ALU typically provides status bits such as whether or not the result was 0, or whether an overflow occurred.
 
The bit shifter is responsible for shifts and rotations.
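The following C sketch combines the ALU operations and status bits described above with a shifter. The operation encoding, struct layout, and names are hypothetical, and real hardware computes all of this combinationally rather than with a switch statement:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical operation encoding for a sketch of the execute stage. */
typedef enum { ALU_AND, ALU_OR, ALU_XOR, ALU_NOR, ALU_ADD, ALU_SUB,
               ALU_SLL, ALU_SRL, ALU_ROR } alu_op;

typedef struct {
    uint32_t result;
    bool zero;      /* status bit: result was 0                  */
    bool overflow;  /* status bit: signed add/sub overflowed     */
} alu_out;

alu_out alu(alu_op op, uint32_t a, uint32_t b) {
    alu_out o = {0, false, false};
    switch (op) {
    case ALU_AND: o.result = a & b;    break;
    case ALU_OR:  o.result = a | b;    break;
    case ALU_XOR: o.result = a ^ b;    break;
    case ALU_NOR: o.result = ~(a | b); break;
    case ALU_ADD:
        o.result = a + b;
        /* overflow iff both operands share a sign that the sum lacks */
        o.overflow = (~(a ^ b) & (a ^ o.result)) >> 31;
        break;
    case ALU_SUB:
        o.result = a - b;
        o.overflow = ((a ^ b) & (a ^ o.result)) >> 31;
        break;
    case ALU_SLL: o.result = a << (b & 31); break;   /* shift left  */
    case ALU_SRL: o.result = a >> (b & 31); break;   /* shift right */
    case ALU_ROR: /* rotate right; (32 - b) & 31 handles b == 0 */
        o.result = (a >> (b & 31)) | (a << ((32 - b) & 31));
        break;
    }
    o.zero = (o.result == 0);
    return o;
}
</syntaxhighlight>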
* Register-Register Operation (Single-cycle latency): Add, subtract, compare, and logical operations. During the execute stage, the two arguments were fed to a simple ALU, which generated the result by the end of the execute stage.
* Memory Reference (Two-cycle latency): All loads from memory. During the execute stage, the ALU added the two arguments (a register and a constant offset) to produce a virtual address by the end of the cycle.
* [[Cycles per instruction|Multi-cycle Instructions]] (Many-cycle latency): Integer multiply and divide and all [[floating-point]] operations. During the execute stage, the operands to these operations were fed to the multi-cycle multiply/divide unit. The rest of the pipeline was free to continue execution while the multiply/divide unit did its work. To avoid complicating the writeback stage and issue logic, multicycle instructions wrote their results to a separate set of registers (see the sketch after this list).
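As a sketch of the classification above, the execute stage can be thought of as dispatching each opcode into one of three latency classes. The opcode names here are hypothetical, though the dedicated result registers for multicycle operations correspond to the HI/LO registers on real MIPS hardware:

<syntaxhighlight lang="c">
/* Hypothetical opcodes, grouped into the three execute-stage
   latency classes described above. */
typedef enum { OP_ADD, OP_SUB, OP_AND, OP_LW, OP_LB,
               OP_MULT, OP_DIV } opcode;

typedef enum {
    LAT_SINGLE, /* register-register: result ready after execute      */
    LAT_TWO,    /* loads: address in execute, data one cycle later    */
    LAT_MULTI   /* multiply/divide: handed to a separate unit; results
                   land in dedicated registers (HI/LO on MIPS)        */
} latency_class;

latency_class classify(opcode op) {
    switch (op) {
    case OP_LW:
    case OP_LB:   return LAT_TWO;
    case OP_MULT:
    case OP_DIV:  return LAT_MULTI;
    default:      return LAT_SINGLE;
    }
}
</syntaxhighlight>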
 
===Memory access===
===Writeback===
 
During this stage, both single-cycle and two-cycle instructions write their results into the register file.
Note that two different stages are accessing the register file at the same time: the decode stage is reading two source registers at the same time that the writeback stage is writing a previous instruction's destination register.
On real silicon, this can be a hazard (see below for more on hazards), because one of the source registers being read in decode might be the same as the destination register being written in writeback. When that happens, the same memory cells in the register file are being both read and written at the same time. On silicon, many implementations of memory cells will not operate correctly when read and written at the same time.
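One common way to resolve this hazard, sketched below in C with illustrative names, is to order the accesses within the cycle: the writeback stage writes in the first half of the clock cycle and the decode stage reads in the second half, so a read of the register being written observes the new value:

<syntaxhighlight lang="c">
#include <stdint.h>

#define NREGS 32

/* Register file model where the writeback-stage write happens before
   the decode-stage reads within one cycle ("write first half, read
   second half"), making same-register read/write in one cycle safe. */
typedef struct {
    uint32_t regs[NREGS];
} regfile;

void regfile_cycle(regfile *rf,
                   unsigned wb_dest, uint32_t wb_value, /* writeback */
                   unsigned rs, unsigned rt,            /* decode    */
                   uint32_t *src1, uint32_t *src2) {
    /* First half of the cycle: writeback writes its destination.
       Register 0 is hardwired to zero on MIPS-like machines. */
    if (wb_dest != 0)
        rf->regs[wb_dest] = wb_value;

    /* Second half of the cycle: decode reads its two sources and
       observes the value just written, avoiding the hazard. */
    *src1 = rf->regs[rs];
    *src2 = rf->regs[rt];
}
</syntaxhighlight>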
 
* Predict Not Taken: Always fetch the instruction after the branch from the instruction cache, but only execute it if the branch is not taken. If the branch is not taken, the pipeline stays full. If the branch is taken, the instruction is flushed (marked as if it were a NOP), and one cycle's opportunity to finish an instruction is lost.
* Branch Likely: Always fetch the instruction after the branch from the instruction cache, but only execute it if the branch was taken. The compiler can always fill the branch delay slot on such a branch, and since branches are more often taken than not, such branches have a smaller IPC penalty than the previous kind.
* [[Branch delay slot|Branch Delay Slot]]: Depending on the design of the delayed branch and the branch condition, the instruction immediately following the branch is executed even if the branch is taken (see the sketch after this list). Instead of taking an IPC penalty for some fraction of branches either taken (perhaps 60%) or not taken (perhaps 40%), branch delay slots take an IPC penalty only for those branches whose delay slots the compiler could not fill with useful work. The SPARC, MIPS, and MC88K designers included a branch delay slot in their ISAs.
* [[Branch Prediction]]: In parallel with fetching each instruction, guess if the instruction is a branch or jump, and if so, guess the target. On the cycle after a branch or jump, fetch the instruction at the guessed target. When the guess is wrong, flush the incorrectly fetched target.
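To make the delay-slot behavior concrete, the following C sketch simulates instruction sequencing with a single branch delay slot; the function and its callback are purely illustrative:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

/* Sketch of program-counter sequencing with one branch delay slot:
   the instruction at pc + 4 is executed regardless of the branch
   outcome, and only then does control transfer. Returns the next PC. */
uint32_t step_with_delay_slot(uint32_t pc, bool is_branch,
                              bool taken, uint32_t target,
                              void (*execute)(uint32_t addr)) {
    execute(pc);             /* the branch (or ordinary) instruction */
    if (is_branch) {
        execute(pc + 4);     /* delay slot: always executed          */
        return taken ? target : pc + 8;
    }
    return pc + 4;
}
</syntaxhighlight>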
 
But the programmer, especially if programming in a language supporting [[large numbers|large integers]] (e.g., [[Lisp (programming language)|Lisp]] or [[Scheme (programming language)|Scheme]]), may not want wrapping arithmetic. Some architectures (e.g., MIPS) define special addition operations that branch to special locations on overflow, rather than wrapping the result. Software at the target ___location is responsible for fixing the problem. This special branch is called an exception. Exceptions differ from regular branches in that the target address is not specified by the instruction itself, and the branch decision is dependent on the outcome of the instruction.
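The difference between wrapping and trapping addition can be sketched in C. MIPS makes this distinction with its <code>addu</code> (wrapping) and <code>add</code> (trapping on signed overflow) instructions, though the exception hook below is a hypothetical stand-in for the hardware's behavior:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for the hardware's exception entry: a real
   CPU redirects fetch to a fixed handler address and records the
   exception ___location and cause. */
static void raise_overflow_exception(void) {
    fprintf(stderr, "integer overflow exception\n");
    exit(1);
}

/* Wrapping add (like MIPS addu): the result is taken modulo 2^32. */
uint32_t add_wrapping(uint32_t a, uint32_t b) {
    return a + b;
}

/* Trapping add (like MIPS add): signed overflow raises an exception
   instead of returning a wrapped result. */
uint32_t add_trapping(uint32_t a, uint32_t b) {
    uint32_t sum = a + b;
    /* overflow iff both operands share a sign that the sum lacks */
    if ((~(a ^ b) & (a ^ sum)) >> 31)
        raise_overflow_exception();
    return sum;
}
</syntaxhighlight>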
 
The most common kind of software-visible exception on one of the classic RISC machines is a [[Translation lookaside buffer#TLB-miss handling|''TLB miss'']].
 
Exceptions are different from branches and jumps, because those other control flow changes are resolved in the decode stage. Exceptions are resolved in the writeback stage. When an exception is detected, the following instructions (earlier in the pipeline) are marked as invalid, and as they flow to the end of the pipe their results are discarded. The program counter is set to the address of a special exception handler, and special registers are written with the exception ___location and cause.
 
To make it easy (and fast) for the software to fix the problem and restart the program, the CPU must take a precise exception. A precise exception means that all instructions up to the excepting instruction have been executed, and the excepting instruction and everything afterwards have not been executed.
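A minimal sketch of the squashing step, assuming a five-stage pipeline array and illustrative names: when the instruction in writeback faults, every younger instruction still in the pipeline is invalidated, the faulting address is saved, and fetch is redirected to the handler:

<syntaxhighlight lang="c">
#include <stdbool.h>

#define STAGES 5  /* IF, ID, EX, MEM, WB */

/* Per-stage bookkeeping for a sketch of precise exceptions. */
typedef struct {
    bool     valid;
    unsigned pc;      /* address of the instruction in this stage */
} pipe_stage;

void take_precise_exception(pipe_stage pipe[STAGES],
                            unsigned *epc, unsigned handler_pc,
                            unsigned *next_pc) {
    *epc = pipe[STAGES - 1].pc;    /* record the faulting ___location */
    for (int i = 0; i < STAGES - 1; i++)
        pipe[i].valid = false;     /* squash younger instructions  */
    *next_pc = handler_pc;         /* fetch resumes at the handler */
}
</syntaxhighlight>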
 
== References ==
{{Reflist}}
{{refbegin}}
* {{cite book |first1=John L. |last1=Hennessy |first2=David A. |last2=Patterson |title=Computer Architecture, A Quantitative Approach |publisher=Morgan Kaufmann |edition=5th |year=2011 |isbn=978-0123838728 |url=https://books.google.com/books?id=v3-1hVwHnHwC}}
{{refend}}