Instruction set architecture: Difference between revisions

{{Machine code}}
 
In [[computer science]], an '''instruction set architecture''' ('''ISA''') is an [[Conceptual model|abstract model]] that generally defines how [[software]] controls the [[Central processing unit|CPU]] in a computer or a family of computers.<ref>{{Cite web |title=GLOSSARY: Instruction Set Architecture (ISA) |url=https://www.arm.com/glossary/isa |archive-url=https://web.archive.org/web/20231111175250/https://www.arm.com/glossary/isa |archive-date=2023-11-11 |access-date=2024-02-03 |website=arm.com}}</ref> A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an ''[[implementation]]'' of that ISA.
 
In general, an ISA defines the supported [[Machine code|instructions]], [[data type]]s, [[Register (computer)|registers]], the hardware support for managing [[Computer memory|main memory]],{{Clarify|date=April 2024|reason=See "What does "Hardware support for managing main memory" refer to?" on the talk page.}} fundamental features (such as [[memory consistency]], [[addressing mode]]s, and [[virtual memory]]), and the [[input/output]] model of implementations of the ISA.
 
An ISA specifies the behavior of [[machine code]] running on implementations of that ISA in a fashion that does not depend on the characteristics of that implementation, providing [[binary compatibility]] between implementations. This enables multiple implementations of an ISA that differ in characteristics such as [[Computer performance|performance]], physical size, and monetary cost (among other things), but that are capable of running the same machine code, so that a lower-performance, lower-cost machine can be replaced with a higher-cost, higher-performance machine without having to replace software. It also enables the evolution of the [[microarchitecture]]s of the implementations of that ISA, so that a newer, higher-performance implementation of an ISA can run software that runs on previous generations of implementations.
 
If an [[operating system]] maintains a standard and compatible [[application binary interface]] (ABI) for a particular ISA, machine code will run on future implementations of that ISA and operating system. However, if an ISA supports running multiple operating systems, it does not guarantee that machine code for one operating system will run on another operating system, unless the first operating system supports running machine code built for the other operating system.
 
An ISA can be extended by adding instructions or other capabilities, or adding support for larger addresses and data values; an implementation of the extended ISA will still be able to execute [[machine code]] for versions of the ISA without those extensions. Machine code using those extensions will only run on implementations that support those extensions.
 
The binary compatibility that ISAs provide makes them one of the most fundamental abstractions in [[computing]].
 
==Overview==
An instruction set architecture is distinguished from a [[microarchitecture]], which is the set of [[processor design]] techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, the [[Intel]] [[P5 (microarchitecture)|Pentium]] and the [[Advanced Micro Devices|AMD]] [[Athlon]] implement nearly identical versions of the [[x86 instruction set]], but they have radically different internal designs.
 
The concept of an ''architecture'', distinct from the design of a specific machine, was developed by [[Fred Brooks]] at IBM during the design phase of [[System/360]]. {{quote|Prior to NPL [System/360], the company's computer designers had been free to honor cost objectives not only by selecting technologies but also by fashioning functional and architectural refinements. The SPREAD compatibility objective, in contrast, postulated a single architecture for a series of five processors spanning a wide range of cost and performance. None of the five engineering design teams could count on being able to bring about adjustments in architectural specifications as a way of easing difficulties in achieving cost and performance objectives.<ref name=Pugh>{{cite book|last1=Pugh|first1=Emerson W.|last2=Johnson|first2=Lyle R.|last3=Palmer|first3=John H.|title=IBM's 360 and Early 370 Systems|url=https://archive.org/details/ibms360early370s0000pugh|url-access=registration|year=1991|publisher=MIT Press|isbn=0-262-16123-0}}</ref>{{rp|p.137}}}}
 
Some [[virtual machine]]s that support [[bytecode]] as their ISA, such as [[Smalltalk]], the [[Java virtual machine]], and [[Microsoft]]'s [[Common Language Runtime]], implement this by translating the bytecode for commonly used code paths into native machine code, and executing less frequently used code paths by interpretation (see [[just-in-time compilation]]). [[Transmeta]] implemented the x86 instruction set atop [[very long instruction word]] (VLIW) processors in this fashion.
 
==Classification of ISAs==
An ISA may be classified in a number of different ways. A common classification is by architectural ''complexity''. A [[complex instruction set computer]] (CISC) has many specialized instructions, some of which may only be rarely used in practical programs. A [[reduced instruction set computer]] (RISC) simplifies the processor by efficiently implementing only the instructions that are frequently used in programs; less common operations are implemented as subroutines, whose additional execution time is offset by their infrequent use.<ref>{{cite web |last1=Chen |first1=Crystal |last2=Novick |first2=Greg |last3=Shimano |first3=Kirk |date=December 16, 2006 |title=RISC Architecture: RISC vs. CISC |url=http://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/ |url-status=dead |archive-url=https://web.archive.org/web/20150221071744/http://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/ |archive-date=February 21, 2015 |access-date=February 21, 2015 |website=cs.stanford.edu}}</ref>
 
Other types include [[very long instruction word]] (VLIW) architectures, and the closely related {{citation needed span|''long instruction word'' (LIW)|date=March 2023}} and ''[[explicitly parallel instruction computing]]'' (EPIC) architectures. These architectures seek to exploit [[instruction-level parallelism]] with less hardware than RISC and CISC by making the [[compiler]] responsible for instruction issue and scheduling.<ref>{{cite journal|title=EPIC: Explicitly Parallel Instruction Computing|last1=Schlansker|first1=Michael S.|last2=Rau|first2=B. Ramakrishna|journal=[[Computer (magazine)|Computer]]|date=February 2000|volume=33|issue=2|pages=37–45 |doi=10.1109/2.820037}}</ref>
 
Architectures with even less complexity have been studied, such as the [[minimal instruction set computer]] (MISC) and [[one-instruction set computer]] (OISC). These are theoretically important types, but have not been commercialized.<ref>{{cite journal|url=https://www.researchgate.net/publication/267239549|title=On the Classification of Computer Architecture|last1=Shaout|first1=Adnan|last2=Eldos|first2=Taisir|journal=International Journal of Science and Technology|date=Summer 2003|access-date=March 2, 2023|volume=14|page=3}}</ref><ref>{{cite book|title=Computer Architecture: A Minimalist Perspective|last1=Gilreath|first1=William F.|last2=Laplante|first2=Phillip A.|publisher=[[Springer Science+Business Media]]|date=December 6, 2012|isbn=978-1-4615-0237-1}}</ref>
*moving large blocks of memory (e.g. [[string copy]] or [[DMA transfer]])
*complicated integer and [[floating-point arithmetic]] (e.g. [[square root]], or [[transcendental function]]s such as [[logarithm]], [[sine]], [[cosine]], etc.)
*''{{vanchor|[[Single instruction, multiple data|SIMD]] instruction|SIMD instruction}}s'', a single instruction performing an operation on many homogeneous values in parallel, possibly in dedicated [[SIMD register]]s
*performing an atomic [[test-and-set]] instruction or other [[read–modify–write]] [[atomic instruction]]
*instructions that perform [[arithmetic logic unit|ALU]] operations with an operand from memory rather than a register
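The SIMD idea above can be illustrated with a short sketch (hypothetical lane widths and names, not tied to any real ISA): one "SIMD add" applies the same operation to every lane of two fixed-width vector registers. The scalar loop here stands in for what hardware does in a single instruction:

```python
LANES = 4  # e.g. four 32-bit lanes in a 128-bit SIMD register

def simd_add(reg_a, reg_b, bits=32):
    """Lane-wise modular addition, written as a scalar loop for clarity.
    Real SIMD hardware performs all lanes in one instruction."""
    mask = (1 << bits) - 1  # each lane wraps at its fixed width
    return [(a + b) & mask for a, b in zip(reg_a, reg_b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # → [11, 22, 33, 44]
```

Note that each lane overflows independently: adding 1 to a lane holding 2<sup>32</sup>&minus;1 wraps that lane to 0 without disturbing its neighbors.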
**RISC — Requiring explicit memory loads, the instructions would be: <code>load a,reg1</code>; <code>load b,reg2</code>; <code>add reg1,reg2</code>; <code>store reg2,c</code>.
***<code>C = A+B</code> needs ''four instructions''.
*3-operand, allowing better reuse of data:<ref name="Cocke ">
{{Cite journal |last1=Cocke |first1=John |last2=Markstein |first2=Victoria |date=January 1990 |title=The evolution of RISC technology at IBM |url=https://www.cis.upenn.edu/~milom/cis501-Fall11/papers/cocke-RISC.pdf |journal=IBM Journal of Research and Development |volume=34 |issue=1 |pages=4–11 |doi=10.1147/rd.341.0004 |access-date=2022-10-05}}
</ref>
**CISC — It becomes either a single instruction: <code>add a,b,c</code>
Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named ''Complex Instruction Set Computers'', [[complex instruction set computer|CISC]]). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using [[addressing mode]]s such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment, etc. Software-implemented instruction sets may have even more complex and powerful instructions.
 
''Reduced instruction-set computers'', [[reduced instruction set computer|RISC]], were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory ___location into a register. A RISC instruction set normally has a fixed [[#Instruction length|instruction length]], whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less efficient use of bus bandwidth and cache memories.
 
Certain embedded RISC ISAs like [[ARM architecture#Thumb|Thumb]] and [[AVR32]] typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.<ref name=weaver>{{cite conference|last1=Weaver|first1=Vincent M.|last2=McKee|first2=Sally A.|title=Code density concerns for new architectures|year=2009|conference=IEEE International Conference on Computer Design|doi=10.1109/ICCD.2009.5413117|citeseerx=10.1.1.398.1967}}</ref>
 
[[Minimal instruction set computer]]s (MISC) are commonly a form of [[stack machine]], where there are few separate instructions (8–32), so that multiple instructions can fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in a [[field-programmable gate array]] (FPGA) or in a [[multi-core]] form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.<ref>{{Cite web|title=RISC vs. CISC|url=https://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/|access-date=2021-12-18|website=cs.stanford.edu}}</ref>{{Failed verification|reason=That discusses RISC and CISC, but not MISC.|date=December 2021}}
<!-- Need examples here -->
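A minimal sketch of the stack-machine style typical of MISC designs (hypothetical opcodes, not any real instruction set): instructions take no register operands because they operate implicitly on the top of a stack, which is what lets several of them pack into one machine word.

```python
def run(program):
    """Interpret a list of zero-address stack-machine instructions.
    Each instruction is a tuple: ("push", n), ("add",), or ("mul",)."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# (2 + 3) * 4: operands live on the stack, so "add" and "mul" name none.
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))
```

Note how every arithmetic instruction encodes only its opcode; the operand traffic that a register machine spells out per-instruction is implicit here.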
 
 
==Design==
The design of instruction sets is a complex issue. There have been two main stages in the history of the microprocessor. The first was the CISC (complex instruction set computer), which had many different instructions. In the 1970s, however, companies such as IBM did research and found that many instructions in the set could be eliminated. The result was the RISC (reduced instruction set computer), an architecture that uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption. However, a more complex set may optimize common operations, improve memory and [[CPU cache|cache]] efficiency, or simplify programming.
 
Some instruction set designers reserve one or more opcodes for some kind of [[system call]] or [[software interrupt]]. For example, the [[MOS Technology 6502]] uses 00<sub>H</sub>, the [[Zilog Z80]] uses the eight codes C7, CF, D7, DF, E7, EF, F7, FF<sub>H</sub>,<ref>{{cite web|last=Ganssle|first=Jack|url=https://www.embedded.com/electronics-blogs/break-points/4023293/Proactive-Debugging|title=Proactive Debugging|date=February 26, 2001|website=embedded.com}}</ref> while the [[Motorola 68000]] uses codes in the range A000..AFFF<sub>H</sub>. <!-- Trivial parts catalog notes, while recondite terms like CISC and RISC are completely unsupported by any textbook. -->
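The eight Z80 codes listed above are not arbitrary: they follow the regular pattern 0xC7 + 8·''n'' (the RST instructions, each transferring control to address 8·''n''). A short check of that pattern, using only the values stated above:

```python
# The eight reserved Z80 opcodes form an arithmetic sequence:
# 0xC7, 0xCF, ..., 0xFF, i.e. 0xC7 + 8*n for n = 0..7.
rst_opcodes = [0xC7 + 8 * n for n in range(8)]
print([f"{op:02X}" for op in rst_opcodes])
# → ['C7', 'CF', 'D7', 'DF', 'E7', 'EF', 'F7', 'FF']
```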
 
# Some computer designs "hardwire" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
# Other designs employ [[microcode]] routines or tables (or both) to do this, using [[read-only memory|ROM]]s or writable [[random-access memory|RAM]]s ([[writable control store]]), [[programmable logic array|PLA]]s, or both.
 
Some microcoded CPU designs with a writable control store use it to allow the instruction set to be changed (for example, the [[Rekursiv]] processor and the [[Imsys]] [[Cjip]]).<ref>{{cite web|url=http://cpushack.net/CPU/cpu7.html |title=Great Microprocessors of the Past and Present (V 13.4.0) |website=cpushack.net |access-date=2014-07-25}}</ref>
*[[Micro-operation]]
*[[No instruction set computing]]
*[[OVPsim]], a full-system simulator providing the ability to create/model/emulate any instruction set using C and standard APIs
*[[Processor design]]
*[[Simulation]]
*{{Commonscatinline|Instruction set architectures}}
*[http://www.textfiles.com/programming/CARDS/ Programming Textfiles: Bowen's Instruction Summary Cards]
*[https://people.computing.clemson.edu/~mark/hist.html Mark Smotherman's Historical Computer Designs Page]
 
{{CPU technologies}}