{{short description|Task of creating a processor}}
'''Processor design''' is a subfield of [[computer science]] and [[computer engineering]] that deals with creating a [[processor (computing)|processor]], a key component of [[computer hardware]].

The design process involves choosing an [[instruction set]] and a certain execution paradigm (e.g. [[Very long instruction word|VLIW]] or [[Reduced instruction set computing|RISC]]) and results in a [[microarchitecture]], which might be described in e.g. [[VHDL]] or [[Verilog]]. For [[microprocessor]] design, this description is then manufactured employing some of the various [[semiconductor device fabrication]] processes, resulting in a [[Die (integrated circuit)|die]] which is bonded onto a [[chip carrier]]. This chip carrier is then soldered onto, or inserted into a [[CPU socket|socket]] on, a [[printed circuit board]] (PCB).

The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values using [[Processor register|registers]], change or retrieve values in read/write memory, perform relational tests between data values, and control program flow.

Processor designs are often tested and validated on one or several FPGAs before sending the design of the processor to a foundry for [[semiconductor fabrication]].<ref>{{cite web|url=https://www.anandtech.com/show/14798/xilinx-announces-world-largest-fpga-virtex-ultrascale-vu19p-with-9m-cells|archive-url=https://web.archive.org/web/20190827160514/https://www.anandtech.com/show/14798/xilinx-announces-world-largest-fpga-virtex-ultrascale-vu19p-with-9m-cells|url-status=dead|archive-date=August 27, 2019|title=Xilinx Announces World Largest FPGA: Virtex Ultrascale+ VU19P with 9m Cells|first=Ian|last=Cutress|date=August 27, 2019|website=[[AnandTech]]}}</ref>
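
As a concrete illustration of this mode of operation, the following minimal C sketch models the fetch–decode–execute cycle on an invented toy instruction set (the opcodes, encoding and program are made up for this example and do not correspond to any real architecture):

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

/* Toy machine: one accumulator, 4-bit opcodes, 4-bit immediates.
   The encoding is invented purely for this sketch. */
enum { OP_HALT = 0x0, OP_LOADI = 0x1, OP_ADDI = 0x2 };

int main(void) {
    uint8_t mem[] = { 0x13, 0x24, 0x00 };   /* LOADI 3; ADDI 4; HALT */
    uint8_t pc = 0, acc = 0;
    int running = 1;

    while (running) {
        uint8_t inst = mem[pc++];           /* 1. fetch              */
        uint8_t op   = inst >> 4;           /* 2. decode opcode ...  */
        uint8_t imm  = inst & 0x0F;         /*    ... and operand    */
        switch (op) {                       /* 3. execute            */
        case OP_LOADI: acc = imm;    break;
        case OP_ADDI:  acc += imm;   break;
        case OP_HALT:  running = 0;  break;
        }
    }
    printf("acc = %d\n", acc);              /* 4. result: acc = 7    */
    return 0;
}
</syntaxhighlight>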
== Details ==

=== Basics ===
CPU design is divided into multiple components. Information is transferred through [[datapath]]s (such as [[Arithmetic logic unit|ALUs]] and [[Pipeline (computing)|pipelines]]), which are controlled through logic by [[control unit]]s. [[Memory (computing)|Memory]] components, such as [[register file]]s and [[Cache (computing)|caches]], retain information. [[Clock signal|Clock circuitry]] maintains internal rhythms and timing through clock drivers, [[Phase-locked loop|PLLs]], and [[clock distribution network]]s. Pad transceiver circuitry allows signals to be received and sent off-chip, and a [[logic gate]] cell [[Library (electronics)|library]] is used to implement the logic. Logic gates are the foundation of processor design, as they are used to implement most of the processor's components.<ref>{{cite book | url=https://books.google.com/books?id=GBVADQAAQBAJ&q=processor+logic+gates | title=Digital Systems: From Logic Gates to Processors | isbn=978-3-319-41198-9 | last1=Deschamps | first1=Jean-Pierre | last2=Valderrama | first2=Elena | last3=Terés | first3=Lluís | date=12 October 2016 | publisher=Springer }}</ref>

CPUs designed for high-performance markets might require custom (optimized or application-specific, see below) designs for each of these items to achieve frequency, [[power consumption|power-dissipation]], and chip-area goals, whereas CPUs designed for lower-performance markets might lessen the implementation burden by acquiring some of these items as purchased [[intellectual property]]. Control logic implementation techniques ([[logic synthesis]] using CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic, [[finite-state machine]]s, [[microprogramming]] (common from 1965 to 1985, when, to a large extent, designing a CPU's [[control unit]] meant writing its [[microprogram]]), and [[programmable logic array]]s (common in the 1980s, no longer common).
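
As an illustration of what one datapath component looks like at this level, below is a small C model of an ALU with a condition flag for the control unit. The operation set and 8-bit width are invented for the sketch; an actual design would express this in an HDL such as [[VHDL]] or [[Verilog]] and synthesize it onto the logic-gate cell library described above:

<syntaxhighlight lang="c">
#include <stdint.h>

/* Hypothetical 8-bit ALU model; the opcodes are invented. */
typedef enum { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR } alu_op;

typedef struct {
    uint8_t result;
    int     zero;   /* condition flag consumed by the control unit */
} alu_out;

alu_out alu(alu_op op, uint8_t a, uint8_t b) {
    alu_out out;
    switch (op) {
    case ALU_ADD: out.result = (uint8_t)(a + b); break;
    case ALU_SUB: out.result = (uint8_t)(a - b); break;
    case ALU_AND: out.result = a & b;            break;
    case ALU_OR:  out.result = a | b;            break;
    }
    out.zero = (out.result == 0);
    return out;
}

int main(void) { return alu(ALU_SUB, 5, 5).zero ? 0 : 1; }  /* 5-5 sets zero */
</syntaxhighlight>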
=== Implementation logic ===
Device types used to implement the logic include:
* Individual [[vacuum tube]]s, individual [[transistor]]s and semiconductor [[diode]]s, and [[transistor-transistor logic]] [[small-scale integration]] logic chips – no longer used for CPUs
* [[Programmable array logic]] and [[programmable logic device]]s – no longer used for CPUs
* [[Emitter-coupled logic]] (ECL) [[gate array]]s – no longer common
* [[CMOS]] [[gate array]]s – no longer used for CPUs
* [[CMOS]] [[Integrated circuit|mass-produced IC]]s – the vast majority of CPUs by volume
* [[CMOS]] [[Application-specific integrated circuit|ASIC]]s – only for a minority of special applications due to expense
* [[Field-programmable gate array]]s (FPGA) – common for [[soft microprocessor]]s, and more or less required for [[reconfigurable computing]]
 
A CPU design project generally has these major tasks:
* Programmer-visible [[instruction set architecture]], which can be implemented by a variety of [[microarchitecture]]s
* Architectural study and performance modeling in [[ANSI C]]/[[C++]] or [[SystemC]] (a minimal example of such a model follows this list)
* [[High-level synthesis]] (HLS) or [[register-transfer level]] (RTL) implementation
* [[Register transfer language|RTL]] verification
* [[Circuit design]] of speed critical components (caches, registers, ALUs)
* [[Logic synthesis]] or logic-gate-level design
* [[Static timing analysis|Timing analysis]] to confirm that all logic and circuits will run at the specified operating frequency
* Physical design including [[Floorplan (microelectronics)#Floorplanning|floorplanning]], [[place and route]] of logic gates
* Checking that RTL, gate-level, transistor-level and physical-level representations are equivalent
* Checks for [[signal integrity]] and [[design rule checking|chip manufacturability]]
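
As a minimal illustration of the architectural-study step above, the sketch below is a first-order performance model of the kind that can be written in plain C before any RTL exists. The instruction mix, per-class cycle costs and clock frequency are invented numbers for the example:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    const double freq_hz  = 2.0e9;  /* assumed target clock: 2 GHz */
    /* assumed dynamic instruction mix: ALU, load, store, branch   */
    const double mix[]    = { 0.50, 0.20, 0.20, 0.10 };
    const double cycles[] = { 1.0,  4.0,  3.0,  2.0  };

    double cpi = 0.0;
    for (int i = 0; i < 4; i++)
        cpi += mix[i] * cycles[i];  /* weighted cycles/instruction */

    printf("CPI = %.2f, approx. %.0f MIPS\n", cpi, freq_hz / cpi / 1.0e6);
    return 0;
}
</syntaxhighlight>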
 
Re-designing a CPU core to a smaller die area helps to shrink everything (a "[[photomask]] shrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have less [[parasitic capacitance]]) and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one [[very-large-scale integration]] chip (additional cache, multiple CPUs or other components), improving performance and reducing overall system cost.
 
As with most complex electronic designs, the [[functional verification|logic verification]] effort (proving that the design does not have bugs) now dominates the project schedule of a CPU.
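
A tiny self-checking test conveys the flavor of this verification work: an implementation is compared against a simpler "golden" reference over many (here, all) inputs. The 4-bit ripple-carry adder below is a hypothetical stand-in for a real design block:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Implementation under test: bit-level model of a 4-bit
   ripple-carry adder (hypothetical design block). */
static uint8_t add4_impl(uint8_t a, uint8_t b) {
    uint8_t sum = 0, carry = 0;
    for (int i = 0; i < 4; i++) {
        uint8_t x = (a >> i) & 1, y = (b >> i) & 1;
        sum  |= (uint8_t)((x ^ y ^ carry) << i);
        carry = (x & y) | (carry & (x ^ y));
    }
    return sum;                   /* 4-bit result; carry-out dropped */
}

int main(void) {
    /* Exhaustive comparison against a trusted reference model. Real
       CPU verification applies the same idea at vastly larger scale,
       mostly with random and directed stimulus. */
    for (uint8_t a = 0; a < 16; a++)
        for (uint8_t b = 0; b < 16; b++)
            if (add4_impl(a, b) != ((a + b) & 0x0F)) {
                printf("mismatch at %u + %u\n", a, b);
                return 1;
            }
    puts("all 256 cases pass");
    return 0;
}
</syntaxhighlight>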
 
Key CPU architectural innovations include [[index register]], [[CPU cache|cache]], [[virtual memory]], [[instruction pipelining]], [[superscalar]], [[Complex instruction set computer|CISC]], [[Reduced instruction set computer|RISC]], [[virtual machine]], [[emulator]]s, [[microprogram]], and [[Stack (data structure)|stack]].
 
=== Microarchitectural concepts ===
{{Main|Microarchitecture}}

== History of general-purpose CPUs ==
{{Main|History of general-purpose CPUs}}

CPU design was originally an [[ad hoc]] process: just getting a CPU to work was a substantial governmental and technical event.

=== 1950s: early designs ===
Each of the computer designs of the early 1950s was unique; there were no upward-compatible machines or computer architectures with multiple, differing implementations. Programs written for one machine would not run on another kind, even other kinds from the same company. This was not a major drawback at the time because there was not a large body of software developed to run on computers, so starting programming from scratch was not seen as a large barrier.

The design freedom of the time was very important, for designers were very constrained by the cost of electronics, yet just beginning to explore how a computer could best be organized. Some of the basic features introduced during this period included [[index registers]] (on the [[Ferranti Mark I]]), a return-address saving instruction ([[UNIVAC I]]), immediate operands ([[IBM 704]]), and the detection of invalid operations ([[IBM 650]]).

By the end of the 1950s, commercial builders had developed factory-constructed, truck-deliverable computers. The most widely installed computer was the [[IBM 650]], which used [[drum memory]] onto which programs were loaded using either [[punched tape|paper tape]] or [[punch card]]s. Some very high-end machines also included [[core memory]], which provided higher speeds. [[Hard disk]]s were also starting to become popular.

Computers are automatic [[Abacus|abaci]], and the type of number system affects the way they work. In the early 1950s, most computers were built for specific numerical processing tasks, and many machines used decimal numbers as their basic number system; that is, the mathematical functions of the machines worked in base 10 instead of base 2, as is common today. These were not merely [[binary coded decimal]]: the machines actually had ten vacuum tubes per digit in each [[Processor register|register]]. Some early [[Soviet Union|Soviet]] computer designers implemented systems based on ternary logic, in which a digit could have three states, +1, 0, or −1, corresponding to positive, no, or negative voltage.

An early project for the [[U.S. Air Force]], [[BINAC]], attempted to make a lightweight, simple computer by using binary arithmetic. It deeply impressed the industry.

As late as 1970, major computer languages such as [[C (programming language)|C]] were unable to standardize their numeric behavior because decimal computers had groups of users too large to alienate.

Even when designers used a binary system, they still had many odd ideas. Some used sign-magnitude arithmetic (−1 = 10001) rather than modern [[two's complement]] arithmetic (−1 = 11111). Most computers used six-bit character sets because they adequately encoded [[Hollerith]] cards. It was a major revelation to designers of this period to realize that the data word should be a multiple of the character size, and they began to design computers with 12-, 24- and 36-bit data words (e.g., see the [[TX-2]]).

In this era, [[Grosch's law]] dominated computer design: computer cost increased as the square of its speed.
 
=== 1960s: the computer revolution and CISC ===
 
One major problem with early computers was that a program for one would not work on others. Computer companies found that their customers had little reason to remain loyal to a particular brand, as the next computer they purchased would be incompatible anyway. At that point price and performance were usually the only concerns.
 
In 1962, IBM tried a new approach to designing computers. The plan was to make an entire family of computers that could all run the same software, but with different performance and at different prices. As users' requirements grew they could move up to larger computers, and still keep all of their investment in programs, data and storage media.
 
To do this, IBM designed a single ''reference computer'', the '''[[IBM System/360|System/360]]''' (or '''S/360'''). The System/360 was in effect a virtual computer: a reference [[instruction set]] and set of capabilities that all machines in the family would support. To provide different classes of machines, each computer in the family would use more or less hardware emulation, and more or less [[microprogram]] emulation, to create a machine capable of running the full System/360 instruction set.
 
For instance a low-end machine could include a very simple processor for low cost. However this would require the use of a larger microcode emulator to provide the rest of the instruction set, which would slow it down. A high-end machine would use a much more complex processor that could directly process more of the System 360 design, thus running a much simpler and faster emulator.
 
IBM chose to make the reference [[instruction set]] quite complex, and very capable. This was a conscious choice. Even though the computer was complex, its "[[control store]]" containing the [[microprogram]] would stay relatively small, and could be made with very fast memory. Another important effect was that a single instruction could describe quite a complex sequence of operations. Thus the computers would generally have to fetch fewer instructions from the main memory, which could be made slower, smaller and less expensive for a given combination of speed and price.
 
As the S/360 was to be a successor to both scientific machines like the [[IBM 7090|7090]] and data processing machines like the [[IBM 1401|1401]], it needed a design that could reasonably support all forms of processing. Hence the instruction set was designed to manipulate not just simple binary numbers, but text, scientific floating-point (similar to the numbers used in a calculator), and the [[binary coded decimal]] arithmetic needed by accounting systems.
 
Almost all following computers included these innovations in some form. This basic set of features is now called a "[[complex instruction set computer]]," or CISC (pronounced "sisk"), a term not invented until many years later.
 
In many CISCs, an instruction could access either registers or memory, usually in several different ways. This made the CISCs easier to program, because a programmer could remember just thirty to a hundred instructions and a set of three to ten [[addressing mode]]s, rather than thousands of distinct instructions. This was called an "[[orthogonal instruction set]]". The [[PDP-11]] and [[Motorola 68000]] architectures are examples of nearly orthogonal instruction sets.
 
There was also the ''BUNCH'' (Burroughs, Univac, NCR, CDC, and Honeywell) that competed against IBM at this time though IBM dominated the era with [[S/360]].
 
The Burroughs Corporation (which later became Unisys when it merged with Sperry/Univac) offered an alternative to the S/360 with its [[Burroughs B5000|B5000]] series machines. The B5000 series (1961) had virtual memory and a multi-programming operating system (the Master Control Program, or MCP) written in [[ALGOL 60]], and the industry's first recursive-descent compilers as early as 1963.
 
=== 1970s: large scale integration ===
 
In the 1960s, the [[Apollo guidance computer]] and [[Minuteman missile]] made the [[integrated circuit]] economical and practical.
 
Around 1971, the first calculator and clock chips began to show that very small computers might be possible. The first [[microprocessor]] was the [[Intel 4004|4004]], designed in 1971 for a calculator company and produced by [[Intel]]. It began the line of Intel microprocessors whose descendants include the [[Intel 80386]]. The 8-bit [[Intel 8008|8008]] followed just a few years later.
 
By the mid-1970s, the use of integrated circuits in computers was commonplace. The whole decade was a period of upheaval driven by the shrinking price of transistors.
 
It became possible to put an entire CPU on a single printed circuit board. The result was that minicomputers, usually with 16-bit words and 4 to 64&nbsp;KB of memory, became commonplace.
 
CISCs were believed to be the most powerful types of computers, because their microcode was small and could be stored in very high-speed memory. The CISC architecture also addressed the "semantic gap" as it was perceived at the time: the distance between the machine language and the higher-level languages people used to program a machine. It was felt that compilers could do a better job with a richer instruction set.
 
Custom CISCs were commonly constructed using "[[bit slicing|bit slice]]" computer logic such as the [[AMD Am2900]] chips, with custom microcode. A bit-slice component is a piece of an [[Arithmetic logic unit|ALU]], register file or microsequencer. Most bit-slice integrated circuits were 4 bits wide.
 
By the early 1970s, the [[PDP-11]] had been developed, arguably the most advanced small computer of its day. Wider-word CISCs were also introduced, such as the 32-bit [[VAX]] and the 36-bit [[PDP-10]].
 
Also, to control a cruise missile, Intel developed a more-capable version of its 8008 microprocessor, the 8080.
 
IBM continued to make large, fast computers. However the definition of large and fast now meant more than a megabyte of RAM, clock speeds near one megahertz [http://www.hometoys.com/mentors/caswell/sep00/trends01.htm][http://research.microsoft.com/users/GBell/Computer_Structures_Principles_and_Examples/csp0727.htm], and tens of megabytes of disk drives.
 
IBM's [[IBM System/370|System/370]] was a version of the 360 tweaked to run virtual computing environments. The [[VM (operating system)|virtual machine]] was developed in order to reduce the possibility of an unrecoverable software failure.
 
The Burroughs B5000/B6000/B7000 series reached its largest market share. It was a stack computer programmed in a dialect of Algol. It used 64-bit fixed-point arithmetic, rather than floating-point.
 
All these different developments competed for market share.
 
=== Early 1980s: the lessons of RISC ===
 
In the early [[1980s]], researchers at [[UC Berkeley]] and [[IBM]] both discovered that most computer language compilers and interpreters used only a small subset of the instructions of a [[CISC]]. Much of the power of the CPU was simply being ignored in real-world use. They realized that by making the computer simpler and less orthogonal, they could make it faster and less expensive at the same time.
 
At the same time, CPUs were growing faster in relation to the memory they addressed. Designers also experimented with using large sets of internal registers. The idea was to [[cache]] intermediate results in the registers under the control of the compiler.
This also reduced the number of [[addressing mode]]s and orthogonality.
 
The computer designs based on this theory were called [[Reduced Instruction Set Computer]]s, or RISC. RISCs generally had larger numbers of registers, accessed by simpler instructions, with a few instructions specifically to load and store data to memory. The result was a very simple core CPU running at very high speed, supporting the exact sorts of operations the compilers were using anyway.
 
A common variation on the RISC design employs the [[Harvard architecture]], as opposed to the [[Von Neumann architecture|Von Neumann]] or Stored Program architecture common to most other designs. In a Harvard Architecture machine, the program and data occupy separate memory devices and can be accessed simultaneously. In Von Neumann machines the data and programs are mixed in a single memory device, requiring sequential accessing which produces the so-called "Von Neumann bottleneck."
 
One downside to the RISC design has been that the programs that run on them tend to be larger. This is because [[compiler]]s have to generate longer sequences of the simpler instructions to accomplish the same results. Since these instructions need to be loaded from memory anyway, the larger code size offsets some of the RISC design's fast memory handling.
 
Recently, engineers have found ways to compress the reduced instruction sets so they fit in even smaller memory systems than CISCs. Examples of such compression schemes include [[ARM architecture|the ARM]]'s "Thumb" instruction set. In applications that do not need to run older binary software, compressed RISCs are coming to dominate sales.
 
Another approach to RISCs was the "[[niladic]]" or "zero-address" instruction set. This approach recognized that the majority of the space in an instruction is used to identify its operands. These machines placed the operands on a push-down (last-in, first-out) [[stack (computing)|stack]]. The instruction set was supplemented with a few instructions to fetch and store memory. Most used simple caching to provide extremely fast RISC machines, with very compact code. Another benefit was that interrupt latencies were extremely small, smaller than most CISC machines (a rare trait in RISC machines). The first zero-address computer was developed by [[Chuck Moore|Charles Moore]]. It placed six 5-bit instructions in a 32-bit word, and was a precursor to [[VLIW]] design (see the "1990 to today" section below).
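
The flavor of such a stack machine can be sketched in a few lines of C; the opcodes and the program below are invented for illustration and do not correspond to Moore's design:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Zero-address machine: operands are implicit on a push-down stack,
   so most instructions need no operand field at all. */
enum { PUSH, ADD, MUL, END };

int main(void) {
    int prog[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, END }; /* (2+3)*4 */
    int stack[16], sp = 0;

    for (int pc = 0; prog[pc] != END; pc++) {
        switch (prog[pc]) {
        case PUSH: stack[sp++] = prog[++pc];         break;
        case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        }
    }
    printf("%d\n", stack[0]);   /* prints 20 */
    return 0;
}
</syntaxhighlight>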
 
Commercial variants were mostly characterized as "[[FORTH]]" machines, and probably failed because that language became unpopular. Also, the machines were developed by defense contractors at exactly the time the Cold War ended, and loss of funding may have broken up the development teams before the companies could perform adequate commercial marketing.
 
RISC chips now dominate the market for 32-bit embedded systems. Smaller RISC chips are even becoming common in the cost-sensitive 8-bit embedded-system market. The main market for RISC CPUs has been systems that require low power or small size.
 
Even some CISC processors (based on architectures that were created before RISC became dominant) translate instructions internally into a RISC-like instruction set. These CISC chips include newer [[X86|x86]] and [[VAX]] models.
 
This may surprise many, because the "market" is perceived to be desktop computers. With Intel x86 designs dominating the vast majority of all desktop sales, RISC is found only in the [[Apple Computer|Apple]] desktop computer lines. However, desktop computers are only a tiny fraction of the computers now sold. Most people own more computers in embedded systems in their car and house than on their desks.
 
=== Mid-1980s to today: exploiting instruction level parallelism ===
 
In the mid-to-late 1980s, designers began using a technique known as '''[[instruction pipelining]]''', in which the processor works on multiple instructions in different stages of completion. For example, the processor may be retrieving the operands for the next instruction while calculating the result of the current one. Modern CPUs may use over a dozen such stages.
 
A similar idea, introduced only a few years later, was to execute multiple instructions in parallel on separate arithmetic-logic units ([[ALU]]s). Instead of operating on only one instruction at a time, the CPU will look for several similar instructions that are not dependent on each other, and execute them in parallel. This approach is known as [[superscalar]] processor design.
 
Such techniques are limited by the degree of [[instruction level parallelism]] (ILP), the number of non-dependent instructions in the program code. Some programs are able to run very well on superscalar processors due to their inherent high ILP, notably graphics. However, more general problems have far less ILP, which lowers the speedups achievable through these techniques.
 
Branching is one major culprit. For example, the program might add two numbers and branch to a different code segment if the number is bigger than a third number. In this case even if the branch operation is sent to the second ALU for processing, it still must wait for the results from the addition. It thus runs no faster than if there were only one ALU. The most common solution for this type of problem is to use a type of [[branch prediction]].
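
One classic branch-prediction scheme is the two-bit saturating counter: two wrong guesses in a row are needed to flip the prediction, so a loop-closing branch stays predicted "taken" across the occasional exit. The sketch below, with an invented branch history, shows the mechanism:

<syntaxhighlight lang="c">
#include <stdio.h>

typedef enum { STRONG_NT, WEAK_NT, WEAK_T, STRONG_T } state_t;

static state_t update(state_t s, int taken) {
    if (taken) return s == STRONG_T  ? STRONG_T  : s + 1;
    else       return s == STRONG_NT ? STRONG_NT : s - 1;
}

int main(void) {
    state_t s = WEAK_NT;
    int history[] = { 1, 1, 1, 1, 0, 1, 1, 1 }; /* loop branch: mostly taken */
    int correct = 0;

    for (int i = 0; i < 8; i++) {
        int predict_taken = (s >= WEAK_T);      /* counter's top bit */
        correct += (predict_taken == history[i]);
        s = update(s, history[i]);              /* train on outcome  */
    }
    printf("%d of 8 predicted correctly\n", correct);  /* 6 of 8 */
    return 0;
}
</syntaxhighlight>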
 
To further the efficiency of the multiple functional units available in superscalar designs, operand register dependencies were found to be another limiting factor. To minimize these dependencies, [[out-of-order execution]] of instructions was introduced. In such a scheme, instruction results that complete out of order must be re-ordered in program order by the processor for the program to be restartable after an exception. Out-of-order execution was the main advancement of the computer industry during the 1990s.
A similar concept is [[speculative execution]], where instructions from both sides of a branch are executed at the same time, and the results of one side or the other are thrown out once the branch answer is known.
 
These advances, which were originally developed from research for RISC-style designs, allow modern CISC processors to execute several instructions per clock cycle, when traditional CISC designs could take twelve or more cycles to execute just one instruction.
 
The resulting instruction scheduling logic of these processors is large, complex and difficult to verify. Furthermore, the higher complexity requires more transistors, increasing power consumption and heat. In this respect RISC is superior because the instructions are simpler, have less interdependence and make superscalar implementations easier. However, as Intel has demonstrated, the concepts can be applied to a CISC design, given enough time and money.
 
:Historical note: some of these techniques (e.g. pipelining) were originally developed in the late 1950s by [[International Business Machines|IBM]] on their [[IBM 7030|Stretch]] mainframe computer.
 
=== 1990 to today: looking forward ===
 
====VLIW and EPIC====
 
The instruction scheduling logic that makes a superscalar processor work is just Boolean logic. In the early 1990s, a significant innovation was to realize that the coordination of a multiple-ALU computer could be moved into the [[compiler]], the software that translates a programmer's instructions into machine-level instructions.
 
This type of computer is called a '''[[very long instruction word]]''' (VLIW) computer.
 
Statically scheduling the instructions in the compiler (as opposed to letting the processor do the scheduling dynamically) has many practical advantages over doing so in the CPU.
 
Oddly, speed is not one of them. With enough transistors, the CPU could do everything at once. However all those transistors make the chip larger, and therefore more expensive. The transistors also use power, which means that they generate heat that must be removed. The heat also makes the design less reliable.
 
Since compiling happens only once on the developer's machine, the control logic is "canned" in the final realization of the program. This means that it consumes no transistors, and no power, and therefore is free, and generates no heat.
 
The resulting CPU is simpler, and runs at least as fast as if the scheduling were in the CPU.
 
There were several unsuccessful attempts to commercialize VLIW. The basic problem is that a VLIW computer does not scale to different price and performance points, as a dynamically scheduled computer can.
 
Also, VLIW computers optimize for throughput, not low latency, so they were not attractive to engineers designing controllers and other computers embedded in machinery. The [[embedded system]]s markets had often pioneered other computer improvements by providing a large market that did not care about compatibility with older software.
 
In January 2000, a company called [[Transmeta]] took the interesting step of placing a compiler in the central processing unit, and making the compiler translate from a reference byte code (in their case, [[x86]] instructions) to an internal VLIW instruction set. This approach combines the hardware simplicity, low power and speed of VLIW with the compact main memory system and software reverse-compatibility provided by popular CISC.
 
[[Intel]] released a chip, called the [[Itanium]], based on what it calls an [[Explicitly Parallel Instruction Computing]] (EPIC) design. This design supposedly provides the VLIW advantage of increased instruction throughput. However, it avoids some of the issues of scaling and complexity by explicitly providing, in each "bundle" of instructions, information concerning their dependencies. This information is calculated by the compiler, as it would be in a VLIW design. The early versions were also backward-compatible with [[x86]] software by means of an on-chip [[emulator|emulation]] mode. Integer performance was disappointing, as were sales in volume markets.
 
====Multi-threading====
 
Multi-threaded CPUs are another direction. Current designs work best when the computer is running only a single program; however, nearly all modern [[operating system]]s allow the user to run multiple programs at the same time. For the CPU to change over and do work on another program requires expensive [[context switching]]. In contrast, a multi-threaded CPU could handle instructions from multiple programs at once.
 
To do this, such CPUs include several sets of registers. When a context switch occurs, the contents of the "working registers" are simply copied into one of a set of registers for this purpose.
 
Such designs often include thousands of registers instead of hundreds as in a typical design. On the downside, registers tend to be somewhat expensive in chip space needed to implement them. This chip space might otherwise be used for some other purpose.
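
A rough C sketch of the idea, with invented sizes: with one register bank per hardware thread, "switching" is just changing which bank is active, instead of copying every register out to memory and back:

<syntaxhighlight lang="c">
#include <string.h>

#define NUM_THREADS 4
#define NUM_REGS    32              /* sizes invented for the sketch */

typedef struct { unsigned long r[NUM_REGS]; } regbank;

static regbank banks[NUM_THREADS];  /* one register set per thread  */
static int     active = 0;          /* bank the core currently uses */

static void switch_banked(int thread) { active = thread; }  /* O(1) */

/* Without banking, every switch copies the working registers: */
static void switch_by_copy(regbank *working, regbank *save, regbank *load) {
    memcpy(save, working, sizeof *save);   /* spill old context   */
    memcpy(working, load, sizeof *load);   /* restore new context */
}

int main(void) {
    banks[1].r[0] = 42;             /* thread 1 has its own state */
    switch_banked(1);               /* no registers copied        */

    regbank working = { {0} }, saved;
    switch_by_copy(&working, &saved, &banks[active]);
    return working.r[0] == 42 ? 0 : 1;
}
</syntaxhighlight>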
 
====Reconfigurable logic====
 
Another track of development is to combine reconfigurable logic with a general-purpose CPU. In this scheme, a special computer language compiles fast-running subroutines into a bit-mask to configure the logic. Slower, or less-critical parts of the program can be run by sharing their time on the CPU. This process has the capability to create devices such as software [[radio]]s, by using digital signal processing to perform functions usually performed by analog [[electronics]].
 
====Public ___domain processors====
 
As the lines between hardware and software increasingly blur due to progress in design methodology and availability of chips such as [[FPGA]]s and cheaper production processes, even [[open source hardware]] has begun to appear. Loosely-knit communities like [[OpenCores]] have recently announced completely open CPU architectures such as the [[OpenRISC]] which can be readily implemented on FPGAs or in custom produced chips, by anyone, without paying license fees.
 
 
 
== Research topics ==
<!-- [[virtual memory]] moved to [[Computer architecture]] -->
{{Main|History of general-purpose CPUs#1990 to today: Looking forward}}

A variety of [[History of general-purpose CPUs#1990 to today: Looking forward|new CPU design ideas]] have been proposed, including [[reconfigurable logic]], [[clockless CPU]]s, [[computational RAM]], and [[optical computing]].

One near-term possibility is to eliminate the bus. Modern vertical [[laser diode]]s enable this change. In theory, an optical computer's components could connect directly through a holographic or phased open-air switching system. This would provide a large increase in effective speed and design flexibility, and a large reduction in cost. Since a computer's connectors are also its most likely failure points, a busless system might be more reliable as well.

A farther-term possibility is to use light instead of electricity for the digital logic itself. In theory, this could run about 30% faster and use less power, and it would permit a direct interface with quantum computational devices. The chief problem with this approach is that, for the foreseeable future, electronic devices are faster, smaller (and therefore cheaper), and more reliable. An important theoretical problem is that electronic computational elements are already smaller than some wavelengths of light, so even waveguide-based optical logic may be uneconomic compared to electronic logic. The majority of development effort can therefore be expected to remain focused on electronics.

Yet another possibility is the "clockless CPU" (asynchronous CPU). Unlike conventional processors, clockless processors have no central clock to coordinate the progress of data through the pipeline. Instead, stages of the CPU are coordinated using logic devices called "pipeline controls" or "FIFO sequencers": the pipeline controller clocks the next stage of logic when the existing stage is complete, so a central clock is unnecessary. Clockless CPUs have two advantages over clocked CPUs:

* Components can run at different speeds in the clockless CPU. In a clocked CPU, no component can run faster than the clock rate.
* In a clocked CPU, the clock can go no faster than the worst-case performance of the slowest stage. In a clockless CPU, when a stage finishes quicker than normal, the next stage can immediately take the results rather than waiting for the next clock tick. A stage might finish quicker than normal because of the particular data inputs (multiplication can be very fast if it is multiplying by 0 or 1), or because it is running at a higher voltage or lower temperature than normal.

Two examples of asynchronous CPUs are the [[ARM architecture|ARM]]-implementing [[AMULET microprocessor|AMULET]] and the asynchronous implementation of the [[MIPS architecture|MIPS]] R3000, dubbed [http://www.async.caltech.edu/mips.html MiniMIPS]. The biggest disadvantage of the clockless CPU is that most CPU design tools assume a clocked CPU, so making a clockless CPU involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids [[Metastability in electronics|metastable]] problems. For example, the group that designed the aforementioned AMULET developed a tool called [http://www.cs.man.ac.uk/apt/projects/tools/lard/ LARD] to cope with the complex design of AMULET3.

== Performance analysis and benchmarking ==
{{Main|Computer performance}}
[[Benchmark (computing)|Benchmarking]] is a way of testing CPU speed. Examples include [[SPECint]] and [[SPECfp]], developed by the [[Standard Performance Evaluation Corporation]], and ConsumerMark, developed by the Embedded Microprocessor Benchmark Consortium ([[EEMBC]]).

Some of the commonly used metrics include:
* [[Instructions per second]] – Most consumers pick a computer architecture (normally [[Intel]] [[IA-32]] architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see [[megahertz myth]]).
* [[FLOPS]] – The number of floating-point operations per second is often important in selecting computers for scientific computations.
* [[Performance per watt]] – System designers building [[parallel computing|parallel computers]], such as [[Google]], pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.<ref>{{cite web|url=http://www.eembc.org/benchmark/consumer.asp?HTYPE=SIM|title=EEMBC ConsumerMark|archive-url=https://web.archive.org/web/20050327005323/http://www.eembc.org/benchmark/consumer.asp?HTYPE=SIM |archive-date=March 27, 2005}}</ref><ref>{{cite web|url=https://www.zdnet.com/article/power-could-cost-more-than-servers-google-warns/|title=Power could cost more than servers, Google warns|author=Stephen Shankland|website=[[ZDNet]]|date=December 9, 2005}}</ref>
* Some system designers building parallel computers pick CPUs based on the speed per dollar.
* System designers building [[real-time computing]] systems want to guarantee worst-case response; that is easier to do when the CPU has low [[interrupt latency]] and deterministic response, as in a [[digital signal processor]].
* Computer programmers who program directly in assembly language want a CPU to support a full-featured [[instruction set]].
* Low power – for systems with limited power sources (e.g. solar, batteries, human power).
* Small size or low weight – for portable embedded systems and systems for spacecraft.
* Environmental impact – minimizing the environmental impact of computers during manufacturing, use, and recycling: reducing waste and hazardous materials (see [[green computing]]).

There may be tradeoffs in optimizing some of these metrics. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa.

==Markets==
{{Update section|date=December 2023|reason=No update since 2010, the market has significantly evolved since then}}
There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the others.
 
===General-purpose computing===
{{As of|2010}}, in the general-purpose computing market, that is, desktop, laptop, and server computers commonly used in businesses and homes, the Intel [[IA-32]] and the 64-bit version [[x86-64]] architecture dominate the market, with its rivals [[PowerPC]] and [[SPARC]] maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops.<ref>Kerr, Justin. [http://www.maximumpc.com/article/news/amd_loses_market_share_mobile_cpu_sales_outsell_desktop_first_time "AMD Loses Market Share as Mobile CPU Sales Outsell Desktop for the First Time."] Maximum PC. Published 2010-10-26.</ref>
 
Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently have made these CPU designs among the more advanced technically, at the cost of being relatively expensive and having high power consumption.
 
====High-end processor economics====
Developing new, high-end CPUs is a '''very''' expensive proposition. Both the logical complexity (needing very large logic design and logic verification teams and simulation farms with perhaps thousands of computers) and the high operating frequencies (needing large circuit design teams and access to a state-of-the-art fabrication process) account for the high cost of design for this type of chip. The design cost of a high-end CPU is on the order of US$100 million. Since the design of such a chip nominally takes about five years to complete, to stay competitive a company has to fund at least two of these large design teams to release products at the rate of one every 2.5 years. Only the personal computer mass market (with production rates in the hundreds of millions, producing billions of dollars in revenue) can support such economics. As of 2004, only four companies were actively designing and fabricating state-of-the-art general-purpose computing CPU chips: [[Intel]], [[AMD]], [[IBM]] and [[Fujitsu]]. [[Motorola]] spun off its semiconductor division as [[Freescale]] because that division was dragging down profit margins for the rest of the company. [[Texas Instruments]], [[TSMC]] and [[Toshiba]] are a few examples of companies doing manufacturing for another company's CPU chip design.

In 1984, most high-performance CPUs required four to five years to develop.<ref>
"New system manages hundreds of transactions per second" article
by Robert Horst and Sandra Metz, of Tandem Computers Inc.,
"Electronics" magazine, 1984 April 19:
"While most high-performance CPUs require four to five years to develop,
The [[NonStop (server computers)|NonStop]] TXP processor took just 2+1/2 years --
six months to develop a complete written specification,
one year to construct a working prototype,
and another year to reach volume production."
</ref>
 
===Scientific computing===
{{Main|Supercomputer}}
Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass market CPUs.
 
===Embedded design===
{{Main|Embedded system}}
As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. An embedded system usually has minimal requirements for memory and program length and may require simple but unusual input/output systems; for example, most embedded systems lack keyboards, screens, disks, printers, or other recognizable I/O devices of a personal computer. They may control electric motors, relays or voltages, and read switches, variable resistors or other electronic devices. Often, the only I/O device readable by a human is a single light-emitting diode, and severe cost or power constraints can even eliminate that. Embedded processors sell in the volume of many billions of units per year, though mostly at much lower price points than general-purpose processors.
 
These single-function devices differ from the more familiar general-purpose CPUs in several ways:
* Low cost is of high importance.
* It is important to maintain a low power dissipation as embedded devices often have a limited battery life and it is often impractical to include cooling fans.
* To give lower system cost, peripherals are integrated with the processor on the same silicon chip.
* Keeping peripherals on-chip also reduces power consumption as external GPIO ports typically require buffering so that they can source or sink the relatively high current loads that are required to maintain a strong signal outside of the chip.
** Many embedded applications have a limited amount of physical space for circuitry; keeping peripherals on-chip will reduce the space required for the circuit board.
** The program and data memories are often integrated on the same chip. When the only allowed program memory is [[Read-only memory|ROM]], the device is known as a [[microcontroller]].
* For many embedded applications, interrupt latency is more critical than in some general-purpose processors, as explained below.

In contrast to general-purpose computers, embedded systems often seek to minimize [[interrupt latency]] over instruction throughput. When an electronic device causes an interrupt, the intermediate results, held in the registers, have to be saved before the software responsible for handling the interrupt can run, and then must be restored after it is finished. If there are more registers, this saving and restoring process takes more time, increasing the latency. Low-latency CPUs generally have relatively few registers, or they have "shadow registers" that are only used by the interrupt software.
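
A back-of-the-envelope model shows why register count matters for latency; the per-register cycle costs below are invented for illustration:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    const int store_cycles = 2;  /* assumed cost to save one register    */
    const int load_cycles  = 2;  /* assumed cost to restore one register */

    for (int regs = 8; regs <= 64; regs *= 2)
        printf("%2d registers -> %3d cycles of save/restore overhead\n",
               regs, regs * (store_cycles + load_cycles));

    /* "Shadow registers" avoid almost all of this overhead by switching
       to a second register set reserved for the interrupt handler. */
    return 0;
}
</syntaxhighlight>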
 
====Embedded processor economics====
The embedded CPU family with the largest number of total units shipped is the [[8051]], averaging nearly a billion units per year.<ref>{{cite web |url=http://people.wallawalla.edu/~curt.nelson/engr355/lecture/8051_overview.pdf |title=8051 Overview |author=Curtis A. Nelson |access-date=2011-07-10 |url-status=dead |archive-url=https://web.archive.org/web/20111009101426/http://people.wallawalla.edu/~curt.nelson/engr355/lecture/8051_overview.pdf |archive-date=2011-10-09 }}</ref> The 8051 is widely used because it is very inexpensive. The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.4730 square millimeters of silicon.<ref>
{{cite web| url = http://www.keil.com/dd/docs/datashts/evatronix/t8051_ds.pdf| title = T8051 Tiny 8051-compatible Microcontroller| archive-url = https://web.archive.org/web/20110929033902/https://www.keil.com/dd/docs/datashts/evatronix/t8051_ds.pdf| archive-date = 2011-09-29}}</ref><ref>To figure dollars per square millimeter, see [http://www.overclockers.com/forums/showthread.php?t=550542], and note that an SOC component has no pin or packaging costs.</ref>
 
As of 2009, more CPUs are produced using the [[ARM architecture family]] instruction sets than any other 32-bit instruction set.<ref>
[http://www.extremetech.com/extreme/52180-arm-cores-climb-into-3g-territory "ARM Cores Climb Into 3G Territory"] by Mark Hachman, 2002.
</ref><ref>
[http://www.embedded.com/electronics-blogs/significant-bits/4024488/The-Two-Percent-Solution "The Two Percent Solution"] by Jim Turley 2002.
</ref>
The ARM architecture and the first ARM chip were designed in about one and a half years and 5 human years of work time.<ref>[https://web.archive.org/web/20090606152116/http://atterer.net/acorn/arm.html "ARM's way"] 1998</ref>
 
The 32-bit [[Parallax Propeller]] microcontroller architecture and the first chip were designed by two people in about 10 human years of work time.<ref>{{Cite web
| first=Chip | last=Gracey
| title = Why the Propeller Works
| url = http://www.parallax.com/Portals/0/Downloads/docs/article/WhythePropellerWorks.pdf
| archive-url = https://web.archive.org/web/20090419060820/http://www.parallax.com/Portals/0/Downloads/docs/article/WhythePropellerWorks.pdf
| archive-date = 2009-04-19
}}</ref>
 
The 8-bit [[Atmel AVR|AVR architecture]] and the first AVR microcontroller were conceived and designed by two students at the Norwegian Institute of Technology.
 
The 8-bit 6502 architecture and the first [[MOS Technology 6502]] chip were designed in 13 months by a group of about 9 people.<ref>{{Cite web |url=http://silicongenesis.stanford.edu/transcripts/mensch.htm |title=Interview with William Mensch |access-date=2009-02-01 |archive-url=https://web.archive.org/web/20160304091031/http://silicongenesis.stanford.edu/transcripts/mensch.htm |archive-date=2016-03-04 |url-status=dead }}</ref>
 
====Research and educational CPU design====
The 32-bit [[Berkeley RISC]] I and RISC II processors were mostly designed by a series of students as part of a four quarter sequence of graduate courses.<ref>{{cite web|url=http://www.eecs.berkeley.edu/Pubs/TechRpts/1982/CSD-82-106.pdf |archive-url=https://web.archive.org/web/20060305132258/http://www.eecs.berkeley.edu/Pubs/TechRpts/1982/CSD-82-106.pdf |archive-date=2006-03-05 |url-status=live|title=Design and Implementation of RISC I|author1=C.H. Séquin|author-link1=Carlo H. Sequin|author2=D.A. Patterson|author-link2=David A. Patterson (scientist)}}</ref>
This design became the basis of the commercial [[SPARC]] processor design.
 
For about a decade, every student taking the 6.004 class at MIT was part of a team; each team had one semester to design and build a simple 8-bit CPU out of [[7400 series]] [[integrated circuit]]s.
One team of 4 students designed and built a simple 32-bit CPU during that semester.<ref>{{cite web|url=http://sub-zero.mit.edu/fbyte/hacks/vhs/|title=the VHS|archive-url=https://web.archive.org/web/20100227055013/http://sub-zero.mit.edu/fbyte/hacks/vhs/|archive-date=2010-02-27}}
</ref>
 
Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU in a FPGA in a single 15-week semester.<ref>{{cite web|url=http://www.fpgacpu.org/teaching.html|title=Teaching Computer Design with FPGAs|author=Jan Gray}}</ref>
 
The MultiTitan CPU was designed with 2.5 man years of effort, which was considered "relatively little design effort" at the time.<ref>{{cite journal |last1=Jouppi |first1=N.P. |last2=Tang |first2=J.Y.-F. |title=A 20-MIPS sustained 32-bit CMOS microprocessor with high ratio of sustained to peak performance |journal=IEEE Journal of Solid-State Circuits |date=October 1989 |volume=24 |issue=5 |pages=1348–1359 |doi=10.1109/JSSC.1989.572612 |bibcode=1989IJSSC..24.1348J }}</ref>
24 people contributed to the 3.5 year MultiTitan research project, which included designing and building a prototype CPU.<ref>{{cite web|url=http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-87-8.pdf |archive-url=https://web.archive.org/web/20040825183403/http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-87-8.pdf |archive-date=2004-08-25 |url-status=live|title=MultiTitan: Four Architecture Papers|year=1988|pages=4–5}}</ref>
 
==== Soft microprocessor cores ====
{{Main|Soft microprocessor}}
For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented by [[logic synthesis]] techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker [[time-to-market]].
 
== Design concepts ==
In general, all processors, micro or otherwise, run the same sort of task over and over:

#read an instruction and decode it
#find any associated data that is needed to process the instruction
#process the instruction
#write the results out

Complicating this simple-looking series of events is the fact that [[main memory]] has always been slower than the processor itself. Step (2) often introduces a lengthy (in CPU terms) delay while the data arrives over the [[computer bus]]. A considerable amount of research has been put into designs that avoid these delays as much as possible. This often requires complex circuitry that was at one time found only in hand-wired [[supercomputer]] designs; as manufacturing processes improved, such features became a common part of almost all designs.

===RISC===
The basic concept of [[RISC]] is to clearly identify what step (2) does. In older processor designs, now retroactively known as [[CISC]], instructions were offered in a number of different modes, which meant that step (2) took an unknown length of time to complete. In RISC, almost all instructions come in exactly one mode that reads data from one place: the registers. These [[addressing mode]]s are instead handled by the [[compiler]], which writes code to load the data into the registers and store it back out. For this reason the term '''load-store''' is often used to describe this design philosophy; there are many processors with limited instruction sets that are not really RISC.

The side effect of this change is twofold. One is that the resulting logic core is much smaller, largely by making steps (1) and (2) much simpler. Secondly, it means that step (2) always takes one cycle, also reducing the complexity of the overall chip design, which would otherwise require complex "locks" to ensure the processor completes one instruction before starting the next. For any given level of performance, a RISC design will have a much smaller "gate count" (number of transistors), the main driver of overall cost; in other words, a fast RISC chip is much cheaper than a fast CISC chip.

The downside is that the program gets much longer as a side effect of the compiler having to write out explicit instructions for memory handling; the "code density" is lower. This increases the number of instructions that have to be read over the computer bus. When RISC was first being introduced, there were arguments that the increased bus access would overwhelm the speed advantage and that such designs would actually be slower. In theory this might be true, but the real reason for RISC was to allow [[instruction pipeline]]s to be built much more easily.

===Instruction pipelining===
One of the first, and most powerful, techniques to improve performance is the [[instruction pipeline]]. Early microcoded designs would carry out all of the steps above for one instruction before moving on to the next. Large portions of the circuitry were left idle at any one step; for instance, the instruction decoding circuitry would be idle during execution.

Pipelines improve performance by allowing a number of instructions to work their way through the processor at the same time. In the same basic example, the processor would start to decode (step 1) a new instruction while the last one was waiting for results. This would allow up to four instructions to be "in flight" at one time, making the processor look four times as fast. Although any one instruction takes just as long to complete (there are still four steps), the CPU as a whole "retires" instructions much faster and can be run at a much higher clock speed.

RISC makes pipelines smaller and much easier to construct by cleanly separating each stage of the instruction process and making them take the same amount of time: one cycle. The processor as a whole operates in an [[assembly line]] fashion, with instructions coming in one side and results out the other. Due to the reduced complexity of the [[classic RISC pipeline]], the pipelined core and an instruction cache could be placed on the same size die that would otherwise fit the core alone on a CISC design. This was the real reason that RISC was faster; early designs like the [[SPARC]] and [[MIPS architecture|MIPS]] often ran over 10 times as fast as [[Intel]] and [[Motorola]] CISC solutions at the same clock speed and price.

Pipelines are by no means limited to RISC designs. By 1986 the top-of-the-line VAX (the [[VAX 8800]]) was a heavily pipelined design, slightly predating the first commercial MIPS and SPARC designs. Most modern CPUs (even embedded CPUs) are now pipelined, and microcoded CPUs with no pipelining are seen only in the most area-constrained embedded processors. Large CISC machines, from the VAX 8800 to the modern Pentium 4 and Athlon, are implemented with both microcode and pipelines. Improvements in pipelining and caching are the two major microarchitectural advances that have enabled processor performance to keep pace with the circuit technology on which it is based.

===Speculative execution===
One problem with an instruction pipeline is that there is a class of instructions that must make their way entirely through the pipeline before execution can continue. In particular, conditional branches need to know the result of some prior instruction before "which side" of the branch to run is known. For instance, an instruction that says "if x is larger than 5 then do this, otherwise do that" will have to wait for the result of x to be known before it knows which instructions to fetch next.

For a small four-deep pipeline this means a delay of up to three cycles (the decode can still happen). But as clock speeds increase, the depth of the pipeline increases with it, and modern processors may have 20 stages or more. In this case the CPU is stalled for the vast majority of its cycles every time one of these instructions is encountered.

The solution, or one of them, is ''[[speculative execution]]'', also known as ''[[branch prediction]]''. In reality one side or the other of the branch will be called much more often than the other, so it is often correct to simply go ahead and say "x will likely be smaller than five; start processing that". If the prediction turns out to be correct, a huge amount of time is saved. Modern designs have rather complex prediction systems, which watch the results of past branches to predict the future with greater accuracy.

===Cache===
It was not long before improvements in chip manufacturing allowed even more circuitry to be placed on the die, and designers started looking for ways to use it. One of the most common was to add an ever-increasing amount of [[CPU cache|cache memory]] on-die. Cache is simply very fast memory that can be accessed in a few cycles, as opposed to the "many" needed to talk to main memory. The CPU includes a cache controller which automates reading and writing from the cache; if the data is already in the cache it simply "appears", whereas if it is not, the processor is "stalled" while the cache controller reads it in.

RISC designs started adding cache in the mid-to-late 1980s, often only 4&nbsp;KB in total. This number grew over time; by the mid-2000s, typical CPUs included about 512&nbsp;KB of cache, and CPUs intended for server use came with 1 or 2&nbsp;MB. Generally speaking, more cache means more speed.

===Out-of-order execution===
The use of cache also introduces a new delay when the data asked for by the CPU is not already in the cache. In early designs this would force the cache controller to stall the processor and wait. Of course, there may be some other instruction in the program whose data ''is'' available in the cache at that point. [[Out-of-order execution]] allows that ready instruction to be processed while the processor waits on the cache, then re-orders the results to make it appear that everything happened in the normal order.
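
To make the cache controller's job above concrete, the following C sketch shows the tag/index split a direct-mapped cache performs on each access; the geometry (64 sets of 64-byte lines) and addresses are chosen arbitrarily for the example:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 64
#define NUM_SETS   64              /* geometry invented for the sketch */

typedef struct { int valid; uint32_t tag; } line;
static line cache[NUM_SETS];

static int lookup(uint32_t addr) {
    uint32_t set = (addr / LINE_BYTES) % NUM_SETS;
    uint32_t tag = addr / (LINE_BYTES * NUM_SETS);
    if (cache[set].valid && cache[set].tag == tag)
        return 1;                  /* hit: the data simply "appears"   */
    cache[set].valid = 1;          /* miss: fetch the line; the CPU    */
    cache[set].tag   = tag;        /* stalls (or, in an out-of-order   */
    return 0;                      /* design, runs other work) meanwhile */
}

int main(void) {
    printf("%d\n", lookup(0x1040));  /* 0: cold miss            */
    printf("%d\n", lookup(0x1044));  /* 1: same line, now a hit */
    return 0;
}
</syntaxhighlight>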
===Superscalar designs===
Even with all of the added complexity and gates needed to support the concepts outlined above, improvements in chip manufacturing soon left even more room on the die. This led to the rise of [[superscalar]] processors in the early 1990s: processors that could run more than one instruction at once.

In the outline above, the processor runs parts of a single instruction at a time. If one were simply to place two entire cores on a die, then the processor would be able to run two instructions at once. However, this is not actually required, as in the average program certain instructions are much more common than others. For instance, the load-store instructions on a RISC design are more common than [[floating point]] instructions, so building two complete cores is not as efficient a use of space as building two load-store units and only one floating-point unit.

In modern designs it is common to find two load units, one store unit (many instructions have no results to store), two or more integer math units, two or more floating-point units, and often a [[SIMD]] unit of some sort. The decoder grows in complexity by reading in a huge list of instructions from memory and handing them off to the different units that are idle at that point. The results are then collected and re-ordered at the end, as in out-of-order execution.
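
The heart of a superscalar decoder is a dependency check: two adjacent instructions may leave the decoder together only if neither needs the other's result. A toy version of that check, with an invented three-operand instruction format:

<syntaxhighlight lang="c">
#include <stdio.h>

typedef struct { int dst, src1, src2; } instr;  /* invented format */

/* The second instruction must not read the first one's destination
   (a read-after-write hazard), or the pair cannot dual-issue. */
static int can_dual_issue(instr a, instr b) {
    return b.src1 != a.dst && b.src2 != a.dst;
}

int main(void) {
    instr i1 = { 1, 2, 3 };  /* r1 = r2 op r3                 */
    instr i2 = { 4, 1, 5 };  /* r4 = r1 op r5 : depends on r1 */
    instr i3 = { 6, 7, 8 };  /* r6 = r7 op r8 : independent   */

    printf("i1,i2: %s\n", can_dual_issue(i1, i2) ? "issue together"
                                                 : "second waits");
    printf("i1,i3: %s\n", can_dual_issue(i1, i3) ? "issue together"
                                                 : "second waits");
    return 0;
}
</syntaxhighlight>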
===Simultaneous multithreading===
One of the newest techniques in high-speed processor design is [[simultaneous multithreading]]. Oddly it may have been easier to add this in the past than some of the other techniques described above.
 
The cache controller knows where in main memory any piece of data came from. It therefore "knows" that different data in the cache may actually belong to different programs entirely, a side effect of modern [[computer multitasking|multitasking]] [[operating system]]s. In simultaneous multithreading designs, the processor looks not just for an instruction that is ready, but for the program (or thread) that is "most ready". This can be quite effective because programs often alternate between handling data and processing it; simultaneous multithreading can make more efficient use of the various functional units by finding entirely different programs to run while the "running" one waits for data.
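
A minimal sketch of that selection policy: each cycle, the fetch stage issues from whichever hardware thread is not blocked on memory. The two-thread setup and the stall pattern are invented for the example:

<syntaxhighlight lang="c">
#include <stdio.h>

#define THREADS 2

int main(void) {
    int stalled_until[THREADS] = { 0, 0 };

    for (int cycle = 0; cycle < 8; cycle++) {
        for (int t = 0; t < THREADS; t++) {
            if (cycle >= stalled_until[t]) {         /* "most ready"  */
                printf("cycle %d: issue from thread %d\n", cycle, t);
                if (t == 0 && cycle == 2)
                    stalled_until[0] = 6;    /* thread 0 misses cache */
                break;                       /* one issue slot/cycle  */
            }
        }
    }
    return 0;
}
</syntaxhighlight>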
 
== See also ==
{{Wikibooks|Microprocessor Design}}
* [[Amdahl's law]]
* [[Central processing unit]]
* [[Comparison of instruction set architectures]]
* [[Complex instruction set computer]]
* [[CPU cache]]
* [[Electronic design automation]]
* [[Heterogeneous computing]]
* [[High-level synthesis]]
* [[History of general-purpose CPUs]]
* [[Integrated circuit design]]
* [[Microarchitecture]]
* [[Microprocessor]]
* [[Minimal instruction set computer]]
* [[Moore's law]]
* [[Reduced instruction set computer]]
* [[Simultaneous multithreading]]
* [[System on a chip]]
* [[Network on a chip]]
* [[Process design kit]] – a set of documents created or accumulated for a semiconductor device production process
* [[Uncore]]

== References ==
{{Reflist}}

===General references===
*{{cite book | first=Enoch| last=Hwang| year=2006| title=Digital Logic and Microprocessor Design with VHDL| publisher=Thomson| isbn=0-534-46593-5| url=http://faculty.lasierra.edu/~ehwang/digitaldesign| author-link=Enoch Hwang}}
*[https://web.archive.org/web/20100113033730/http://www.gamezero.com/team-0/articles/math_magic/micro/index.html Processor Design: An Introduction]
 
{{CPU technologies}}
{{Design}}
 
{{DEFAULTSORT:Processor Design}}
[[Category:Computer architecture]]
[[Category:Central processing unit]]
[[Category:Computer engineering]]
[[Category:Design engineering]]