{{Short description|Set of rules describing computer system}}
{{Lead too short|date=November 2023}}
[[File:Computer architecture block diagram.png|alt=|thumb|upright=1.35|Block diagram of a basic computer with uniprocessor CPU. Black lines indicate the flow of control signals, whereas red lines indicate the flow of processor instructions and data. Arrows indicate the direction of flow.]]
In [[computer science]] and [[computer engineering]], a '''computer architecture''' is a description of the structure of a [[computer]] system made from component parts.<ref>{{cite web|last=Dragoni|first=Nicole|title=Introduction to peer to peer computing|url=http://www2.imm.dtu.dk/courses/02220/2017/L6/P2P.pdf|website=DTU Compute – Department of Applied Mathematics and Computer Science|___location=Lyngby, Denmark|date=n.d.}}</ref> It can sometimes be a high-level description that ignores details of the implementation.<ref>{{cite book|last1=Clements|first1=Alan|title=Principles of Computer Hardware|page=1|edition=Fourth|quote=Architecture describes the internal organization of a computer in an abstract way; that is, it defines the capabilities of the computer and its programming model. You can have two computers that have been constructed in different ways with different technologies but with the same architecture.}}</ref> At a more detailed level, the description may include the [[instruction set architecture]] design, [[microarchitecture]] design, [[logic design]], and [[implementation]].<ref>{{cite book|last1=Hennessy|first1=John|last2=Patterson|first2=David|title=Computer Architecture: A Quantitative Approach|page=11|edition=Fifth|quote=This task has many aspects, including instruction set design, functional organization, logic design, and implementation.}}</ref>
 
== History ==
The first documented computer architecture was in the correspondence between [[Charles Babbage]] and [[Ada Lovelace]], describing the [[analytical engine]]. While building the computer [[Z1 (computer)|Z1]] in 1936, [[Konrad Zuse]] described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the [[Stored-program computer|stored-program]] concept.<ref>{{citation |title=Electronic Digital Computers |journal=Nature |date=25 September 1948 |volume=162 |page=487 |url=http://www.computer50.org/kgill/mark1/natletter.html |access-date=2009-04-10 |doi=10.1038/162487a0 |archive-url=https://web.archive.org/web/20090406014626/http://www.computer50.org/kgill/mark1/natletter.html |archive-date=6 April 2009 |url-status=dead |last1=Williams |first1=F. C. |last2=Kilburn |first2=T. |issue=4117 |bibcode=1948Natur.162..487W |s2cid=4110351 |doi-access=free }}</ref><ref>Susanne Faber, "Konrad Zuses Bemuehungen um die Patentanmeldung der Z3", 2000</ref> Two other early and important examples are:
* [[John von Neumann]]'s 1945 paper, [[First Draft of a Report on the EDVAC]], which described an organization of logical elements;<ref>{{Cite book|title=First Draft of a Report on the EDVAC|last=Neumann|first=John|year=1945|pages=9}}</ref> and
* [[Alan M. Turing|Alan Turing]]'s more detailed ''Proposed Electronic Calculator'' for the [[Automatic Computing Engine]], also from 1945, which cited [[John von Neumann]]'s paper.<ref>Reproduced in B. J. Copeland (Ed.), "Alan Turing's Automatic Computing Engine", Oxford University Press, 2005, pp. 369–454.</ref>
 
The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and [[Fred Brooks|Frederick P. Brooks, Jr.]], members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the [[IBM 7030 Stretch|Stretch]], an IBM-developed [[supercomputer]] for [[Los Alamos National Laboratory]] (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements were at the level of "system architecture", a term that seemed more useful than "machine organization".<ref>{{cite web|url=https://archive.computerhistory.org/resources/text/IBM/Stretch/pdfs/05-10/102634114.pdf |last1= Johnson |first1=Lyle| title= A Description of Stretch|page=1|year=1960|access-date=7 October 2017}}</ref>

Computer architecture is concerned with balancing the performance, efficiency, cost, and reliability of a computer system. The case of instruction set architecture can be used to illustrate the balance of these competing factors. More complex [[instruction set]]s enable programmers to write more space-efficient programs, since a single instruction can encode some higher-level abstraction (such as the [[X86 instruction listings|x86 Loop instruction]]).<ref>{{cite book |last1=Null |first1=Linda |title=The Essentials of Computer Organization and Architecture |date=2019 |publisher=Jones & Bartlett Learning |___location=Burlington, MA |isbn=9781284123036 |page=280 |edition=5th}}</ref> However, longer and more complex instructions take longer for the [[Processor (computing)|processor]] to decode and can be more costly to implement effectively. The increased complexity from a large instruction set also creates more room for unreliability when instructions interact in unexpected ways.
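
The following minimal sketch illustrates this trade-off; the toy instructions below are invented for illustration, not real x86 or RISC encodings. A single complex instruction can stand in for a sequence of simpler ones, shrinking the program at the cost of a more complex decoder:

<syntaxhighlight lang="python">
# Toy illustration of the dense-vs-simple encoding trade-off. The
# "instructions" below are hypothetical and do not follow any real ISA.

# Complex-instruction style: one instruction decrements a counter and branches.
cisc_loop_body = [
    ("LOOP", "counter", "top"),  # counter -= 1; jump to "top" if counter != 0
]

# Simple-instruction style: the same behavior takes two simpler instructions.
risc_loop_body = [
    ("SUBI", "counter", 1),      # counter -= 1
    ("BNEZ", "counter", "top"),  # branch to "top" if counter != 0
]

# The denser encoding needs fewer instruction slots (and fewer bytes to
# fetch), but each instruction asks more of the decoding hardware.
print(len(cisc_loop_body), "instruction(s) per iteration, complex encoding")
print(len(risc_loop_body), "instruction(s) per iteration, simple encoding")
</syntaxhighlight>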
 
The implementation involves [[integrated circuit design]], packaging, [[Electric power|power]], and [[Computer cooling|cooling]]. Optimization of the design requires familiarity with topics from [[compiler]]s and [[operating system]]s to [[logic design]] and packaging.<ref>{{Cite web|url=https://www.cis.upenn.edu/~milom/cis501-Fall11/lectures/00_intro.pdf|title=What is computer architecture?|last=Martin|first=Milo|website=UPENN|access-date=11 May 2017}}</ref>
 
===Instruction set architecture===
{{Main|Instruction set architecture}}
An [[instruction set architecture]] (ISA) is the interface between the computer's software and hardware; it can also be viewed as the programmer's view of the machine. Computers do not understand [[high-level programming language]]s such as [[Java (programming language)|Java]] or [[C++]]. A processor only understands instructions encoded in some numerical fashion, usually as [[Binary numeral system|binary number]]s. Software tools, such as [[compiler]]s, translate those high-level languages into instructions that the processor can understand.<ref>{{cite web |title=Glossary |url=https://codasip.com/glossary/isa |website=Codasip |access-date=30 May 2025}}</ref><ref>{{cite web |title=What is Instruction Set Architecture (ISA)? |url=https://www.arm.com/glossary/isa |website=The Architecture for the Digital World |access-date=30 May 2025 |language=en}}</ref>
 
Besides instructions, the ISA defines items in the computer that are available to a program&mdash;e.g., [[data type]]s, [[Processor register|registers]], [[addressing mode]]s, and [[Computer memory|memory]]. Instructions locate these available items with register indexes (or names) and memory addressing modes.<ref>{{cite web |title=Organization of Computer Systems: ISA, Machine Language, Number Systems |url=https://www.cise.ufl.edu/~mssz/CompOrg/CDA-lang.html |website=www.cise.ufl.edu |access-date=30 May 2025}}</ref><ref>{{cite web |title=Instruction Set Architecture – Computer Architecture |url=https://www.cs.umd.edu/~meesh/411/CA-online/chapter/instruction-set-architecture/index.html |website=www.cs.umd.edu |access-date=30 May 2025}}</ref>
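
As a minimal sketch of how such items appear in an instruction encoding, a fixed-width instruction word might pack an opcode, two register indexes, and an addressing-mode field. The field widths and opcode values below are hypothetical, not taken from any real ISA:

<syntaxhighlight lang="python">
# Sketch of packing an instruction into a 16-bit word. All field widths
# and opcode numbers here are invented for illustration.

OPCODES = {"ADD": 0b0001, "LOAD": 0b0010}  # 4-bit opcode field (hypothetical)

def encode(op, rd, rs, mode):
    """Pack opcode (4 bits), dest register (4), source register (4),
    and addressing mode (4) into one instruction word."""
    return (OPCODES[op] << 12) | (rd << 8) | (rs << 4) | mode

word = encode("ADD", rd=3, rs=7, mode=0)  # mode 0 = register addressing
print(f"{word:016b}")                     # -> 0001001101110000
</syntaxhighlight>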
 
The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. It may also define short, vaguely mnemonic names for the instructions. The names can be recognized by a software development tool called an [[assembler (computer programming)|assembler]]. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. [[Disassembler]]s are also widely available, usually in [[debugger]]s and other software programs used to isolate and correct malfunctions in binary computer programs.<ref>{{cite book |last1=Hennessy |first1=John L. |last2=Patterson |first2=David A. |title=Computer Architecture: A Quantitative Approach |date=23 November 2017 |publisher=[[Morgan Kaufmann Publishers]] |isbn=978-0-12-811906-8 |url=https://google.com/books/edition/Computer_Architecture/cM8mDwAAQBAJ |access-date=30 May 2025 |language=en}}</ref>
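
A toy assembler and disassembler, again for an ISA invented for this sketch, shows the round trip between mnemonic names and machine-readable words:

<syntaxhighlight lang="python">
# Toy assembler/disassembler for a hypothetical one-byte ISA:
# high 4 bits hold the opcode, low 4 bits hold a register index.

MNEMONIC_TO_OPCODE = {"NOP": 0x0, "INC": 0x1, "DEC": 0x2}  # invented encodings
OPCODE_TO_MNEMONIC = {v: k for k, v in MNEMONIC_TO_OPCODE.items()}

def assemble(lines):
    """Translate human-readable mnemonics into machine words."""
    words = []
    for line in lines:
        mnemonic, reg = line.split()
        words.append((MNEMONIC_TO_OPCODE[mnemonic] << 4) | int(reg[1:]))
    return words

def disassemble(words):
    """Recover mnemonics from machine words, as a debugger's disassembler does."""
    return [f"{OPCODE_TO_MNEMONIC[w >> 4]} r{w & 0xF}" for w in words]

code = assemble(["INC r2", "DEC r5"])
print([f"{w:02x}" for w in code])  # -> ['12', '25']
print(disassemble(code))           # -> ['INC r2', 'DEC r5']
</syntaxhighlight>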
 
ISAs vary in quality and completeness. A good ISA compromises between [[programmer]] convenience (how easy the code is to understand), size of the code (how much code is required to do a specific action), cost of the [[computer]] to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time). [[Memory organisation|Memory organization]] defines how instructions interact with the memory, and how memory interacts with itself.

Performance is affected by a very wide range of design choices; for example, [[Pipeline (computing)|pipelining]] a processor usually makes latency worse but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a [[real-time computing|real-time]] environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable and limited time after the brake pedal is sensed, or the brakes will fail.
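
A back-of-the-envelope calculation, with invented stage delays rather than figures from a real design, shows why pipelining trades latency for throughput: each instruction passes through more clocked stages, but a new instruction can complete every cycle:

<syntaxhighlight lang="python">
# Illustrative pipeline arithmetic; the delays below are made-up numbers.

stage_ns = 1.0   # delay of each pipeline stage, in nanoseconds
stages = 5       # depth of the pipeline
latch_ns = 0.1   # latch overhead added per stage by pipelining

# Unpipelined: one instruction completes every (stages * stage_ns).
unpipelined_latency = stages * stage_ns           # 5.0 ns per instruction
unpipelined_throughput = 1 / unpipelined_latency  # 0.2 instructions/ns

# Pipelined: per-instruction latency is worse (latch overhead), but an
# instruction can finish every cycle, so throughput is much better.
cycle = stage_ns + latch_ns
pipelined_latency = stages * cycle                # 5.5 ns per instruction
pipelined_throughput = 1 / cycle                  # ~0.91 instructions/ns

print(pipelined_latency > unpipelined_latency)        # True: latency worse
print(pipelined_throughput > unpipelined_throughput)  # True: throughput better
</syntaxhighlight>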
 
[[Benchmark (computing)|Benchmarking]] takes all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it should not be the sole basis for choosing a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might render [[video game]]s more smoothly. Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to general tasks.
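
The skeleton of such a measurement is simple. The following sketch, with placeholder workloads standing in for a real benchmark suite, times a series of test programs and keeps the best of several runs:

<syntaxhighlight lang="python">
# Minimal benchmark-harness sketch; the workloads are placeholders, not
# programs from any actual benchmark suite.
import time

def workload_arithmetic():
    return sum(i * i for i in range(100_000))

def workload_strings():
    return "".join(str(i) for i in range(10_000))

def benchmark(tests, repeats=5):
    """Run each test several times and keep the best wall-clock time."""
    results = {}
    for test in tests:
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            test()
            times.append(time.perf_counter() - start)
        results[test.__name__] = min(times)
    return results

print(benchmark([workload_arithmetic, workload_strings]))
</syntaxhighlight>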
 
===Power efficiency===
Power efficiency is another important measurement in modern computers. Higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt).
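
As a worked example of the metric, with figures made up for illustration, a hypothetical processor retiring two billion instructions per second while drawing 10 watts achieves 200 MIPS/W:

<syntaxhighlight lang="python">
# Sketch of the MIPS/W calculation; the inputs are invented numbers.

def mips_per_watt(instructions_per_second, watts):
    """Millions of instructions per second, per watt of power drawn."""
    return (instructions_per_second / 1e6) / watts

print(mips_per_watt(2e9, 10))  # -> 200.0 MIPS/W
</syntaxhighlight>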
 
Modern circuits require less power per [[transistor]] as the number of transistors per chip grows, because each new generation of transistors is smaller and can switch with less energy.<ref>{{Cite web|url=http://eacharya.inflibnet.ac.in/data-server/eacharya-documents/53e0c6cbe413016f23443704_INFIEP_33/192/ET/33-192-ET-V1-S1__ssed_unit_4_module_10_integrated_circuits_and_fabrication_e-text.pdf|title=Integrated circuits and fabrication|access-date=8 May 2017}}</ref> Total chip power still tends to rise, however, since every additional transistor needs its own power delivery and interconnect pathways. Moreover, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is becoming as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs have shown this emphasis, as they put more focus on power efficiency rather than cramming as many transistors as possible into a single chip.<ref>{{Cite web|url=http://www.samsung.com/semiconductor/minisite/Exynos/w/solution/mod_ap/8895/?CID=AFL-hq-mul-0813-11000170|title=Exynos 9 Series (8895)|website=Samsung|access-date=8 May 2017}}</ref> In the world of [[embedded computers]], power efficiency has long been an important goal next to throughput and latency.
 
===Shifts in market demand===
Clock frequencies have increased more slowly over the past few years, compared to power reduction improvements. This has been driven by the end of [[Moore's Law]] and demand for longer [[battery life]] and reductions in size for [[mobile technology]]. This change in focus from higher clock rates to power consumption and miniaturization can be seen in the significant reductions in power consumption, as much as 50%, that were reported by [[Intel]] with the release of the [[Haswell (microarchitecture)|Haswell microarchitecture]], in which they dropped their power consumption benchmark from 30–40 [[watt]]s down to 10–20 watts.<ref>{{Cite web|url=http://www.intel.com/content/dam/doc/white-paper/resources-xeon-measuring-processor-power-paper.pdf|title=Measuring Processor Power TDP vs ACP|date=April 2011|website=Intel|access-date=5 May 2017}}</ref> Comparing this to the processing speed increase from 3 GHz to 4 GHz (2002 to 2006) shows that the focus in research and development is shifting away from clock frequency and toward consuming less power and taking up less space.<ref>{{Cite web |date=24 April 2012 |title=History of Processor Performance |url=https://www.cs.columbia.edu/~sedwards/classes/2012/3827-spring/advanced-arch-2011.pdf |access-date=5 May 2017 |website=cs.columbia.edu}}</ref>
 
==See also==
==External links==
{{Commons category}}
* [https://www.youtube.com/user/cmu18447 Carnegie Mellon Computer Architecture Lectures]
* [http://portal.acm.org/toc.cfm?id=SERIES416&type=series&coll=GUIDE&dl=GUIDE&CFID=41492512&CFTOKEN=82922478 ISCA: Proceedings of the International Symposium on Computer Architecture]
* [http://www.microarch.org/ Micro: IEEE/ACM International Symposium on Microarchitecture]