{{Short description|Set of rules describing computer system}}
{{Lead too short|date=November 2023}}
[[File:Computer architecture block diagram.png|alt=|thumb|Block diagram of a computer architecture]]
In [[computer science]] and [[computer engineering]], a '''computer architecture''' is a set of rules and methods that describe the functionality, organization, and implementation of a computer system.
== History ==
The first documented computer architecture was in the correspondence between [[Charles Babbage]] and [[Ada Lovelace]], describing the [[analytical engine]]. While building the computer [[Z1 (computer)|Z1]] in 1936, [[Konrad Zuse]] described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the [[Stored-program computer|stored-program]] concept.<ref>{{citation |title=Electronic Digital Computers |journal=Nature |date=25 September 1948 |volume=162 |page=487 |doi=10.1038/162487a0 |last1=Williams |first1=F. C. |last2=Kilburn |first2=T. |issue=4117 |bibcode=1948Natur.162..487W |s2cid=4110351 |doi-access=free }}</ref><ref>Susanne Faber, "Konrad Zuses Bemuehungen um die Patentanmeldung der Z3", 2000</ref> Two other early and important examples are:
* [[John von Neumann]]'s 1945 paper, [[First Draft of a Report on the EDVAC]], which described an organization of logical elements;<ref>{{Cite book|title=First Draft of a Report on the EDVAC|last=Neumann|first=John|year=1945|pages=9}}</ref> and
*[[Alan M. Turing|Alan Turing]]'s more detailed ''Proposed Electronic Calculator'' for the [[Automatic Computing Engine]], also from 1945, which cited [[John von Neumann]]'s paper.<ref>Reproduced in B. J. Copeland (Ed.), "Alan Turing's Automatic Computing Engine", Oxford University Press, 2005, pp.</ref>
The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and [[Fred Brooks|Frederick P. Brooks, Jr.]], members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the [[IBM 7030 Stretch|Stretch]], an IBM-developed [[supercomputer]] for [[Los Alamos National Laboratory]] (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements were at the level of "system architecture", a term that seemed more useful than "machine organization".<ref>{{cite web|url=https://archive.computerhistory.org/resources/text/IBM/Stretch/pdfs/05-10/102634114.pdf |last1= Johnson |first1=Lyle| title= A Description of Stretch|page=1|year=1960|access-date=7 October 2017}}</ref>
Computer architecture is concerned with balancing the performance, efficiency, cost, and reliability of a computer system. The case of instruction set architecture can be used to illustrate the balance of these competing factors. More complex [[instruction set]]s enable programmers to write more space-efficient programs, since a single instruction can encode some higher-level abstraction (such as the [[X86 instruction listings|x86 Loop instruction]]).<ref>{{cite book |last1=Null |first1=Linda |title=The Essentials of Computer Organization and Architecture |date=2019 |publisher=Jones & Bartlett Learning |___location=Burlington, MA |isbn=9781284123036 |page=280 |edition=5th}}</ref> However, longer and more complex instructions take longer for the [[Processor (computing)|processor]] to decode and can be more costly to implement effectively. The increased complexity from a large instruction set also creates more room for unreliability when instructions interact in unexpected ways.
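For illustration, consider the following C loop (a sketch, not tied to any particular compiler). Depending on the instruction set, the countdown-and-branch at the bottom of the loop may be expressed as a single complex instruction or as several simpler ones, which affects how much code the program occupies.

<syntaxhighlight lang="c">
/* A counted loop in C; how compactly the countdown and branch encode
   depends on the instruction set. */
void zero_fill(char *buf, unsigned n)
{
    while (n != 0) {
        *buf++ = 0;   /* a complex ISA might fold the decrement and the   */
        n--;          /* backward branch into one instruction (e.g. x86   */
    }                 /* LOOP); a simpler ISA needs separate decrement    */
}                     /* and conditional-branch instructions.             */
</syntaxhighlight>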
The implementation involves [[integrated circuit design]], packaging, [[Electric power|power]], and [[Computer cooling|cooling]]. Optimization of the design requires familiarity with topics from [[compiler]]s and [[operating system]]s to [[logic design]] and packaging.<ref>{{Cite web|url=https://www.cis.upenn.edu/~milom/cis501-Fall11/lectures/00_intro.pdf|title=What is computer architecture?|last=Martin|first=Milo|website=UPENN|access-date=11 May 2017}}</ref>
===Instruction set architecture===
{{Main|Instruction set architecture}}
An [[instruction set architecture]] (ISA) is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand [[high-level programming language]]s such as [[Java (programming language)|Java]], [[C++]], or most other programming languages in use. A processor only understands instructions encoded in some numerical fashion, usually as [[Binary numeral system|binary number]]s. Software tools, such as [[compiler]]s, translate those high-level languages into instructions that the processor can understand.<ref>{{cite web |title=Glossary |url=https://codasip.com/glossary/isa |website=Codasip |access-date=30 May 2025}}</ref><ref>{{cite web |title=What is Instruction Set Architecture (ISA)? |url=https://www.arm.com/glossary/isa |website=The Architecture for the Digital World |access-date=30 May 2025 |language=en}}</ref>
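As a simple illustration (the instruction sequence shown is hypothetical, not the output of any specific compiler), even a one-line C function is only usable by the processor after it has been translated into a short sequence of machine instructions:

<syntaxhighlight lang="c">
/* A compiler translates this C function into machine instructions;
   the exact sequence depends on the target ISA. */
int add_one(int x)
{
    return x + 1;   /* hypothetical RISC-like translation:
                       1. read the argument from its register
                       2. add the constant 1
                       3. place the result in the return register and return */
}
</syntaxhighlight>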
Besides instructions, the ISA defines items in the computer that are available to a program—e.g., [[data type]]s, [[Processor register|registers]], [[addressing mode]]s, and [[Computer memory|memory]]. Instructions locate these available items with register indexes (or names) and memory addressing modes.<ref>{{cite web |title=Organization of Computer Systems: ISA, Machine Language, Number Systems |url=https://www.cise.ufl.edu/~mssz/CompOrg/CDA-lang.html |website=www.cise.ufl.edu |access-date=30 May 2025}}</ref><ref>{{cite web |title=Instruction Set Architecture – Computer Architecture |url=https://www.cs.umd.edu/~meesh/411/CA-online/chapter/instruction-set-architecture/index.html |website=www.cs.umd.edu |access-date=30 May 2025}}</ref>
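As a sketch of how instructions locate these items, the following C fragment decodes a hypothetical 16-bit instruction word into an opcode, a register index, an addressing-mode field, and an immediate value; real ISAs define their own field widths and meanings.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 16-bit instruction format:
   bits 15-12: opcode, bits 11-8: register index,
   bits 7-6: addressing mode, bits 5-0: immediate/offset. */
int main(void)
{
    uint16_t word = 0x3A85;               /* an example encoding, made up */
    unsigned opcode = (word >> 12) & 0xF;
    unsigned reg    = (word >> 8)  & 0xF;
    unsigned mode   = (word >> 6)  & 0x3; /* e.g. 0 = register, 1 = immediate,
                                             2 = direct, 3 = register-indirect */
    unsigned imm    =  word        & 0x3F;
    printf("opcode=%u reg=%u mode=%u imm=%u\n", opcode, reg, mode, imm);
    return 0;
}
</syntaxhighlight>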
The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. Also, it may define short, loosely mnemonic names for the instructions. The names can be recognized by a software development tool called an [[assembler (computer programming)|assembler]]. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. [[Disassembler]]s are also widely available, usually in [[debugger]]s and other software tools used to isolate and correct malfunctions in binary computer programs.<ref>{{cite book |last1=Hennessy |first1=John L. |last2=Patterson |first2=David A. |title=Computer Architecture: A Quantitative Approach |date=23 November 2017 |publisher=[[Morgan Kaufmann Publishers]] |isbn=978-0-12-811906-8 |url=https://google.com/books/edition/Computer_Architecture/cM8mDwAAQBAJ |access-date=30 May 2025 |language=en}}</ref>
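As a minimal sketch of what an assembler does (using a made-up four-instruction ISA, not a real one), the following C program maps mnemonics to numeric opcodes; production assemblers additionally handle operands, labels, and object-file formats.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

/* Toy assembler: translate mnemonics of a hypothetical ISA into opcode bytes. */
struct op { const char *mnemonic; unsigned char opcode; };

static const struct op table[] = {
    { "LOAD",  0x01 },   /* the encodings here are invented for illustration */
    { "STORE", 0x02 },
    { "ADD",   0x03 },
    { "HALT",  0xFF },
};

int main(void)
{
    const char *program[] = { "LOAD", "ADD", "STORE", "HALT" };
    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++) {
        for (size_t j = 0; j < sizeof table / sizeof table[0]; j++) {
            if (strcmp(program[i], table[j].mnemonic) == 0) {
                /* A real assembler would write these bytes to an object file. */
                printf("%s -> 0x%02X\n", program[i], table[j].opcode);
                break;
            }
        }
    }
    return 0;
}
</syntaxhighlight>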
ISAs vary in quality and completeness. A good ISA compromises between [[programmer]] convenience (how easy the code is to understand), size of the code (how much code is required to do a specific action), cost of the [[computer]] to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time). [[Memory organisation|Memory organization]] defines how instructions interact with the memory, and how memory interacts with itself.
Power efficiency is another important measurement in modern computers. Higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt).
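For example, using hypothetical figures, a processor that executes two billion instructions per second (2,000 MIPS) while drawing 10 watts achieves <math>2000\ \text{MIPS} / 10\ \text{W} = 200\ \text{MIPS/W}</math>.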
Modern circuits require less power per [[transistor]] as the number of transistors per chip grows.<ref>{{Cite web|url=http://eacharya.inflibnet.ac.in/data-server/eacharya-documents/53e0c6cbe413016f23443704_INFIEP_33/192/ET/33-192-ET-V1-S1__ssed_unit_4_module_10_integrated_circuits_and_fabrication_e-text.pdf|title=Integrated circuits and fabrication|access-date=8 May 2017}}</ref> Nevertheless, each transistor added to a chip needs its own power supply and new pathways to deliver that power, so total power consumption continues to rise as transistor counts grow. However, the number of transistors per chip is now increasing at a slower rate, so power efficiency is becoming as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs reflect this emphasis, focusing on power efficiency rather than cramming as many transistors as possible into a single chip.<ref>{{Cite web|url=http://www.samsung.com/semiconductor/minisite/Exynos/w/solution/mod_ap/8895/?CID=AFL-hq-mul-0813-11000170|title=Exynos 9 Series (8895)|website=Samsung|access-date=8 May 2017}}</ref> In the world of [[embedded computers]], power efficiency has long been an important goal, next to throughput and latency.
===Shifts in market demand===
==External links==
{{Commons category}}
* [https://www.youtube.com/user/cmu18447 Carnegie Mellon Computer Architecture Lectures]
* [http://portal.acm.org/toc.cfm?id=SERIES416&type=series&coll=GUIDE&dl=GUIDE&CFID=41492512&CFTOKEN=82922478 ISCA: Proceedings of the International Symposium on Computer Architecture]
* [http://www.microarch.org/ Micro: IEEE/ACM International Symposium on Microarchitecture]