===Instruction set architecture===
The most common [[instruction set architecture]] (ISA), the interface between a computer's hardware and software, is based on the one devised by von Neumann in 1945.{{sfn|Mendelson|2022|p=2}} Although many diagrams depict the computing unit and the I/O system as separate, the hardware is typically shared, with a bit in the computing unit indicating whether it is in computation or I/O mode.{{sfn|Mendelson|2022|pp=2–3}} Common types of ISAs include CISC ([[complex instruction set computer]]), RISC ([[reduced instruction set computer]]), [[Vector processor|vector operations]], and hybrid modes.{{sfn|Mendelson|2022|p=3}} CISC uses a larger set of more complex instructions to minimize the number of instructions a program needs to execute.{{sfn|Mendelson|2022|p=8}} Based on the recognition that only a few instructions are commonly used, RISC shrinks the instruction set for added simplicity, which also enables the inclusion of more [[register (computing)|register]]s.{{sfn|Mendelson|2022|p=15}} After the invention of RISC in the 1980s, RISC-based architectures that used [[Pipeline (computing)|pipelining]] and [[caching]] to increase performance displaced CISC architectures, particularly in applications with restrictions on power usage or space (such as [[mobile phone]]s). From 1986 to 2003, the annual rate of improvement in hardware performance exceeded 50 percent, enabling the development of new computing devices such as [[Tablet computer|tablet]]s and smartphones.{{sfn|Hennessy|Patterson|2011|p=2}} Alongside increasing transistor density, DRAM as well as flash and magnetic disk storage also became exponentially more compact and cheaper. The rate of improvement slowed in the twenty-first century.{{sfn|Hennessy|Patterson|2011|pp=17–18}}
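
To illustrate the scale of this trend, the cumulative effect of a sustained 50 percent annual improvement can be computed directly. The calculation below is a minimal sketch assuming a steady rate over the cited 1986–2003 period, a simplification of the actual year-to-year figures:
<syntaxhighlight lang="python">
# Illustrative only: the compound effect of roughly 50% annual performance
# growth, assuming a steady rate over the 17 years from 1986 to 2003.
years = 2003 - 1986          # 17 years
annual_growth = 1.50         # 50 percent improvement per year
factor = annual_growth ** years
print(f"Cumulative speedup over {years} years: ~{factor:,.0f}x")
# Cumulative speedup over 17 years: ~985x
</syntaxhighlight>
Even under this simplified constant-rate assumption, performance grows by nearly three orders of magnitude over seventeen years.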
 
In the twenty-first century, increases in performance have been driven by increasing exploitation of [[Parallel computing|parallelism]].{{sfn|Hennessy|Patterson|2011|pp=9, 44}} Applications are often parallelizable in two ways: either the same function runs across multiple areas of data ([[data parallelism]]), or different tasks can be performed simultaneously with limited interaction ([[task parallelism]]).{{sfn|Hennessy|Patterson|2011|p=9}} These forms of parallelism are accommodated by various hardware strategies: [[instruction-level parallelism]] (such as [[instruction pipelining]]), vector architectures and [[graphics processing unit]]s (GPUs), which implement data parallelism, and thread-level and request-level parallelism, which implement task parallelism.{{sfn|Hennessy|Patterson|2011|p=9}}
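
The two forms of parallelism can be sketched with Python's standard <code>concurrent.futures</code> module. The functions <code>square</code>, <code>count_words</code>, and <code>measure_length</code> are hypothetical examples invented for illustration, not drawn from the cited sources:
<syntaxhighlight lang="python">
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def square(x):
    """Same function applied to many data items (data parallelism)."""
    return x * x

def count_words(text):
    """One of several independent tasks (task parallelism)."""
    return len(text.split())

def measure_length(text):
    """Another independent task, run concurrently with count_words."""
    return len(text)

if __name__ == "__main__":
    # Data parallelism: one function, many pieces of data, run in parallel.
    with ProcessPoolExecutor() as pool:
        squares = list(pool.map(square, range(10)))

    # Task parallelism: different functions run concurrently,
    # with limited interaction between them.
    with ThreadPoolExecutor() as pool:
        f1 = pool.submit(count_words, "the quick brown fox")
        f2 = pool.submit(measure_length, "the quick brown fox")
        results = (f1.result(), f2.result())

    print(squares)   # [0, 1, 4, ..., 81]
    print(results)   # (4, 19)
</syntaxhighlight>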
 
Components directly attached to or part of the motherboard include:
* At least one [[central processing unit|CPU]] (central processing unit), which performs the majority of the computational tasks required for a computer to operate.{{sfn|Wang|2021|p=8}} Often described informally as the "brain" of the computer,{{sfn|Wang|2021|p=9}} the CPU fetches program instructions from [[random-access memory]] (RAM), decodes and executes them, and then returns results for further processing by other components. This process is known as the [[instruction cycle]]; a simplified simulation of the cycle appears after this list. Modern CPUs are [[microprocessor]]s fabricated on a [[metal–oxide–semiconductor]] (MOS) [[integrated circuit]] (IC) using advanced [[semiconductor device fabrication]] techniques, often employing [[photolithography]]. They are typically cooled using a [[heatsink]] and [[computer fan|fan]] or a [[liquid cooling|liquid-cooling system]]. Many contemporary CPUs integrate an on-die [[graphics processing unit]] ([[integrated graphics|GPU]]), eliminating the need for a discrete GPU in basic systems. CPU performance is influenced by clock speed, measured in gigahertz (GHz), with common consumer processors ranging from 1 GHz to 5 GHz.{{cn|date=August 2024}} Additionally, there is a growing trend toward [[multi-core processor|multi-core designs]], where multiple processing cores are included on a single chip, enabling greater [[parallel computing|parallelism]] and improved multitasking performance.{{sfn|Wang|2021|p=9}}
* The internal bus connects the CPU to main memory via multiple communication lines (typically 50 to 100) divided into address, data, and control buses, each handling specific types of signals.{{sfn|Wang|2021|p=75}} Historically, parallel buses were dominant, but in the twenty-first century, high-speed serial buses (often using [[serializer/deserializer]] (SerDes) technology) have largely replaced them, enabling greater data throughput over fewer physical connections; examples include [[PCI Express]] and [[USB]].{{sfn|Wang|2021|p=78}} In systems with multiple processors, an interconnect bus is used, traditionally coordinated by a [[Northbridge (computing)|northbridge]] chip, which links the CPU, memory, and high-speed peripherals such as [[PCI]], while the [[Southbridge (computing)|southbridge]] handles communication with slower I/O devices such as storage and USB ports.{{sfn|Wang|2021|p=90}} However, in modern architectures, such as systems based on [[Intel QuickPath Interconnect]] or [[AMD Ryzen]] processors, these functions are increasingly integrated into the CPU itself, forming a design similar to a [[system on a chip]] (SoC). A schematic sketch of a single bus transaction follows this list.
* [[Random-access memory]] (RAM) stores the code and data actively used by the CPU, organized in a [[memory hierarchy]] optimized for access speed and predicted reuse. At the top of this hierarchy are [[processor register|registers]], located within the CPU core, offering the fastest access but extremely limited capacity.{{sfn|Wang|2021|p=47}} Below registers are multiple levels of [[cache memory]] (L1, L2, and sometimes L3), typically implemented using [[static random-access memory]] (SRAM). Caches have greater capacity than registers but less than main memory, and while slower than registers, they are significantly faster than the [[dynamic random-access memory]] (DRAM) used for main RAM.{{sfn|Wang|2021|pp=49–50}} Caching improves performance by holding frequently used data close to the processor and by [[prefetching]] it before it is needed, thereby reducing [[memory latency]].{{sfn|Wang|2021|pp=49–50}}{{sfn|Hennessy|Patterson|2011|p=45}} When data is not found in the cache (a [[cache miss]]), it is retrieved from main memory; a toy cache model follows this list. RAM is volatile, meaning its contents are lost when the system loses power.{{sfn|Wang|2021|p=54}} In modern systems, DRAM is usually of the [[DDR SDRAM]] type, such as DDR4 or DDR5.
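
The instruction cycle mentioned in the CPU entry above can be sketched as a fetch-decode-execute loop for a hypothetical accumulator machine. The opcodes and instruction encoding below are invented for illustration and do not correspond to any real ISA:
<syntaxhighlight lang="python">
# Minimal sketch of a fetch-decode-execute loop for a hypothetical
# accumulator machine; the opcodes below are invented for illustration.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3

def run(memory):
    """Repeatedly fetch an (opcode, operand) pair from memory and execute it."""
    pc = 0           # program counter: address of the next instruction
    acc = 0          # accumulator register
    while True:
        opcode, operand = memory[pc], memory[pc + 1]   # fetch
        pc += 2
        if opcode == LOAD:                             # decode and execute
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Program: load memory[8], add memory[9], store the result in memory[10].
ram = [LOAD, 8, ADD, 9, STORE, 10, HALT, 0, 2, 3, 0]
print(run(ram)[10])   # 5
</syntaxhighlight>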
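
The division of a bus into address, data, and control lines, noted in the internal-bus entry, can be sketched as a single write-then-read transaction. The signal names and toy memory here are illustrative and not tied to any real bus standard:
<syntaxhighlight lang="python">
# Schematic model of one transaction over a simple parallel bus: the
# address lines select a location, the data lines carry the value, and
# a control line distinguishes reads from writes. Names are illustrative.
memory = [0] * 256

def bus_transaction(address_lines, data_lines, write_enable):
    """Perform one bus cycle against the toy memory above."""
    if write_enable:                   # control bus signals a write cycle
        memory[address_lines] = data_lines
        return None
    return memory[address_lines]       # control bus signals a read cycle

bus_transaction(0x10, 42, write_enable=True)          # CPU writes 42 to 0x10
print(bus_transaction(0x10, 0, write_enable=False))   # reads back 42
</syntaxhighlight>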
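
Finally, the cache hits and misses described in the RAM entry can be modeled with a toy direct-mapped cache. The line count and address mapping are simplified illustrations, not a model of any particular processor:
<syntaxhighlight lang="python">
# Toy direct-mapped cache: each memory address maps to exactly one of
# NUM_LINES cache lines. Real caches also have line sizes, associativity,
# and multiple levels (L1/L2/L3); this sketch only counts hits and misses.
NUM_LINES = 4

class ToyCache:
    def __init__(self):
        self.lines = [None] * NUM_LINES   # address tag stored per line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line = address % NUM_LINES        # which cache line this address maps to
        if self.lines[line] == address:
            self.hits += 1                # cache hit: data already present
        else:
            self.misses += 1              # cache miss: fetch from main memory
            self.lines[line] = address

cache = ToyCache()
for addr in [0, 1, 2, 0, 1, 2, 0, 1, 2]:  # repeated reuse of a small working set
    cache.access(addr)
print(cache.hits, cache.misses)   # 6 3: first touches miss, later reuse hits
</syntaxhighlight>
The repeated accesses hit in the cache after the first pass, which is the locality of reference that makes the memory hierarchy effective.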