{{short description|Amount of useful work accomplished by a computer}}

In [[computing]], '''computer performance''' is the amount of useful work accomplished by a [[computer system]]. Outside of specific contexts, computer performance is estimated in terms of accuracy, [[efficiency]] and speed of executing [[computer program]] instructions. High computer performance may involve one or more of the following factors:
 
* Short [[Response time (technology)|response time]] for a given piece of work.
* High [[throughput]] (rate of processing work tasks).
* Low utilization of [[computing resource]](s).
** Fast (or highly compact) [[data compression]] and decompression.
* [[High availability]] of the computing system or application.
* In absolute terms, e.g. for fulfilling a contractual obligation
 
Whilst the above definition relates to a scientific, technical approach, the following definition given by [[Arnold Allen (mathematician)|Arnold Allen]] would be useful for a non-technical audience:
 
<blockquote>''The word ''performance'' in computer performance means the same thing that performance means in other contexts, that is, it means "How well is the computer doing the work it is supposed to do?"''<ref>Computer Performance Analysis with Mathematica by Arnold O. Allen, Academic Press, 1994. ''§1.1 Introduction, pg 1.''</ref></blockquote>
 
== Aspects of performance ==
Computer performance [[Software metric|metrics]] (things to measure) include [[availability]], [[Response time (technology)|response time]], [[channel capacity]], [[Latency (engineering)|latency]], [[completion time]], [[service time]], [[Bandwidth (computing)|bandwidth]], [[throughput]], [[relative efficiency]], [[scalability]], [[performance per watt]], [[Data compression|compression ratio]], [[instruction path length]] and [[speed up]]. [[CPU]] benchmarks are available.<ref>{{citation|title=Measuring Program Similarity: Experiments with SPEC CPU Benchmark Suites|year=2005 |pages=10–20 |citeseerx=10.1.1.123.501 }}</ref>
 
=== Availability ===
{{Main|Availability}}
 
Availability of a system is typically measured as a factor of its reliability: as reliability increases, so does availability (that is, less [[downtime]]). Availability of a system may also be increased by focusing on improving testability and maintainability rather than reliability. Improving maintainability is generally easier than improving reliability, and maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in reliability estimates are in most cases very large, reliability is likely to dominate the uncertainty in any availability prediction, even when maintainability levels are very high.
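The trade-off described above can be made concrete with the standard steady-state availability formula ''A'' = MTBF / (MTBF + MTTR), where MTBF is the mean time between failures (reliability) and MTTR is the mean time to repair (maintainability). A minimal Python sketch, using illustrative figures only:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Improving maintainability (halving MTTR) raises availability
# without touching reliability at all:
print(availability(1000, 10))  # about 0.990
print(availability(1000, 5))   # about 0.995
```

This also illustrates why large uncertainty in MTBF dominates the prediction: a 10% error in the 1000-hour figure moves availability far more than a 10% error in the repair rate.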
 
=== Response time ===
{{Main|Response time (technology)}}
 
Response time is the total amount of time it takes to respond to a request for service. In computing, that service can be any unit of work from a simple [[computer data storage|disk IO]] to loading a complex [[web page]]. The response time is the sum of three numbers:<ref>{{cite book | last = Wescott | first = Bob | title = The Every Computer Performance Book, Chapter 3: Useful laws | url= https://www.amazon.com/Every-Computer-Performance-Book-Computers/dp/1482657759/ | publisher = [[CreateSpace]] | date = 2013 | isbn = 978-1482657753}}</ref>
* Service time - How long it takes to do the work requested.
* Wait time - How long the request has to wait for requests queued ahead of it before it gets to run.
{{Main|Instructions per second|FLOPS}}
 
Most consumers pick a computer architecture (normally [[Intel]] [[IA-32]] architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see [[megahertz myth]]).
 
Some system designers building parallel computers pick CPUs based on the speed per dollar.
{{Main|Latency (engineering)}}
 
Latency is a time delay between the cause and the effect of some physical change in the system being observed. Latency is a result of the limited velocity with which any physical interaction can take place. This velocity is always less than or equal to the speed of light. Therefore, every physical system that has non-zero spatial dimensions will experience some sort of latency.
 
The precise definition of latency depends on the system being observed and the nature of stimulation. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human-machine interaction, perceptible latency (delay between what the user commands and when the computer provides the results) has a strong effect on user satisfaction and usability.
Computers run sets of instructions called a process. In operating systems, the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000&nbsp;Hz. The operating system may choose to adjust the scheduling of each transition (high-low or low-high) based on an internal clock. The latency is the delay between the process instruction commanding the transition and the hardware actually transitioning the voltage from high to low or low to high.
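The scheduling latency described above can be observed, roughly, even from user space. The following Python sketch commands evenly spaced events (as in the 1000&nbsp;Hz example) and reports the worst observed delay between the commanded deadline and the moment the process actually ran; the exact figure depends entirely on the operating system and its current load, so no particular value should be expected:

```python
import time

def measure_scheduling_latency(rate_hz: int = 1000, transitions: int = 200) -> float:
    """Command 'transitions' evenly spaced events at rate_hz and return the
    worst observed delay (in seconds) between the commanded deadline and the
    time the process actually resumed, a user-space analogue of the OS
    scheduling latency described above."""
    period = 1.0 / rate_hz
    start = time.perf_counter()
    worst = 0.0
    for i in range(1, transitions + 1):
        deadline = start + i * period
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # ask the OS to wake us at the deadline
        worst = max(worst, time.perf_counter() - deadline)
    return worst

print(f"worst-case observed latency: {measure_scheduling_latency() * 1e6:.0f} microseconds")
```

On a general-purpose OS the result typically varies from run to run, which is precisely why real-time systems need deterministic scheduling rather than best-effort timers.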
 
System designers building [[real-time computing]] systems want to guarantee worst-case response. That is easier to do when the CPU has low [[interrupt latency]] and when it has a deterministic response.
 
=== Bandwidth ===
{{Main|Bandwidth (computing)}}
 
In computer networking, bandwidth is a measurement of bit-rate of available or consumed [[data communication]] resources, expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
 
Bandwidth sometimes defines the net bit rate (also known as the peak bit rate, information rate, or physical-layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The reason for this usage is that according to Hartley's law, the maximum data rate of a physical communication link is proportional to its bandwidth in hertz, which is sometimes called frequency bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth.
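Hartley's law, as commonly stated, gives the maximum data rate of a link as 2''B''&nbsp;log<sub>2</sub>&nbsp;''M'' bit/s, where ''B'' is the analog bandwidth in hertz and ''M'' is the number of distinguishable signal levels. A short illustrative sketch (the channel figures are hypothetical):

```python
import math

def hartley_max_rate(bandwidth_hz: float, levels: int) -> float:
    """Hartley's law: the maximum data rate of a physical link with analog
    bandwidth B hertz and M distinguishable signal levels is
    2 * B * log2(M) bits per second."""
    return 2 * bandwidth_hz * math.log2(levels)

# A 3 kHz channel with 4 distinguishable signal levels:
print(hartley_max_rate(3000, 4))  # 12000.0 bit/s
```

This makes the proportionality explicit: doubling the bandwidth in hertz doubles the achievable bit rate, which is why "bandwidth" came to be used for data rate as well.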
In communication networks, throughput is essentially synonymous to digital bandwidth consumption. In [[wireless network]]s or [[cellular communication networks]], the [[system spectral efficiency]] in bit/s/Hz/area unit, bit/s/Hz/site or bit/s/Hz/cell, is the maximum system throughput (aggregate throughput) divided by the analog bandwidth and some measure of the system coverage area.
 
In integrated circuits, often a block in a [[data flow diagram]] has a single input and a single output, and operates on discrete packets of information. Examples of such blocks are [[Fast Fourier transform|FFT]] modules or [[binary multiplier]]s. Because the units of throughput are the reciprocal of the unit for [[propagation delay]], which is 'seconds per message' or 'seconds per output', throughput can be used to relate a computational device performing a dedicated function such as an [[ASIC]] or [[embedded processor]] to a communications channel, simplifying system analysis.
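Because throughput is simply the reciprocal of seconds per output, the sustained rate of such a block follows directly from its per-output processing time. A small illustrative sketch (the FFT timing figure is made up):

```python
def throughput(seconds_per_output: float) -> float:
    """Throughput (outputs per second) as the reciprocal of the time a
    dedicated block takes per output, as noted above."""
    return 1.0 / seconds_per_output

# A hypothetical FFT module that finishes one transform every
# 2 microseconds sustains about 500,000 transforms per second:
print(throughput(2e-6))
```

The same function applies unchanged to a communications channel measured in seconds per message, which is what makes the comparison between computational blocks and channels convenient.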
 
=== Relative efficiency ===
{{Main|Relative efficiency}}
 
=== Scalability ===
{{Main|Scalability}}
 
Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner or its ability to be enlarged to accommodate that growth.
 
=== Power consumption ===
 
The amount of [[electric power]] used by the computer ([[power consumption]]). This becomes especially important for systems with limited power sources such as solar, batteries, and human power.
 
==== Performance per watt ====
{{Main|Performance per watt}}
 
System designers building [[parallel computing|parallel computers]], such as [[Google search technology#Production hardware|Google's hardware]], pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.<ref>{{cite web |title=EEMBC -- the Embedded Microprocessor Benchmark Consortium |url=http://www.eembc.org/benchmark/consumer.asp?HTYPE=SIM |url-status=dead |archive-url=https://web.archive.org/web/20050327005323/http://www.eembc.org/benchmark/consumer.asp?HTYPE=SIM |archive-date=2005-03-27 |access-date=2009-01-21}}[http://news.cnet.com/Power+could+cost+more+than+servers,+Google+warns/2100-1010_3-5988090.html]</ref>
 
For spaceflight computers, the processing speed per watt ratio is a more useful performance criterion than raw processing speed due to limited on-board resources of power.<ref>D. J. Shirley; and M. K. McLelland. [https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=2656&context=smallsat "The Next-Generation SC-7 RISC Spaceflight Computer"]. p. 2.</ref>
 
=== Compression ratio ===
{{Further|Green computing}}
 
The effect of computing on the environment, during manufacturing and recycling as well as during use. Measurements are taken with the objectives of reducing waste, reducing hazardous materials, and minimizing a computer's [[ecological footprint]].
 
=== Transistor count ===
{{Main|Transistor count}}
 
The transistor count is the number of [[transistor]]s on an [[integrated circuit]] (IC). Transistor count is the most common measure of IC complexity.
 
== Benchmarks ==
{{Main|Software performance testing}}
 
In software engineering, performance testing is, in general, testing conducted to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.
 
Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the implementation, design, and architecture of a system.
 
=== Profiling (performance analysis)===
 
Profiling is achieved by instrumenting either the program [[source code]] or its binary executable form using a tool called a ''profiler'' (or ''code profiler''). A number of different techniques may be used by profilers, such as event-based, statistical, instrumented, and simulation methods.
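As a concrete example of the instrumented (deterministic) technique, Python's standard-library cProfile records every function call and can print the resulting statistics; the workload below is an arbitrary stand-in:

```python
import cProfile
import io
import pstats

def work(n: int) -> int:
    # Deliberately simple workload to give the profiler something to measure.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()  # a deterministic (instrumented) profiler
profiler.enable()
work(100_000)
profiler.disable()

# Sort the recorded call data by cumulative time and show the top entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Statistical (sampling) profilers, by contrast, periodically record the call stack rather than instrumenting every call, trading some accuracy for much lower overhead.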
 
== Processor ==
The central processing unit (CPU), also called the central processor, main processor, or simply the processor, is the primary processor in a given computer. Its electronic circuits execute instructions of a computer program, such as arithmetic, logical, control, and input-output (I/O) operations.<ref>{{Cite web|title=What is processor (CPU)?|url=https://www.techtarget.com/whatis/definition/processor|access-date=2025-08-15|work=www.techtarget.com}}</ref>
 
The performance or speed of a processor depends, among other things, on the clock frequency (usually measured in hertz) and the number of instructions per cycle (IPC), which together determine the number of instructions per second (IPS) the CPU can execute.<ref>{{Cite web|title=How is Processor Speed Measured: Understanding CPU Performance Metrics|url=https://bytebitebit.com/cpu/how-is-processor-speed-measured/|access-date=2025-08-15|work=bytebitebit.com}}</ref> Many reported IPS values represent "peak" execution speeds for artificial instruction sequences with few branches, whereas real workloads consist of a mix of instructions and applications, some of which run longer than others. The performance of the memory hierarchy also greatly affects processor performance, a factor that is rarely considered when calculating IPS. Due to these issues, various standardized tests, often called "benchmarks," such as SPECint, have been developed to attempt to measure real effective performance in commonly used applications.
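The relationship stated above, IPS as the product of clock frequency and IPC, can be sketched directly (the 3&nbsp;GHz and 2&nbsp;IPC figures are hypothetical peak values, not measurements of any real CPU):

```python
def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Peak instruction throughput: clock frequency (Hz) multiplied by the
    average number of instructions retired per cycle (IPC)."""
    return clock_hz * ipc

# A hypothetical 3 GHz core averaging 2 instructions per cycle:
ips = instructions_per_second(3e9, 2.0)
print(f"{ips / 1e9:.0f} GIPS")  # 6 GIPS
```

Real workloads rarely sustain the peak IPC, since branches, cache misses, and memory stalls lower the effective figure, which is why benchmarks such as SPECint are preferred over raw IPS.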
 
When choosing a computer or mobile device, processor performance plays a major role and is selected according to the tasks to be solved. For example, the Intel Core i7 has for many years served as a mainstream high-performance processor, offering balanced performance for general computing tasks to both mobile users and professionals.<ref>{{Cite web|title=Intel Core Ultra 7 vs i7|url=https://www.geekom.co.uk/intel-core-ultra-7-vs-i7|access-date=2025-08-15|work=www.geekom.co.uk}}</ref>
 
Computer performance can be increased through the use of multicore processors, which combine two or more separate processors (called cores in this context) on a single integrated circuit. Ideally, a dual-core processor would be nearly twice as powerful as a single-core one; in practice, the gain is closer to 50%, owing to imperfect software algorithms and implementation.<ref>{{Cite web|title=Quad Core Vs. Dual Core|url=https://techspirited.com/quad-core-vs-dual-core|access-date=2025-08-15|work=techspirited.com}}</ref> Increasing the number of cores (e.g., dual-core, quad-core) increases the workload a processor can handle, allowing it to service numerous asynchronous events, interrupts, and similar demands that might otherwise overload a single core. The cores can be viewed as different floors in a processing plant, with each floor handling its own task; sometimes a core will take on the same tasks as a neighboring core when one core alone is insufficient. Multicore CPUs enhance a computer's ability to perform multiple tasks simultaneously by providing additional computational power, but the speedup is not directly proportional to the number of cores added, because the cores must communicate through specific channels and this inter-core communication consumes part of the available computing power.<ref>{{Cite web|title=Factors Affecting Multi-Core Processors Performance|url=https://pcsite.co.uk/factors-affecting-multi-core-processors-performance/|access-date=2025-08-15|work=pcsite.co.uk}}</ref>
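One standard way to model why the multicore gain is less than proportional is [[Amdahl's law]], which accounts for the serial fraction of a program (inter-core communication adds further losses on top of this). A minimal sketch, assuming an illustrative program whose work is 75% parallelizable:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup of a program whose 'parallel_fraction'
    of work can be spread across 'cores' while the rest remains serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With 75% of the work parallelizable, a second core yields only
# about a 1.6x speedup rather than the ideal 2x:
print(round(amdahl_speedup(0.75, 2), 2))  # 1.6
```

Even with a perfectly parallel program the law predicts at most a linear speedup, and communication overhead in real systems pushes observed gains below that bound.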
 
Due to the specific capabilities of modern CPUs, such as simultaneous multithreading and uncore, which involve sharing of actual CPU resources to improve utilization, monitoring performance levels and hardware usage has gradually become a more complex task.<ref>{{Cite web|title=CPU utilization of multi-threaded architectures explained|url=https://blogs.oracle.com/solaris/post/cpu-utilization-of-multi-threaded-architectures-explained|access-date=2025-08-15|work=blogs.oracle.com}}</ref> In response, some CPUs implement additional hardware logic that tracks the actual utilization of various parts of the CPU and provides various counters accessible to software; an example is Intel's Performance Counter Monitor technology.<ref>{{Cite web|title=Intel Performance Counter Monitor - A Better Way to Measure CPU Utilization|url=https://www.intel.com/content/www/us/en/developer/articles/tool/performance-counter-monitor.html|access-date=2025-08-15|work=www.intel.com}}</ref>
 
== Performance tuning ==
Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to [[User acceptance test|user acceptance]] aspects.
 
The amount of time an application takes to start up, or a file to download, is not made faster by showing a startup screen (see [[splash screen]]) or a file progress dialog box. However, it satisfies some human needs: it appears faster to the user and provides a visual cue to let them know the system is handling their request.
 
In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance.
"The Incredible Shrinking CPU".
2004.
[http://www.realworldtech.com/page.cfm?ArticleID=RWT062004172947&p=5] {{Webarchive|url=https://web.archive.org/web/20120531101531/http://www.realworldtech.com/page.cfm?ArticleID=RWT062004172947&p=5 |date=2012-05-31 }}</ref>
 
where
</ref>
For a given instruction set (and therefore fixed N) and semiconductor process, the maximum single-thread performance (1/t) requires a balance between brainiac techniques and speedracer techniques.<ref name="shrinking" />
 
== Internal and external factors affecting computer performance ==
Many factors can affect computer performance, including:<ref>{{Cite web|last=Hart|first=Benjamin|date=2020-11-17|title=Top 5 Things That Are Slowing Down Your PC|url=https://www.harttechsupport.com/post/whypcslow|access-date=2020-11-17|website=Hart Tech Support|language=en}}</ref>
 
* Background processes
* Foreground processes
* Malware
* Physical age
* Hardware
 
Many other factors are also potentially in play. All of these factors lower performance from its base value and, most importantly to the user, lower [[perceived performance]].
 
==See also==
* [[Speedup]]
* [[Cache replacement policies]]
* [[Relative efficiency]]
 
==References==
{{Reflist}}
 
{{Authority control}}
 
[[Category:Computer performance| ]]