{{About|the use of benchmarks in computing||Benchmark (disambiguation){{!}}Benchmark}}
[[File:OGRE screenshot 08.png|thumb|A graphical demo running as a benchmark of the [[OGRE]] engine]] In [[computing]], a '''benchmark''' is the act of running a [[computer program]], a set of programs, or other operations, in order to assess the relative [[Computer performance|performance]] of an object, normally by running a number of standard [[Software performance testing|tests]] and trials against it.<ref>{{Cite journal| doi = 10.1145/5666.5673| issn = 0001-0782| volume = 29| issue = 3| pages = 218–221| last1 = Fleming| first1 = Philip J.| last2 = Wallace| first2 = John J.| title = How not to lie with statistics: the correct way to summarize benchmark results| journal = Communications of the ACM| date = 1986-03-01| s2cid = 1047380| doi-access = free}}</ref>
Benchmarks provide a method of comparing the performance of various subsystems across different chip/system [[Computer architecture|architectures]]. Benchmarking as a part of [[continuous integration]] is called Continuous Benchmarking.<ref>{{cite conference|doi=10.1109/IC2E.2019.00039|chapter-url=https://www.researchgate.net/publication/333918034|chapter=Continuous Benchmarking: Using System Benchmarking in Build Pipelines|year=2019|access-date=2023-12-03|first1=Martin|last1=Grambow|first2=Fabian|last2=Lehmann|first3=David|last3=Bermbach|title=2019 IEEE International Conference on Cloud Engineering (IC2E) |pages=241–246 |isbn=978-1-7281-0218-4 }}</ref>
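The following Python sketch illustrates the idea of a continuous-benchmarking gate in a build pipeline; the baseline file name, the workload, and the 10% threshold are purely illustrative, and real pipelines normally rely on dedicated benchmarking frameworks that store results per commit.

<syntaxhighlight lang="python">
import json
import sys
import time

# Hypothetical continuous-benchmarking gate for a build pipeline: time a
# tracked code path, compare against a stored baseline, and fail the build
# if the result regresses by more than a tolerance. The file name, workload
# and 10% threshold are illustrative only.
BASELINE_FILE = "benchmark_baseline.json"
TOLERANCE = 1.10  # fail if more than 10% slower than the recorded baseline

def workload():
    # Stand-in for the code path whose performance is tracked across commits.
    return sum(i * i for i in range(1_000_000))

def measure(repeats=5):
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return min(times)

if __name__ == "__main__":
    current = measure()
    try:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)["seconds"]
    except FileNotFoundError:
        baseline = None
    if baseline is not None and current > baseline * TOLERANCE:
        print(f"Performance regression: {current:.3f}s vs baseline {baseline:.3f}s")
        sys.exit(1)
    with open(BASELINE_FILE, "w") as f:
        json.dump({"seconds": min(current, baseline) if baseline else current}, f)
    print(f"Benchmark OK: {current:.3f}s")
</syntaxhighlight>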
== Purpose ==
As [[computer architecture]] advanced, it became more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that allowed comparison of different architectures. For example, [[Pentium 4]] processors generally operated at a higher clock frequency than [[Athlon XP]] or [[PowerPC]] processors, which did not necessarily translate to more computational power; a processor with a slower clock frequency might perform as well as or even better than a processor operating at a higher frequency. See [[BogoMips]] and the [[megahertz myth]].
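As a hypothetical illustration, a program's execution time can be estimated from its instruction count, its average [[cycles per instruction]] (CPI), and the clock frequency:

<math>t = \frac{\text{instruction count} \times \text{CPI}}{\text{clock frequency}}</math>

With purely illustrative figures, a 2 GHz processor averaging 0.8 cycles per instruction executes one billion instructions in 0.4 s, whereas a 3 GHz processor averaging 1.5 cycles per instruction needs 0.5 s for the same instruction count, so the chip with the higher clock rate finishes later.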
Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this with specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. While application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, like a [[hard disk]] or networking device.
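A minimal sketch of a synthetic benchmark is shown below, assuming a sequential disk-write workload; the function name and parameters are illustrative, and a real tool would also control caching, block size, queue depth, and repetition far more carefully.

<syntaxhighlight lang="python">
import os
import tempfile
import time

# Hypothetical minimal synthetic disk benchmark: write a fixed amount of data
# sequentially and report throughput.
def sequential_write_mb_per_s(total_mb=256, block_kb=1024):
    block = os.urandom(block_kb * 1024)           # one block of random data
    n_blocks = (total_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                      # force the data to the device
        elapsed = time.perf_counter() - start
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"Sequential write: {sequential_write_mb_per_s():.1f} MB/s")
</syntaxhighlight>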
Benchmarks are particularly important in [[CPU design]], giving processor architects the ability to measure and make tradeoffs in [[microarchitecture|microarchitectural]] decisions. For example, if a benchmark extracts the key [[algorithms]] of an application, it will contain the performance-sensitive aspects of that application. Running this much smaller snippet on a cycle-accurate simulator can give clues on how to improve performance.
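The sketch below illustrates this kind of extracted-kernel benchmark; the dot-product loop is a hypothetical stand-in for whatever inner loop dominates a real application's runtime.

<syntaxhighlight lang="python">
import random
import timeit

# Hypothetical example of benchmarking a "kernel" extracted from a larger
# application: a naive dot product stands in for the performance-sensitive
# inner loop. Running only this snippet (on hardware or a cycle-accurate
# simulator) isolates the code that dominates the application's runtime.
def dot(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = [random.random() for _ in range(100_000)]
b = [random.random() for _ in range(100_000)]

# Take the best of several repetitions to reduce timing noise.
best = min(timeit.repeat(lambda: dot(a, b), repeat=5, number=10))
print(f"dot product: {best / 10 * 1e3:.2f} ms per call (best of 5 runs)")
</syntaxhighlight>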
Prior to 2000, computer and microprocessor architects used [[SPEC]] to do this, although SPEC's Unix-based benchmarks were quite lengthy and thus unwieldy to use intact.
Computer companies are known to configure their systems to give unrealistically high performance on benchmark tests that are not replicated in real usage. For instance, during the 1980s some compilers could detect a specific mathematical operation used in a well-known floating-point benchmark and replace the operation with a faster mathematically equivalent operation. However, such a transformation was rarely useful outside the benchmark until the mid-1990s, when [[RISC]] and [[VLIW]] architectures emphasized the importance of [[compiler]] technology as it related to performance. Benchmarks are now regularly used by [[compiler]] companies to improve not only their own benchmark scores, but real application performance.
CPUs that have many execution units — such as a [[superscalar]] CPU, a [[VLIW]] CPU, or a [[reconfigurable computing]] CPU — typically have slower clock rates than a sequential CPU with one or two execution units when built from transistors that are just as fast. Nevertheless, CPUs with many execution units often complete real-world and benchmark tasks in less time than the supposedly faster high-clock-rate CPU.
Given the large number of benchmarks available, a vendor can usually find at least one benchmark that shows its system will outperform another system; the other systems can be shown to excel with a different benchmark.
Software vendors also use benchmarks in their marketing, such as the "benchmark wars" between rival [[relational database]] makers in the 1980s and 1990s. Companies commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They also have been known to mis-represent the significance of benchmarks, again to show their products in the best possible light.<ref name="rdbmsinformix20070612">{{Cite interview |interviewer=Luanne Johnson |title=RDBMS Workshop: Informix |url=https://archive.computerhistory.org/resources/access/text/2013/05/102702566-05-01-acc.pdf |access-date=2025-05-30 |publisher=Computer History Museum |date=2007-06-12}}</ref><ref name="rdbmsingressybase20070613">{{Cite interview |interviewer=Doug Jerger |title=RDBMS Workshop: Ingres and Sybase |url=https://archive.computerhistory.org/resources/access/text/2013/05/102702565-05-01-acc.pdf |access-date=2025-05-30 |publisher=Computer History Museum |date=2007-06-13}}</ref>
Ideally, benchmarks should substitute for real applications only if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite.
== Functionality ==
* Vendors tend to tune their products specifically for industry-standard benchmarks. Norton SysInfo (SI) is particularly easy to tune for, since it is mainly biased toward the speed of multiple operations. Use extreme caution in interpreting such results.
* Some vendors have been accused of "cheating" at benchmarks, making changes that give much higher benchmark numbers but worsen performance on the actual likely workload.
* Many benchmarks focus entirely on the speed of [[computer performance|computational performance]], neglecting other important features of a computer system, such as:
** Qualities of service, aside from raw performance. Examples of unmeasured qualities of service include security, availability, reliability, execution integrity, serviceability, scalability (especially the ability to quickly and nondisruptively add or reallocate capacity), etc. There are often real trade-offs between and among these qualities of service, and all are important in business computing. [[Transaction Processing Performance Council]] Benchmark specifications partially address these concerns by specifying [[ACID]] property tests, database scalability rules, and service level requirements.
=== Industry standard (audited and verifiable) ===
* [[EEMBC|Embedded Microprocessor Benchmark Consortium (EEMBC)]]
* [[Standard Performance Evaluation Corporation]] (SPEC), in particular their [[SPECint]] and [[SPECfp]]
* [[NAS benchmarks|NAS parallel benchmarks]]
* [[NBench]] – synthetic benchmark suite measuring performance of integer arithmetic, memory operations, and floating-point arithmetic
* [[PerfKitBenchmarker]] – a set of benchmarks to measure and compare cloud offerings
* [[Phoronix Test Suite]] – open-source cross-platform benchmarking suite for Linux, OpenSolaris, FreeBSD, macOS and Windows. It incorporates a number of the other benchmarks listed on this page to simplify execution.
=== Microsoft Windows benchmarks ===
* [[CrystalDiskMark]]
* [[Futuremark|Underwriters Laboratories (UL)]]: [[3DMark]], [[PCMark]]
* [[Super PI]]
* [[SuperPrime]]
* [[Whetstone (benchmark)|Whetstone]]
* [[Windows System Assessment Tool]], included with Windows Vista and later releases, providing an index for consumers to rate their systems easily
* [[Worldbench]] (discontinued)
=== Unusual benchmark ===
* [[Will Smith Eating Spaghetti test]] – an informal test to determine the capabilities of [[text-to-video]] models
=== Others ===
* [[AnTuTu]] – commonly used on phones and ARM-based devices.
* [[iCOMP (index)|iCOMP]] – the Intel comparative microprocessor performance, published by Intel
* [[Khornerstone]]
* [[Novabench]] – a computer benchmarking utility for Microsoft Windows, macOS, and Linux
* [[Performance Rating]] – modeling scheme used by AMD and Cyrix to reflect relative performance, usually in comparison with competing products
* [[Rugg/Feldman benchmarks]] – one of the earliest microcomputer benchmarks, from 1977
* [[SunSpider JavaScript Benchmark|SunSpider]] – a browser speed test
* [[UserBenchmark]] – PC benchmark utility
* [[VMmark]] – a virtualization benchmark suite.
{{DEFAULTSORT:Benchmark (Computing)}}
[[Category:Benchmarks (computing)| ]]
[[Category:Hardware testing]]