Benchmark (computing)

{{Short description|Standardized performance evaluation}}
{{About|the use of benchmarks in computing||Benchmark (disambiguation){{!}}Benchmark}}
{{Multiple issues|{{more citations needed|date=July 2015}}
{{Expert needed|computer science|talk=Outdated sources|reason=Outdated or deprecated sources|date=October 2022}}}}
 
[[File:OGRE screenshot 08.png|thumb|A graphical demo running as a benchmark of the [[OGRE]] engine]] In [[computing]], a '''benchmark''' is the act of running a [[computer program]], a set of programs, or other operations, in order to assess the relative [[Computer performance|performance]] of an object, normally by running a number of standard [[Software performance testing|tests]] and trials against it.<ref>{{Cite journal| doi = 10.1145/5666.5673| issn = 0001-0782| volume = 29| issue = 3| pages = 218–221| last1 = Fleming| first1 = Philip J.| last2 = Wallace| first2 = John J.| title = How not to lie with statistics: the correct way to summarize benchmark results| journal = Communications of the ACM| date = 1986-03-01| s2cid = 1047380| doi-access = free}}</ref>
The term ''benchmark'' is also commonly applied to elaborately designed benchmarking programs themselves.
 
Benchmarking is usually associated with assessing performance characteristics of [[computer hardware]], for example, the [[floating point operation]] performance of a [[Central processing unit|CPU]], but there are circumstances when the technique is also applicable to [[software]]. Software benchmarks are, for example, run against [[compiler]]s or [[database management system]]s (DBMS).
 
Benchmarks provide a method of comparing the performance of various subsystems across different chip/system [[Computer architecture|architectures]]. Benchmarking as a part of [[continuous integration]] is called Continuous Benchmarking.<ref>{{cite conference|doi=10.1109/IC2E.2019.00039|chapter-url=https://www.researchgate.net/publication/333918034|chapter=Continuous Benchmarking: Using System Benchmarking in Build Pipelines|year=2019|access-date=2023-12-03|first1=Martin|last1=Grambow|first2=Fabian|last2=Lehmann|first3=David|last3=Bermbach|title=2019 IEEE International Conference on Cloud Engineering (IC2E) |pages=241–246 |isbn=978-1-7281-0218-4 }}</ref>
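
A minimal sketch of the idea behind continuous benchmarking, assuming a POSIX C toolchain in the build pipeline: a benchmark step times a workload and exits with a non-zero status when the measurement regresses beyond a tolerance relative to a stored baseline, which the pipeline then reports as a failed build. The workload, baseline value and threshold below are hypothetical.

<syntaxhighlight lang="c">
/* Sketch of a continuous-benchmarking gate, assuming a POSIX system: time a
 * workload and fail the build (non-zero exit) if it is noticeably slower than
 * a recorded baseline.  The workload, baseline and tolerance are hypothetical. */
#include <stdio.h>
#include <time.h>

static long workload(void)          /* stand-in for the code being benchmarked */
{
    long sum = 0;
    for (long i = 0; i < 50 * 1000 * 1000; i++)
        sum += i % 7;
    return sum;
}

int main(void)
{
    const double baseline_s = 0.20; /* measurement recorded for a reference build */
    const double tolerance  = 1.10; /* fail if more than 10% slower */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile long result = workload();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)result;

    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("benchmark: %.3f s (baseline %.3f s)\n", elapsed, baseline_s);

    return elapsed > baseline_s * tolerance ? 1 : 0;  /* CI treats non-zero as failure */
}
</syntaxhighlight>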
 
== Purpose ==
Prior to 2000, computer and microprocessor architects used [[SPEC]] to do this, although SPEC's Unix-based benchmarks were quite lengthy and thus unwieldy to use intact.
 
Computer companies are known to configure their systems to give unrealistically high performance on benchmark tests that are not replicated in real usage. For instance, during the 1980s some compilers could detect a specific mathematical operation used in a well-known floating-point benchmark and replace the operation with a faster mathematically equivalent operation. However, such a transformation was rarely useful outside the benchmark until the mid-1990s, when [[RISC]] and [[VLIW]] architectures emphasized the importance of [[compiler]] technology as it related to performance. Benchmarks are now regularly used by [[compiler]] companies to improve not only their own benchmark scores, but real application performance.
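
The following sketch illustrates, in C, the kind of substitution involved; the function names and the particular transformation shown (replacing a loop of divisions with multiplication by a precomputed reciprocal) are illustrative assumptions, not code from any specific historical benchmark or compiler.

<syntaxhighlight lang="c">
/* Illustration of a "mathematically equivalent" substitution; the functions
 * are hypothetical and do not come from any particular benchmark or compiler. */
#include <stddef.h>

/* As a benchmark might write it: one floating-point division per element. */
void scale_naive(double *x, size_t n, double d)
{
    for (size_t i = 0; i < n; i++)
        x[i] = x[i] / d;
}

/* What a benchmark-aware compiler might effectively emit: the division is
 * hoisted out of the loop and replaced by multiplication with the reciprocal.
 * This is much faster where division is slow, and gives the same result for
 * the benchmark's data, but it can change rounding for general inputs. */
void scale_tuned(double *x, size_t n, double d)
{
    double r = 1.0 / d;
    for (size_t i = 0; i < n; i++)
        x[i] = x[i] * r;
}
</syntaxhighlight>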
 
CPUs that have many execution units, such as a [[superscalar]] CPU, a [[VLIW]] CPU, or a [[reconfigurable computing]] CPU, typically have slower clock rates than a sequential CPU with one or two execution units when built from transistors that are just as fast. Nevertheless, CPUs with many execution units often complete real-world and benchmark tasks in less time than the supposedly faster high-clock-rate CPU: for example, a processor clocked at 2 GHz that completes three instructions per cycle sustains a higher instruction throughput than one clocked at 3 GHz that completes only one instruction per cycle.
 
Given the large number of benchmarks available, a vendor can usually find at least one benchmark that shows its system will outperform another system; the other systems can be shown to excel with a different benchmark.
 
Software vendors also use benchmarks in their marketing, such as the "benchmark wars" between rival [[relational database]] makers in the 1980s and 1990s. Companies commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They also have been known to misrepresent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called ''bench-marketing''.<ref name="rdbmsinformix20070612">{{Cite interview |interviewer=Luanne Johnson |title=RDBMS Workshop: Informix |url=https://archive.computerhistory.org/resources/access/text/2013/05/102702566-05-01-acc.pdf |access-date=2025-05-30 |publisher=Computer History Museum |date=2007-06-12}}</ref><ref name="rdbmsingressybase20070613">{{Cite interview |interviewer=Doug Jerger |title=RDBMS Workshop: Ingres and Sybase |url=https://archive.computerhistory.org/resources/access/text/2013/05/102702565-05-01-acc.pdf |access-date=2025-05-30 |publisher=Computer History Museum |date=2007-06-13}}</ref>
 
Ideally, benchmarks should only substitute for real applications if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite.
 
== Functionality ==
Features of benchmarking software may include recording and [[data export|exporting]] the course of performance to a [[spreadsheet]] file, visualization such as [[line graph]]s or [[color-coded]] tiles, and pausing the process so it can be resumed without having to start over. Software can also have features specific to its purpose; for example, disk benchmarking software may be able to measure the disk speed within a specified range of the disk rather than the full disk, measure [[random access]] reading speed and [[latency (engineering)|latency]], offer a "quick scan" feature that measures the speed through samples of specified intervals and sizes, and allow specifying a [[Block (data storage)|data block]] size, meaning the number of requested bytes per read request.<ref>Software: HDDScan, GNOME Disks</ref>
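
As a rough illustration of what such a tool measures, the following C sketch times sequential reads with a configurable block size. It assumes a POSIX system, and the 256 MiB total, default 1 MiB block size and output format are arbitrary choices rather than the behaviour of any named tool.

<syntaxhighlight lang="c">
/* Minimal sketch of a sequential disk-read benchmark with a configurable
 * block size, assuming a POSIX system.  Real disk benchmarking tools also
 * deal with caching effects (e.g. O_DIRECT, cache flushes), random-access
 * patterns, latency percentiles and safer defaults. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file-or-device> [block-bytes]\n", argv[0]);
        return 1;
    }
    size_t block = argc > 2 ? (size_t)atol(argv[2]) : 1 << 20; /* bytes per read request */
    size_t total = 256 * (1 << 20);                            /* stop after 256 MiB */

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(block);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    size_t done = 0;
    while (done < total) {
        ssize_t n = read(fd, buf, block);
        if (n <= 0) break;               /* end of file/device, or an error */
        done += (size_t)n;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%zu bytes in %.3f s = %.1f MB/s\n", done, secs, done / secs / 1e6);

    free(buf);
    close(fd);
    return 0;
}
</syntaxhighlight>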
 
== Challenges ==
 
* Vendors tend to tune their products specifically for industry-standard benchmarks. Norton SysInfo (SI) is particularly easy to tune for, since it is mainly biased toward the speed of multiple operations. Such results should be interpreted with extreme caution.
* Some vendors have been accused of "cheating" at benchmarks by designing their systems to give much higher benchmark numbers while not being as effective at the actual likely workload.<ref>{{cite news|url=http://www.pcworld.com/article/111012/nvidias_benchmark_tactics_reassessed.html|title=NVidia's Benchmark Tactics Reassessed|first=Tom|last=Krazit|year=2003|work=IDG News|access-date=2009-08-08|archive-url=https://web.archive.org/web/20110606032058/http://www.pcworld.com/article/111012/nvidias_benchmark_tactics_reassessed.html|archive-date=2011-06-06|url-status=dead}}</ref>
* Many benchmarks focus entirely on the speed of [[computer performance|computational performance]], neglecting other important features of a computer system, such as:
** Qualities of service, aside from raw performance. Examples of unmeasured qualities of service include security, availability, reliability, execution integrity, serviceability, scalability (especially the ability to quickly and nondisruptively add or reallocate capacity), etc. There are often real trade-offs between and among these qualities of service, and all are important in business computing. [[Transaction Processing Performance Council]] Benchmark specifications partially address these concerns by specifying [[ACID]] property tests, database scalability rules, and service level requirements.
** Facilities burden (space, power, and cooling). When more power is used, a portable system will have a shorter battery life and require recharging more often. A server that consumes more power and/or space may not be able to fit within existing data center resource constraints, including cooling limitations. There are real trade-offs as most semiconductors require more power to switch faster. See also [[performance per watt]].
** In some embedded systems, where memory is a significant cost, better [[code density]] can significantly reduce costs.
* Vendor benchmarks tend to ignore requirements for development, test, and [[IT disaster recovery|disaster recovery]] computing capacity. Vendors only like to report what might be narrowly required for production capacity in order to make their initial acquisition price seem as low as possible.
* Benchmarks are having trouble adapting to widely distributed servers, particularly those with extra sensitivity to network topologies. The emergence of [[grid computing]], in particular, complicates benchmarking since some workloads are "grid friendly", while others are not.
* Users can have very different perceptions of performance than benchmarks may suggest. In particular, users appreciate predictability, meaning servers that always meet or exceed [[service level agreement]]s. Benchmarks tend to emphasize mean scores (IT perspective), rather than maximum worst-case response times ([[real-time computing]] perspective) or low standard deviations (user perspective); a short sketch after this list illustrates how these summaries can diverge.
* Benchmarking institutions often disregard or do not follow basic scientific method. This includes, but is not limited to: small sample size, lack of variable control, and the limited repeatability of results.<ref>{{cite web|url=http://donutey.com/hardwaretesting.php|title=Hardware Testing and Benchmarking Methodology|year=2006|access-date=2008-02-24|first=Kevin|last=Castor|url-status=dead|archive-url=https://web.archive.org/web/20080205031133/http://www.donutey.com/hardwaretesting.php|archive-date=2008-02-05}}</ref>
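
The sketch below, using a made-up response-time sample, shows how the mean, the worst case and the standard deviation of the same measurements can give very different impressions; the values and thresholds are purely illustrative.

<syntaxhighlight lang="c">
/* Sketch of how different summaries of the same response-time sample can
 * tell different stories.  The sample values are made up for illustration. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double ms[] = {10, 11, 10, 12, 11, 10, 11, 95};  /* hypothetical response times, one outlier */
    int n = sizeof ms / sizeof ms[0];

    double sum = 0, worst = 0;
    for (int i = 0; i < n; i++) {
        sum += ms[i];
        if (ms[i] > worst) worst = ms[i];
    }
    double mean = sum / n;

    double var = 0;
    for (int i = 0; i < n; i++)
        var += (ms[i] - mean) * (ms[i] - mean);
    double stddev = sqrt(var / n);

    /* The mean looks acceptable, while the worst case and the spread show
     * the unpredictability that a user would actually notice. */
    printf("mean   = %.1f ms\n", mean);    /* ~21.3 ms */
    printf("worst  = %.1f ms\n", worst);   /*  95.0 ms */
    printf("stddev = %.1f ms\n", stddev);  /* ~27.9 ms */
    return 0;
}
</syntaxhighlight>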
 
== Benchmarking principles ==
There are seven vital characteristics for benchmarks.<ref>{{cite conference|first1=Wei |last1=Dai |first2=Daniel |last2=Berleant |title=Benchmarking Contemporary Deep Learning Hardware and Frameworks: a Survey of Qualitative Metrics |date=December 12–14, 2019 |___location=Los Angeles, CA, USA |book-title=2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)|publisher=IEEE |doi=10.1109/CogMI48466.2019.00029 |pages=148–155|url=https://dberleant.github.io/papers/BenchmarkingContemporaryDeepLearningHardwareAndFrameworks.pdf |arxiv=1907.03626 }}</ref> These key properties are:
# Relevance: Benchmarks should measure relatively vital features.
#*CAD tool software
#*user application software (e.g. MIS)
#*[[Video game]]s
#*[[Compiler]]s building a large project, for example [[Chromium (browser)|Chromium browser]] or [[Linux kernel]]
#Component Benchmark / Microbenchmark
#*a core routine consisting of a relatively small and specific piece of code.
 
=== Industry standard (audited and verifiable) ===
* [[BAPCo consortium|Business Applications Performance Corporation (BAPCo)]]
* [[EEMBC|Embedded Microprocessor Benchmark Consortium (EEMBC)]]
* [[LDBC|Linked Data Benchmark Council (LDBC)]]
** [[Semantic Publishing Benchmark (SPB)]]: an LDBC benchmark inspired by the Media/Publishing industry for testing the performance of RDF engines<ref>{{cite web |url=http://ldbcouncil.org/benchmarks/spb |title=LDBC Semantic Publishing Benchmark |author=LDBC |work=LDBC SPB |publisher=[[LDBC]] |access-date=2018-07-02 }}</ref>
** [[Social Network Benchmark (SNB)]]: an LDBC benchmark for testing the performance of RDF engines consisting of three distinct benchmarks (Interactive Workload, Business Intelligence Workload, Graph Analytics Workload) on a common dataset<ref>{{cite web |url=http://ldbcouncil.org/benchmarks/snb |title=LDBC Social Network Benchmark |author=LDBC |work=LDBC SNB |publisher=[[LDBC]] |access-date=2018-07-02 }}</ref>
* [[Standard Performance Evaluation Corporation]] (SPEC), in particular their [[SPECint]] and [[SPECfp]]
* [[Transaction Processing Performance Council]] (TPC): DBMS benchmarks<ref>{{cite web |url=http://www.tpc.org/information/about/history.asp |title=History and Overview of the TPC |author=Transaction Processing Performance Council |work=TPC |publisher=[[Transaction Processing Performance Council]] |date=February 1998 |access-date=2018-07-02 }}</ref>
** [[TPC-A]]: measures performance in update-intensive database environments typical in on-line transaction processing (OLTP) applications<ref>{{cite web |url=http://www.tpc.org/tpca/default.asp |title=TPC-A |author=Transaction Processing Performance Council |publisher=[[Transaction Processing Performance Council]] |access-date=2018-07-02 }}</ref>
** [[TPC-C]]: an on-line transaction processing (OLTP) benchmark<ref>{{cite web |url=http://www.tpc.org/tpcc/default.asp |title=TPC-C |author=Transaction Processing Performance Council |publisher=[[Transaction Processing Performance Council]] |access-date=2018-07-02 }}</ref>
** [[TPC-H]]: a decision support benchmark<ref>{{cite web |url=http://www.tpc.org/tpch/default.asp |title=TPC-H |author=Transaction Processing Performance Council |publisher=[[Transaction Processing Performance Council]] |access-date=2018-07-02 }}</ref>
 
=== Open source benchmarks ===
* [[AIM Multiuser Benchmark]] – composed of a list of tests that could be mixed to create a 'load mix' that would simulate a specific computer function on any UNIX-type OS.
* [[Bonnie++]] – filesystem and hard drive benchmark
* [[BRL-CAD]] – cross-platform architecture-agnostic benchmark suite based on multithreaded ray tracing performance; baselined against a VAX-11/780; and used since 1984 for evaluating relative CPU performance, compiler differences, optimization levels, coherency, architecture differences, and operating system differences.
* [[Collective Knowledge (software)|Collective Knowledge]] – customizable, cross-platform framework to crowdsource benchmarking and optimization of user workloads (such as [[deep learning]]) across hardware provided by volunteers
* [[Coremark]] – Embedded computing benchmark
* [[Data Storage Benchmark]] – an RDF continuation of the LDBC Social Network Benchmark, from the Hobbit Project<ref>{{cite web |url=https://github.com/hobbit-project/DataStorageBenchmark |title=Data Storage Benchmark |date=2017-07-28 |access-date=2018-07-02 }}</ref>
* [[DEISA Benchmark Suite]] – scientific HPC applications benchmark
* [[Dhrystone]] – integer arithmetic performance, often reported in DMIPS (Dhrystone millions of instructions per second)
* [[DiskSpd]] – [[Command-line]] tool for storage benchmarking that generates a variety of requests against [[computer file]]s, [[Disk partitioning|partitions]] or [[Computer data storage|storage devices]]
* Embench™ – portable, open-source benchmarks for deeply embedded systems; they assume no OS, minimal C library support and, in particular, no output stream. Embench is a project of the [[Free and Open Source Silicon Foundation]].
* [[Faceted Browsing Benchmark]] – benchmarks systems that support browsing through linked data by iterative transitions performed by an intelligent user, from the Hobbit Project<ref>{{cite web |url=https://github.com/hobbit-project/faceted-benchmark |title=Faceted Browsing Benchmark |date=2017-07-27 |access-date=2018-07-02 }}</ref>
* [[Fhourstones]] – an integer benchmark
* [[Hierarchical INTegration|HINT]] – designed to measure overall CPU and memory performance
* [[Iometer]] – I/O subsystem measurement and characterization tool for single and clustered systems.
* [[IOzone]] – Filesystem benchmark
* [[Kubestone]] – Benchmarking Operator for [[Kubernetes]] and [[OpenShift]]
* [[LINPACK benchmarks]] – traditionally used to measure [[FLOPS]]
* [[Livermore loops]]
* [[NAS benchmarks|NAS parallel benchmarks]]
* [[NBench]] – synthetic benchmark suite measuring performance of integer arithmetic, memory operations, and floating-point arithmetic
* [[PAL (software)|PAL]] – a benchmark for realtime physics engines
* [[PerfKitBenchmarker]] – A set of benchmarks to measure and compare cloud offerings.
* [[Phoronix Test Suite]] – open-source cross-platform benchmarking suite for Linux, OpenSolaris, FreeBSD, OSX and Windows. It includes a number of other benchmarks included on this page to simplify execution.
 
=== Microsoft Windows benchmarks ===
* [[BAPCo consortium|BAPCo]]: MobileMark, SYSmark, WebMark
* [[CrystalDiskMark]]
* [[Futuremark|Underwriters Laboratories (UL)]]: [[3DMark]], [[PCMark]]
* [[Heaven Benchmark]]
* [[PiFast]]
* [[Superposition Benchmark]]
* [[Super PI]]
* [[SuperPrime]]
* [[Whetstone (benchmark)|Whetstone]]
* [[Windows System Assessment Tool]], included with Windows Vista and later releases, providing an index for consumers to rate their systems easily
* [[Worldbench]] (discontinued)

=== Unusual benchmarks ===
* [[Will Smith Eating Spaghetti test]] – an informal test to determine the capabilities of [[text-to-video]] models.

=== Others ===
* [[AnTuTu]] – commonly used on phones and ARM-based devices.
* [[Byte Sieve]] – originally tested language performance, but widely used as a machine benchmark as well.
* [[Berlin SPARQL Benchmark (BSBM)]] – defines a suite of benchmarks for comparing the performance of storage systems that expose SPARQL endpoints via the SPARQL protocol across architectures<ref>{{cite web |url=http://wifo5-03.informatik.uni-mannheim.de/bizer/berlinsparqlbenchmark/ |title=Berlin SPARQL Benchmark (BSBM) |access-date=2018-07-02 }}</ref>
* [[Creative Computing Benchmark]] – Compares the [[BASIC]] programming language on various platforms. Introduced in 1983.
* [[Geekbench]] – A cross-platform benchmark for Windows, Linux, macOS, iOS and Android.
* [[iCOMP (index)|iCOMP]] – the Intel comparative microprocessor performance, published by Intel
* [[Khornerstone]]
* [[Novabench]] – a computer benchmarking utility for Microsoft Windows, macOS, and Linux
* [[Lehigh University Benchmark (LUBM)]] – facilitates the evaluation of Semantic Web repositories via extensional queries over a large data set that commits to a single realistic ontology<ref>{{cite web |url=http://swat.cse.lehigh.edu/projects/lubm/ |title=SWAT Projects - the Lehigh University Benchmark (LUBM) |work=Lehigh University Benchmark (LUBM) |access-date=2018-07-02 }}</ref>
* [[Performance Rating]] – a modeling scheme used by AMD and Cyrix to reflect relative performance, usually in comparison with competing products.
* [[Rugg/Feldman benchmarks]] – one of the earliest microcomputer benchmarks, from 1977.
* [[SunSpider JavaScript Benchmark|SunSpider]] – a browser speed test
* [[UserBenchmark]] – PC benchmark utility
* [[VMmark]] – a virtualization benchmark suite.
* [[RenderStats]] – a 3D rendering benchmark database.<ref>{{cite web |url=https://renderstats.com/cpu/ |title=3D rendering benchmark database |access-date=2019-09-29 }}</ref>
 
== See also ==
 
== References ==
{{Reflist}}
 
== Further reading ==
* {{cite book |editor1-first=Jim |editor1-last=Gray |title=The Benchmark Handbook for Database and Transaction Systems |edition=2nd |publisher=Morgan Kaufmann Publishers, Inc |series=Morgan Kaufmann Series in Data Management Systems |year=1993 |isbn=1-55860-292-5 |url-access=registration |url=https://archive.org/details/benchmarkhandboo00jimg }}
* {{cite book|first1=Bert|last1= Scalzo|first2=Kevin|last2= Kline|first3= Claudia|last3= Fernandez|first4= Donald K. |last4=Burleson|first5=Mike|last5=Ault|author5-link=Mike Ault |year=2007|title=Database Benchmarking Practical Methods for Oracle & SQL Server|publisher= Rampant TechPress|isbn =978-0-9776715-3-3}}
*{{cite book|editor1-first=Raghunath|editor1-last=Nambiar|editor2-first= Meikel|editor2-last=Poess|title=Performance Evaluation and Benchmarking|publisher= Springer|year=2009| isbn =978-3-642-10423-7}}
 
== External links ==
* {{Cite journal |last1=Lewis |first1=Byron C. |last2=Crews |first2=Albert E. |date=1985 |title=The Evolution of Benchmarking as a Computer Performance Evaluation Technique |url=https://www.jstor.org/stable/249270 |journal=MIS Quarterly |volume=9 |issue=1 |pages=7–16 |doi=10.2307/249270 |jstor=249270 |issn=0276-7783 |url-access=registration}} Covers the period 1962–1976.
 
{{commons category|Benchmarks (computing)}}
{{DEFAULTSORT:Benchmark (Computing)}}
[[Category:Benchmarks (computing)| ]]
[[Category:Hardware testing]]