Benchmark (computing)
 
* Vendors tend to tune their products specifically for industry-standard benchmarks. Norton SysInfo (SI) is particularly easy to tune for, since it is mainly biased toward the speed of multiple operations. Use extreme caution in interpreting such results (a brief illustrative sketch follows this list).
* Some vendors have been accused of "cheating" at benchmarks: designing their systems such that they give much higher benchmark numbers, but are not as effective at the actual likely workload.<ref>{{cite news|url=http://www.pcworld.com/article/111012/nvidias_benchmark_tactics_reassessed.html|title=NVidia's Benchmark Tactics Reassessed|first=Tom|last=Krazit|year=2003|work=IDG News|access-date=2009-08-08|archive-url=https://web.archive.org/web/20110606032058/http://www.pcworld.com/article/111012/nvidias_benchmark_tactics_reassessed.html|archive-date=2011-06-06|url-status=dead}}</ref>
* Many benchmarks focus entirely on the speed of [[computer performance|computational performance]], neglecting other important features of a computer system, such as:
** Qualities of service, aside from raw performance. Examples of unmeasured qualities of service include security, availability, reliability, execution integrity, serviceability, scalability (especially the ability to quickly and nondisruptively add or reallocate capacity), etc. There are often real trade-offs among these qualities of service, and all are important in business computing. [[Transaction Processing Performance Council]] Benchmark specifications partially address these concerns by specifying [[ACID]] property tests, database scalability rules, and service level requirements.
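
The following is a minimal, hypothetical sketch of the first point above: a score based on a single narrow operation can rank two systems in the opposite order from a realistic mixed workload. The system names, per-operation costs, and workload weights are invented for illustration and are not measurements from any cited benchmark.

<syntaxhighlight lang="python">
# Hypothetical per-operation costs in microseconds for two fictional systems.
SYSTEMS = {
    "System A": {"arithmetic": 0.8, "memory": 2.0, "io": 50.0},
    "System B": {"arithmetic": 1.2, "memory": 1.5, "io": 20.0},
}

def micro_benchmark_score(costs):
    # A narrow benchmark that rewards only arithmetic throughput; higher is "faster".
    return 1.0 / costs["arithmetic"]

# Assumed operation mix of a more realistic workload (weights sum to 1.0).
WORKLOAD_MIX = {"arithmetic": 0.30, "memory": 0.50, "io": 0.20}

def workload_time(costs):
    # Average cost per operation under the mixed workload; lower is faster.
    return sum(WORKLOAD_MIX[op] * cost for op, cost in costs.items())

for name, costs in SYSTEMS.items():
    print(f"{name}: narrow benchmark score = {micro_benchmark_score(costs):.2f}, "
          f"mixed-workload cost = {workload_time(costs):.2f} µs per operation")
</syntaxhighlight>

In this toy model, System A wins the narrow benchmark (1.25 versus about 0.83) but takes roughly twice as long per operation on the mixed workload (11.24 µs versus 5.11 µs), which is exactly the kind of misranking that tuning for a narrow benchmark can produce.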