Benchmark (computing)
** Facilities burden (space, power, and cooling). Higher power draw shortens a portable system's battery life and forces more frequent recharging. A server that consumes more power or space may not fit within existing data center constraints, including cooling limitations. These trade-offs are real: most semiconductors require more power to switch faster. See also [[performance per watt]]; a worked example appears after this list.
** In some embedded systems, where memory is a significant part of total cost, better [[code density]] can substantially reduce costs.
* Vendor benchmarks tend to ignore requirements for development, test, and [[IT disaster recovery|disaster recovery]] computing capacity. Vendors prefer to report only the capacity narrowly required for production, so that the initial acquisition price appears as low as possible.
* Benchmarks struggle to adapt to widely distributed servers, particularly those with extra sensitivity to network topologies. The emergence of [[grid computing]] in particular complicates benchmarking, since some workloads are "grid friendly" while others are not.
* Users can perceive performance very differently from what benchmarks suggest. In particular, users value predictability: servers that always meet or exceed [[service level agreement]]s. Benchmarks tend to emphasize mean scores (the IT perspective) rather than maximum worst-case response times (the [[real-time computing]] perspective) or low standard deviations (the user perspective). A short sketch computing these statistics appears after this list.
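
To illustrate the power trade-off noted above, here is a minimal sketch of a performance-per-watt calculation. The benchmark score and the average power draw are invented figures, not measurements from any real system; the point is only that the metric divides throughput by power consumed during the run.

<syntaxhighlight lang="python">
# Hypothetical figures: a benchmark score and the average wall power
# measured during the run. Both values are assumptions for illustration.
score_ops_per_sec = 2_500_000   # operations per second (assumed)
avg_power_watts = 180.0         # average power draw during the run (assumed)

# Performance per watt: throughput divided by power consumed.
perf_per_watt = score_ops_per_sec / avg_power_watts
print(f"Performance per watt: {perf_per_watt:,.0f} ops/s per W")
</syntaxhighlight>

Under these assumed numbers, a system that scores 10% higher but draws 25% more power would rank worse on this metric, which is why facilities-constrained buyers may weight it above raw throughput.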
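
To make the mean-versus-worst-case distinction concrete, here is a minimal sketch, assuming a list of measured response times in milliseconds (the sample values are invented). It shows how the same workload can look good by its mean yet fail a worst-case or consistency requirement.

<syntaxhighlight lang="python">
import statistics

# Invented response-time samples (milliseconds) from a hypothetical run.
# One outlier (240 ms) represents an occasional slow request.
response_times_ms = [12, 14, 11, 13, 12, 15, 11, 240, 13, 12]

mean_ms = statistics.mean(response_times_ms)    # IT perspective
worst_ms = max(response_times_ms)               # real-time perspective
stdev_ms = statistics.stdev(response_times_ms)  # user (predictability) perspective

print(f"mean = {mean_ms:.1f} ms, worst case = {worst_ms} ms, "
      f"stdev = {stdev_ms:.1f} ms")
# A low mean can coexist with a large worst case and a high standard
# deviation, so the statistic a benchmark reports changes which system
# appears "faster".
</syntaxhighlight>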