{{short description|Free software project}}
'''The Computer Language Benchmarks Game''' (formerly called '''The Great Computer Language Shootout''') is a [[free software]] project for comparing how a given subset of simple [[algorithms]] can be implemented in various popular [[programming languages]].
 
The project consists of:
 
* A set of very simple algorithmic problems (thirteen in total)<ref>{{Cite book |last1=Couto |first1=Marco |last2=Pereira |first2=Rui |last3=Ribeiro |first3=Francisco |last4=Rua |first4=Rui |last5=Saraiva |first5=João |chapter=Towards a Green Ranking for Programming Languages |date=2017-09-21 |title=Proceedings of the 21st Brazilian Symposium on Programming Languages |chapter-url=https://dl.acm.org/doi/abs/10.1145/3125374.3125382 |series=SBLP '17 |___location=New York, NY, USA |publisher=Association for Computing Machinery |pages=1–8 |doi=10.1145/3125374.3125382 |isbn=978-1-4503-5389-2|hdl=1822/65360 |hdl-access=free }}</ref>
* Various implementations of the above problems in various programming languages
* A set of unit tests to verify that the submitted implementations solve the problem statement
* A website to facilitate the interactive comparison of the results
 
== Supported languages ==
Due to resource constraints, only a small subset of common programming languages is supported, at the discretion of the game's operator.<ref>{{cite web|url=https://benchmarksgame.alioth.debian.org/|title=The Computer Language Benchmarks Game|website=benchmarksgame.alioth.debian.org}}</ref>

{{Collapsible list
| title = List of supported languages
| [[Ada (programming language)|Ada]]
| [[C (programming language)|C]]
| [[Chapel (programming language)|Chapel]]
| [[C Sharp (programming language)|C#]]
| [[C++]]
| [[Dart (programming language)|Dart]]
| [[Erlang (programming language)|Erlang]]
| [[F Sharp (programming language)|F#]]
| [[Fortran]]
| [[Go (programming language)|Go]]
| [[Haskell]]
| [[Java (programming language)|Java]]
| [[JavaScript]]
| [[Julia (programming language)|Julia]]
| [[Lisp (programming language)|Lisp]]
| [[Lua (programming language)|Lua]]
| [[OCaml]]
| [[Pascal (programming language)|Pascal]]
| [[Perl]]
| [[PHP]]
| [[Python (programming language)|Python]]
| [[Racket (programming language)|Racket]]
| [[Ruby (programming language)|Ruby]]
| [[Rust (programming language)|Rust]]
| [[Smalltalk]]
| [[Swift (programming language)|Swift]]
}}
 
== Metrics ==
The following aspects of each given implementation are measured:<ref>{{cite web|url=https://benchmarksgame-team.pages.debian.net/benchmarksgame/how-programs-are-measured.html|title=How programs are measured – The Computer Language Benchmarks Game|website=benchmarksgame-team.pages.debian.net/benchmarksgame/|accessdate=29 May 2018}}</ref>
* overall user [[Run time (program lifecycle phase)|runtime]]
* peak [[memory allocation]]
* gzipped size of the solution's source code
* sum of total CPU time over all [[Thread (computing)|threads]]
* individual CPU [[Load (computing)|utilization]]
 
It is common to see multiple solutions in the same programming language for the same problem. This highlights that, within the constraints of a given language, different solutions can favor higher abstraction, better memory efficiency, faster execution, or easier parallelization.
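
For illustration, the following is a minimal sketch of how the metrics above could be collected for a single run. It assumes a POSIX system; the solution file <code>fannkuchredux.py</code> and its command line are hypothetical, and this is not the project's actual measurement harness:

<syntaxhighlight lang="python">
import gzip
import resource
import subprocess
import time

SOURCE = "fannkuchredux.py"            # hypothetical solution source file
COMMAND = ["python3", SOURCE, "10"]    # hypothetical benchmark invocation

# Gzipped size of the solution's source code.
with open(SOURCE, "rb") as f:
    gz_size = len(gzip.compress(f.read()))

# Wall-clock time for one run of the program.
start = time.perf_counter()
subprocess.run(COMMAND, check=True)
elapsed = time.perf_counter() - start

# CPU time (summed over all threads) and peak memory of the child process.
usage = resource.getrusage(resource.RUSAGE_CHILDREN)
cpu_seconds = usage.ru_utime + usage.ru_stime  # user + system CPU time
peak_rss = usage.ru_maxrss                     # peak resident set size (KiB on Linux)

print(f"gzipped source size: {gz_size} bytes")
print(f"elapsed: {elapsed:.2f} s, total CPU: {cpu_seconds:.2f} s")
print(f"CPU utilization: {cpu_seconds / elapsed:.0%}, peak memory: {peak_rss} KiB")
</syntaxhighlight>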
 
== Benchmark programs ==
It was a design choice from the start to include only very simple toy problems, each providing a different kind of programming challenge.<ref>{{cite web|url=https://benchmarksgame-team.pages.debian.net/benchmarksgame/why-measure-toy-benchmark-programs.html|title=Why toy programs? – The Computer Language Benchmarks Game|website=benchmarksgame-team.pages.debian.net/benchmarksgame|accessdate=29 May 2018}}</ref> This gives users of the Benchmarks Game the opportunity to scrutinize the various implementations.<ref>{{cite web|url=https://benchmarksgame-team.pages.debian.net/benchmarksgame/description/nbody.html#nbody|title=n-body description (64-bit Ubuntu quad core) – Computer Language Benchmarks Game|website=benchmarksgame-team.pages.debian.net/benchmarksgame|accessdate=29 May 2018}}</ref>

* [[Memory management#Dynamic memory allocation|binary-trees]]
* [[Synchronization (computer science)#Thread or process synchronization|chameneos-redux]]
* [[Permutation|fannkuch-redux]]
* [[FASTA format|fasta]]
* k-nucleotide
* [[Mandelbrot set|mandelbrot]]
* meteor-contest
* [[N-body simulation|n-body]]
* pidigits
* [[Regular expression|regex-redux]]
* reverse-complement
* spectral-norm
* [[Context switch#Multitasking|thread-ring]]
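
The toy problems are small enough to read in full. As a rough illustration of their scale, here is a minimal Python sketch of the core idea of ''binary-trees'' (stressing memory management by allocating and walking perfect binary trees); it is a simplified sketch, not an official solution, and the official workload uses larger depths:

<syntaxhighlight lang="python">
# binary-trees, simplified: repeatedly allocate perfect binary trees
# and traverse them, stressing allocation and garbage collection.

def make_tree(depth):
    """Build a perfect binary tree; a node is a (left, right) pair."""
    if depth == 0:
        return (None, None)
    return (make_tree(depth - 1), make_tree(depth - 1))

def check_tree(node):
    """Count the nodes of a tree built by make_tree."""
    left, right = node
    if left is None:
        return 1
    return 1 + check_tree(left) + check_tree(right)

if __name__ == "__main__":
    max_depth = 10  # the official workload uses depths up to 21
    for depth in range(4, max_depth + 1, 2):
        iterations = 2 ** (max_depth - depth + 4)
        checks = sum(check_tree(make_tree(depth)) for _ in range(iterations))
        print(f"{iterations} trees of depth {depth}, node count {checks}")
</syntaxhighlight>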
 
== History ==
The project was known as ''The Great Computer Language Shootout'' until 2007.<ref>{{cite web|url=https://benchmarksgame-team.pages.debian.net/benchmarksgame/sometimes-people-just-make-up-stuff.html#history|title=Trust, and verify – Computer Language Benchmarks Game|website=benchmarksgame-team.pages.debian.net/benchmarksgame|accessdate=29 May 2018}}</ref>
 
A port for Windows was maintained separately between 2002 and 2003.<ref>{{cite web|url=http://dada.perl.it/shootout/|title=The Great Win32 Computer Language Shootout|website=Dada.perl.it|accessdate=13 December 2017}}</ref>
 
The sources have been archived on GitLab.<ref>{{cite web|url=https://salsa.debian.org/benchmarksgame-team/archive-alioth-benchmarksgame|title=archive-alioth-benchmarksgame|website=salsa.debian.org/benchmarksgame-team|accessdate=29 May 2018}}</ref>
 
There are also older forks on GitHub.<ref>{{cite web|url=https://github.com/Byron/benchmarksgame-cvs-mirror|title=benchmarksgame-cvs-mirror: A git mirror of the benchmarksgame cvs repository|first=Sebastian|last=Thiel|date=24 October 2017|publisher=[[GitHub]]|accessdate=13 December 2017}}</ref>
Information about the project's history and lineage can be found at [[WikiWikiWeb]].<ref>{{cite web|url=http://wiki.c2.com/?GreatComputerLanguageShootout|title=GreatComputerLanguageShootout|website=wiki.c2.com}}</ref><ref>{{cite web|url=http://wiki.c2.com/?ComputerLanguageBenchmarksGame|title=ComputerLanguageBenchmarksGame|website=wiki.c2.com}}</ref>
 
The project is continuously evolving. The list of supported programming languages is updated approximately once per year, following market trends. Users can also submit improved solutions to any of the problems or suggest refinements to the testing methodology.<ref>{{cite web|url=https://benchmarksgame-team.pages.debian.net/benchmarksgame/play.html|title=Contribute your own program – Computer Language Benchmarks Game|website=benchmarksgame-team.pages.debian.net/benchmarksgame|accessdate=29 May 2018}}</ref>
 
== Caveats ==
 
The developers themselves stress that researchers should exercise caution when drawing conclusions from such microbenchmarks:
 
{{quotation|[...] the JavaScript benchmarks are fleetingly small, and behave in ways that are significantly different than the real applications. We have documented numerous differences in behavior, and we conclude from these measured differences that results based on the benchmarks may mislead JavaScript engine implementers. Furthermore, we observe interesting behaviors in real JavaScript applications that the benchmarks fail to exhibit, suggesting that previously unexplored optimization strategies may be productive in practice.|https://benchmarksgame.alioth.debian.org/for-programming-language-researchers.html}}
 
== Impact ==
 
The benchmark results have uncovered various compiler issues. Sometimes a given compiler failed to process unusual but grammatically valid constructs. At other times, runtime performance was shown to be below expectations, which prompted compiler developers to revise their optimization capabilities.
 
Various research articles have been based on the benchmarks, their results, and their methodology.<ref>
{{cite journal|author1=Kevin Williams|author2=Jason McCandless|author3=David Gregg|title=Dynamic Interpretation for Dynamic Scripting Languages|date=2009|url=https://www.scss.tcd.ie/publications/tech-reports/reports.09/TCD-CS-2009-37.pdf|accessdate=25 March 2017}}</ref><ref>
{{cite conference|author1=Tobias Wrigstad|author2=Francesco Zappa Nardelli|author3=Sylvain Lebresne|author4=Johan Östlund|author5=Jan Vitek|title=Integrating Typed and Untyped Code in a Scripting Language|date=January 17–23, 2010|conference=POPL '10|url=https://www.di.ens.fr/~zappa/projects/liketypes/paper.pdf|accessdate=25 March 2017|___location=Madrid, Spain}}</ref><ref>
{{cite conference|last1=Lerche|first1=Carl|title=Write Fast Ruby: It's All About the Science|conference=Golden Gate Ruby Conference|date=April 17–18, 2009|url=http://2009.gogaruco.com/downloads/Wrap2009.pdf|accessdate=25 March 2017|___location=San Francisco, California}}</ref><ref>
{{cite conference|author1=J. Shirako|author2=D. M. Peixotto|author3=V. Sarkar|author4=W. N. Scherer III|title=Phaser Accumulators: A New Reduction Construct for Dynamic Parallelism|date=2009|conference=IEEE International Symposium on Parallel & Distributed Processing|url=http://www.cs.rice.edu/~vs3/PDF/ipdps09-accumulators-final-submission.pdf|accessdate=25 March 2017}}</ref><ref>
{{cite journal|author=Rajesh Karmani and Amin Shali and Gul Agha|title=Actor frameworks for the JVM platform: A Comparative Analysis|journal=Proceedings of the 7th International Conference on the Principles and Practice of Programming in Java|year=2009|url=http://osl.cs.illinois.edu/docs/pppj09/paper.pdf|accessdate=26 March 2017}}</ref><ref>
{{cite conference|first=Stefan|last=Brunthaler|title=Inline Caching Meets Quickening|date=2010|conference=European Conference on Object-Oriented Programming (ECOOP)|pages=429–451|doi=10.1007/978-3-642-14107-2_21}}</ref><ref>
{{cite conference|author1=Prodromos Gerakios|author2=Nikolaos Papaspyrou|author3=Konstantinos Sagonas|title=Race-free and Memory-safe Multithreading: Design and Implementation in Cyclone|date=January 23, 2010|conference=Proceedings of the 5th ACM SIGPLAN workshop on Types in language design and implementation|pages=15–26|url=http://www.softlab.ntua.gr/research/techrep/CSD-SW-TR-8-09.pdf|accessdate=25 March 2017|___location=Madrid, Spain}}</ref><ref>
{{cite conference|author1=Slava Pestov|author2=Daniel Ehrenberg|author3=Joe Groff|title=Factor: A Dynamic Stack-based Programming Language|date=October 18, 2010|conference=DLS 2010|url=http://factorcode.org/littledan/dls.pdf|accessdate=25 March 2017|___location=Reno/Tahoe, Nevada, USA}}</ref><ref>
{{cite conference|author1=Andrei Homescu|author2=Alex Suhan|title=HappyJIT: A Tracing JIT Compiler for PHP|date=October 24, 2011|conference=DLS '11|url=https://www.ics.uci.edu/~ahomescu/happyjit_paper.pdf|accessdate=25 March 2017|___location=Portland, Oregon, USA}}</ref><ref>
{{cite conference|author1=Vincent St-Amour|author2=Sam Tobin-Hochstadt|author3=Matthias Felleisen|title=Optimization Coaching - Optimizers Learn to Communicate with Programmers|date=October 19–26, 2012|conference=OOPSLA '12|url=http://www.ccs.neu.edu/racket/pubs/oopsla12-stf.pdf|accessdate=25 March 2017|___location=Tucson, Arizona, USA}}</ref><ref>
{{cite conference|author1=Wing Hang Li|author2=David R. White|author3=Jeremy Singer|title=JVM-Hosted Languages: They Talk the Talk, but do they Walk the Walk?|date=September 11–13, 2013|conference=Proceedings of the 2013 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools|pages=101–112|url=http://www.dcs.gla.ac.uk/~wingli/jvm_language_study/jvmlanguages.pdf|accessdate=25 March 2017|___location=Stuttgart, Germany}}</ref><ref>
{{cite conference|author1=Aibek Sarimbekov|author2=Andrej Podzimek|author3=Lubomir Bulej|author4=Yudi Zheng|author5=Nathan Ricci|author6=Walter Binder|title=Characteristics of Dynamic JVM Languages|date=October 28, 2013|conference=VMIL '13|url=http://d3s.mff.cuni.cz/publications/download/Sarimbekov-vmil13.pdf|accessdate=25 March 2017|___location=Indianapolis, Indiana, USA}}</ref><ref>
{{cite conference|author1=Bradford L. Chamberlain|author2=Ben Albrecht|author3=Lydia Duncan|author4=Ben Harshbarger|title=Entering the Fray: Chapel's Computer Language Benchmark Game Entry|date=2017|url=http://chapel.cray.com/CHIUW/2017/chamberlain-abstract.pdf|accessdate=25 March 2017}}</ref>{{Excessive citations inline|date=April 2023}}
 
== See also ==
* [[Benchmark (computing)]]
* [[Comparison of programming languages]]
 
== References ==
{{Reflist|30em}}

== External links ==
* {{Official website}}
 
[[Category:Programming language comparisons| ]]
[[Category:Benchmarks (computing)]]