===Advantages===
The advantages of concurrent computing include:
 
* Increased program throughput—parallel execution of a concurrent algorithm allows the number of tasks completed in a given time to increase proportionally to the number of processors according to [[Gustafson's law]].<ref>{{Cite book |last=Padua |first=David |title=Encyclopedia of Parallel Computing |publisher=Springer New York, NY |year=2011 |isbn=978-0-387-09765-7 |publication-date=September 8, 2011 |pages=819–825 |language=en}}</ref>
* High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would otherwise be spent waiting to be used for another task (see the sketch following this list).<ref>{{Citation |title=Asynchronous I/O |date=2024-12-20 |work=Wikipedia |url=https://en.wikipedia.org/wiki/Asynchronous_I/O |access-date=2024-12-27 |language=en}}</ref>
* More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes. One example is [[Multiversion concurrency control|multiversion concurrency control]] (MVCC).
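
In a common formulation of [[Gustafson's law]], the scaled speedup on <math>N</math> processors is <math>S(N) = s + (1 - s)N</math>, where <math>s</math> is the serial fraction of the workload. As a minimal sketch of the input/output case (the <code>fetch</code> helper below is hypothetical, standing in for a real network or disk call), the following Go program overlaps five simulated waits so that the total elapsed time stays close to that of a single wait:

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"sync"
	"time"
)

// fetch simulates an input/output-bound task: the goroutine spends
// most of its time waiting, so other tasks can run in the meantime.
func fetch(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	time.Sleep(100 * time.Millisecond) // stand-in for a network or disk wait
	fmt.Printf("task %d done\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go fetch(i, &wg) // all five waits overlap instead of running back to back
	}
	wg.Wait() // total elapsed time is about 100 ms, not 500 ms
}
</syntaxhighlight>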
 
==Models==
;Shared memory communication: Concurrent components communicate by altering the contents of [[shared memory (interprocess communication)|shared memory]] locations (exemplified by [[Java (programming language)|Java]] and [[C Sharp (programming language)|C#]]). This style of concurrent programming usually requires some form of locking (e.g., [[Mutual exclusion|mutexes]], [[Semaphore (programming)|semaphores]], or [[Monitor (synchronization)|monitors]]) to coordinate between threads. A program that properly implements any of these is said to be [[Thread safety|thread-safe]].
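
As a minimal sketch of this lock-based pattern (written here in Go rather than the Java or C# monitors named above, chosen only for brevity), a mutex serializes access to a shared counter:

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock() // mutual exclusion: one goroutine in the critical section at a time
				counter++ // shared memory is only touched while the lock is held
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 4000; without the mutex the result would be unpredictable
}
</syntaxhighlight>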
 
;Message passing communication: Concurrent components communicate by [[message passing|exchanging messages]] (exemplified by [[Open MPI|MPI]], [[Go (programming language)|Go]], [[Scala (programming language)|Scala]], [[Erlang (programming language)|Erlang]] and [[occam (programming language)|occam]]). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming.{{Citation needed|date=May 2013}} A wide variety of mathematical theories for understanding and analyzing message-passing systems are available, including the [[actor model]] and various [[process calculi]]. Message passing can be efficiently implemented via [[symmetric multiprocessing]], with or without shared memory [[cache coherence]].
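
A minimal Go sketch of the two styles described above: an unbuffered channel gives the synchronous "rendezvous" behavior (the sender blocks until a receiver is ready), while a buffered channel permits asynchronous sends up to its capacity:

<syntaxhighlight lang="go">
package main

import "fmt"

func main() {
	// Unbuffered channel: a send blocks until a receiver is ready,
	// giving the synchronous "rendezvous" style.
	rv := make(chan string)
	go func() { rv <- "hello" }() // this send waits for the receive below
	fmt.Println(<-rv)

	// Buffered channel: sends complete without a waiting receiver,
	// up to the buffer's capacity (asynchronous message passing).
	buf := make(chan string, 2)
	buf <- "first"  // does not block
	buf <- "second" // does not block
	fmt.Println(<-buf, <-buf)
}
</syntaxhighlight>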
 
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
Concurrent computing developed out of earlier work on railroads and [[telegraphy]] in the 19th and early 20th centuries, and some terms, such as semaphores, date to this period. These arose to address the questions of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via [[time-division multiplexing]] (1870s).
 
The academic study of concurrent algorithms started in the 1960s, with {{Harvtxt|Dijkstra|1965}} credited with being the first paper in this field, identifying and solving [[mutual exclusion]].<ref>{{Cite report |url=http://www.podc.org/influential/2002.html |title=PODC Influential Paper Award: 2002 |work=ACM Symposium on Principles of Distributed Computing |access-date=2009-08-24}}</ref>
 
==Prevalence==
[[List of concurrent programming languages|Concurrent programming languages]] are programming languages that use language constructs for [[concurrency (computer science)|concurrency]]. These constructs may involve [[Thread (computer science)|multi-threading]], support for [[distributed computing]], [[message passing programming|message passing]], [[sharing|shared resources]] (including [[Parallel Random Access Machine|shared memory]]) or [[futures and promises]]. Such languages are sometimes described as ''concurrency-oriented languages'' or ''concurrency-oriented programming languages'' (COPL).<ref name="armstrong2003">{{cite web |last1=Armstrong |first1=Joe |year=2003 |title=Making reliable distributed systems in the presence of software errors |url=http://www.diva-portal.org/smash/get/diva2:9492/FULLTEXT01.pdf |archive-url=https://web.archive.org/web/20160415213739/http://www.diva-portal.org/smash/get/diva2:9492/FULLTEXT01.pdf |archive-date=2016-04-15}}</ref>
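
As an illustration of one listed construct, [[futures and promises]], the following Go sketch (the <code>square</code> helper is hypothetical, used only as the deferred computation) models a future as a channel that eventually delivers a result:

<syntaxhighlight lang="go">
package main

import "fmt"

// square is a hypothetical deferred computation used for illustration.
func square(x int) int { return x * x }

// future starts the computation in its own goroutine and returns a
// channel that will eventually deliver the result.
func future(x int) <-chan int {
	ch := make(chan int, 1) // buffered so the producer never blocks
	go func() { ch <- square(x) }()
	return ch
}

func main() {
	f := future(7)                  // computation runs concurrently
	fmt.Println("doing other work") // the caller proceeds without waiting
	fmt.Println(<-f)                // blocks only when the value is demanded; prints 49
}
</syntaxhighlight>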
 
Today, the most commonly used programming languages that have specific constructs for concurrency are [[Java (programming language)|Java]] and [[C Sharp (programming language)|C#]]. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by [[Monitor (synchronization)|monitors]] (although message-passing models can be and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, [[Erlang (programming language)|Erlang]] was probably the most widely used in industry as of 2010.{{Citation needed|date=August 2010}}
 
Many concurrent programming languages have been developed more as research languages (e.g., [[Pict (programming language)|Pict]]) than as languages for production use. However, languages such as [[Erlang (programming language)|Erlang]], [[Limbo (programming language)|Limbo]], and [[occam (programming language)|occam]] have seen industrial use at various times in the last 20 years. A non-exhaustive list of languages that use or provide concurrent programming facilities:
 
* [[Ada (programming language)|Ada]]—general purpose, with native support for message passing and monitor based concurrency
* [[C++]]—thread and coroutine support libraries<ref>{{Cite web |title=Standard library header <thread> (C++11) |url=https://en.cppreference.com/w/cpp/header/thread |access-date=2024-10-03 |website=en.cppreference.com}}</ref><ref>{{Cite web |title=Standard library header <coroutine> (C++20) |url=https://en.cppreference.com/w/cpp/header/coroutine |access-date=2024-10-03 |website=en.cppreference.com}}</ref>
* [[Cω]] (C omega)—for research, extends C#, uses asynchronous communication
* [[C Sharp (programming language)|C#]]—supports concurrent computing using {{Mono|lock}} and {{Mono|yield}}; the {{Mono|async}} and {{Mono|await}} keywords were introduced in version 5.0
* [[Clojure]]—modern, [[functional programming|functional]] dialect of [[Lisp (programming language)|Lisp]] on the [[Java (software platform)|Java]] platform
* [[Concurrent Clean]]—functional programming, similar to [[Haskell (programming language)|Haskell]]
* [[Concurrent Collections]] (CnC)—achieves implicit parallelism independent of memory model by explicitly defining flow of data and control
* [[Concurrent Haskell]]—lazy, pure functional language operating concurrent processes on shared memory
* [[Fortran]]—[[Coarray Fortran|coarrays]] and ''do concurrent'' are part of Fortran 2008 standard
* [[Go (programming language)|Go]]—for system programming, with a concurrent programming model based on [[Communicating sequential processes|CSP]]
* [[Haskell (programming language)|Haskell]]—concurrent and parallel functional programming language<ref>{{Cite book |last=Marlow |first=Simon |year=2013 |title=Parallel and Concurrent Programming in Haskell: Techniques for Multicore and Multithreaded Programming |publisher=O'Reilly Media |isbn=9781449335946}}</ref>
* [[Hume (programming language)|Hume]]—functional, concurrent, for bounded space and time environments, where automata processes are described by synchronous channel patterns and message passing
* [[Io (programming language)|Io]]—actor-based concurrency
* [[PHP]]—multithreading support with the parallel extension, implementing message passing inspired by [[Go (programming language)|Go]]<ref>{{Cite web |title=PHP: parallel - Manual |url=https://www.php.net/manual/en/book.parallel.php |access-date=2024-10-03 |website=www.php.net |language=en}}</ref>
* [[Pict (programming language)|Pict]]—essentially an executable implementation of Milner's [[π-calculus]]
* [[Python (programming language)|Python]]—uses thread-based parallelism and process-based parallelism<ref>{{Cite web |title=Concurrent Execution |url=https://docs.python.org/3/library/concurrency.html |website=docs.python.org}}</ref>
* [[Raku (programming language)|Raku]]—includes classes for threads, promises and channels by default<ref>{{Cite web |url=https://docs.perl6.org/language/concurrency |title=Concurrency |website=docs.perl6.org |language=en |access-date=2017-12-24}}</ref>
* [[Reia (programming language)|Reia]]—uses asynchronous message passing between shared-nothing objects
* [[Red (programming language)|Red/System]]—for system programming, based on [[Rebol]]