|title=Parallelism vs. Concurrency
|work=Haskell Wiki
}}</ref> although both can be described as "multiple processes executing ''during the same period of time''". In parallel computing, execution occurs at the same physical instant: for example, on separate [[central processing unit|processors]] of a [[multi-processor]] machine, with the goal of speeding up computations—parallel computing is impossible on a ([[Multi-core processor|one-core]]) single processor, as only one computation can occur at any instant (during any single clock cycle).{{efn|This is discounting parallelism internal to a processor core, such as pipelining or vectorized instructions. A one-core, one-processor ''machine'' may be capable of some parallelism, such as with a [[coprocessor]], but the processor alone is not.}} By contrast, concurrent computing consists of process ''lifetimes'' overlapping, but execution need not happen at the same instant.
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via [[time-sharing]] slices: only one process runs at a time, and if it does not complete during its time slice, it is ''paused'', another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.{{citation needed|date=December 2016}}
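The following is a minimal illustrative sketch (not drawn from any particular source) of such interleaving, using Python's <code>asyncio</code>: the two tasks' lifetimes overlap on a single event-loop thread, yet only one of them executes at any given instant.

<syntaxhighlight lang="python">
import asyncio

async def worker(name: str, steps: int) -> None:
    # Each await is a point where this task pauses and the scheduler
    # may resume the other task, interleaving their execution steps.
    for i in range(steps):
        print(f"{name}: step {i}")
        await asyncio.sleep(0)

async def main() -> None:
    # Both tasks are "in progress" during the same period of time,
    # but only one runs at any instant on the single event-loop thread.
    await asyncio.gather(worker("A", 3), worker("B", 3))

asyncio.run(main())
</syntaxhighlight>

Cooperative scheduling of this kind interleaves tasks at explicit yield points, whereas preemptive time-sharing, as described above, pauses a process when its time slice expires; in both cases the processes are concurrent without being parallel.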