Thread (computing)

{{short description|Smallest sequence of programmed instructions that can be managed independently by a scheduler}}
[[File:Multithreaded process.svg|thumb|A process with two threads of execution, running on one [[Processor (computing)|processor]]]]
[[File:Concepts- Program vs. Process vs. Thread.jpg|thumb|[[Computer program|Program]] vs. [[Process (computing)|process]] vs. thread <br/>[[Scheduling (computing)|scheduling]], [[Preemption (computing)|preemption]], [[Context switch|context switching]]|400x400px]]
 
In [[computer science]], a '''thread''' of [[Execution (computing)|execution]] is the smallest sequence of programmed instructions that can be managed independently by a [[scheduling (computing)|scheduler]], which is typically a part of the [[operating system]].<ref>{{Cite journal |last=Lamport |first=Leslie |author-link=Leslie Lamport |date=September 1979 |title=How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs |url=http://research.microsoft.com/en-us/um/people/lamport/pubs/multi.pdf |journal=IEEE Transactions on Computers |volume=C-28 |pages=690–691 |doi=10.1109/tc.1979.1675439 |number=9 |s2cid=5679366}}</ref> In many cases, a thread is a component of a [[Process (computing)|process]].
== History ==
{{expand section|date=February 2021}}
Threads made an early appearance under the name of "tasks" in IBM's batch-processing operating system, [[OS/360]], in 1967. It provided users with three available configurations of the OS/360 control system, of which [[OS/360 and successors#MVT|multiprogramming with a variable number of tasks]] (MVT) was one. Saltzer (1966) credits [[Victor A. Vyssotsky]] with the term "thread".<ref>{{Cite thesis|url=http://web.mit.edu/Saltzer/www/publications/MIT-MAC-TR-030.ocr.pdf|title=Traffic Control in a Multiplexed Computer System|first=Jerome Howard|last=Saltzer|author-link=Jerry Saltzer|degree=Doctor of Science|date=July 1966|page=20}}</ref>
 
The use of threads in software applications became more common in the early 2000s as CPUs began to incorporate multiple cores. To gain a performance benefit from the additional cores, applications were required to employ concurrency.<ref>{{Cite journal |first=Herb |last=Sutter |author-link=Herb Sutter |title=The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software |url=http://gotw.ca/publications/concurrency-ddj.htm |journal=[[Dr. Dobb's Journal]] |volume=30 |issue=3 |date=March 2005}}</ref>
===Processes===
{{Main|Process (computing)}}
At the kernel level, a ''process'' contains one or more ''kernel threads'', which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniform, done either preemptively or, less commonly, cooperatively. At the user level, a process such as a [[runtime system]] can itself schedule multiple threads of execution. If these do not share data, as in [[Erlang (programming language)|Erlang]], they are usually analogously called processes,<ref>{{Cite web |title=Erlang: 3.1 Processes |url=http://www.erlang.org/doc/getting_started/conc_prog.html}}</ref> while if they share data they are usually called ''(user) threads'', particularly if preemptively scheduled. Cooperatively scheduled user threads are known as ''[[fiber (computer science)|fibers]]''; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term ''[[light-weight process]]'' variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads.
 
A ''process'' is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own [[Resource (computer science)|resources]] allocated by the operating system. Resources include memory (for both code and data), [[Handle (computing)|file handles]], sockets, device handles, windows, and a [[process control block]]. Processes are ''isolated'' by [[process isolation]], and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – see [[interprocess communication]]. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond the basic cost of [[context switching]], due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged [[translation lookaside buffer]] (TLB), notably on [[x86]]).
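
As a minimal illustration (POSIX C; error handling kept to a minimum), the following sketch creates a second process with <code>fork()</code>. The child receives its own copy of the parent's address space, so a change made by one process is not visible to the other:
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int value = 1;                  /* each process ends up with its own copy */
    pid_t pid = fork();             /* create a new, isolated process */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                 /* child process */
        value = 42;                 /* modifies only the child's copy */
        printf("child:  value = %d\n", value);
        return EXIT_SUCCESS;
    }
    waitpid(pid, NULL, 0);          /* parent waits for the child to exit */
    printf("parent: value = %d\n", value);   /* still 1: address spaces are separate */
    return EXIT_SUCCESS;
}
</syntaxhighlight>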
 
===Kernel threads===
{{anchor|kernel thread}}
A ''kernel thread'' is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process [[Scheduling (computing)|scheduler]] is preemptive. Kernel threads do not own resources except for a [[call stack|stack]], a copy of the [[processor register|registers]] including the [[program counter]], and [[thread-local storage]] (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (with each core able to run more than one thread simultaneously if it supports hardware multithreading), and can swap out threads that become blocked. However, kernel threads take much longer than user threads to be swapped.
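
As a minimal sketch using [[POSIX Threads]] (the names <code>worker</code> and <code>counter</code> are illustrative), two kernel threads created within one process share the same global memory, so a mutex is used to coordinate their updates:
<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                         /* shared by all threads in the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);               /* coordinate access to shared memory */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);     /* both threads see the same 'counter' */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);          /* 200000: updates from both threads */
    return 0;
}
</syntaxhighlight>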
 
===User threads===
However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. A typical example of this problem occurs when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made and does not return until the operation has completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process of the chance to execute.
 
A common solution to this problem (used, in particular, by many green threads implementations) is to provide an I/O [[API]] that blocks only the calling thread, rather than the entire process, by using non-blocking I/O internally and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/[[await]] primitives<ref>{{Cite AV media |first=Sergey |last=Ignatchenko |title=Eight Ways to Handle Non-blocking Returns in Message-passing Programs: from C++98 via C++11 to C++20 |url=https://www.youtube.com/watch?v=6lXUrvlMXNU |url-status=bot: unknown |archive-url=https://web.archive.org/web/20201125224240/https://www.youtube.com/watch?v=6lXUrvlMXNU&gl=US&hl=en |archive-date=2020-11-25 |publisher=CPPCON |access-date=2020-11-24 }}</ref>).
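
A rough sketch of the non-blocking approach in POSIX C follows (the <code>schedule_other_fiber()</code> helper is hypothetical and stands in for a user-level scheduler; a real implementation would typically also use <code>poll()</code> or <code>select()</code> rather than retrying in a loop):
<syntaxhighlight lang="c">
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for a user-level scheduler that runs another fiber. */
static void schedule_other_fiber(void) { /* run some other user thread or fiber */ }

int main(void) {
    int fd = STDIN_FILENO;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);   /* switch to non-blocking mode */

    char buf[256];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) {
            printf("read %zd bytes\n", n);
            break;
        }
        if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            schedule_other_fiber();     /* no data yet: let other work proceed */
            continue;
        }
        break;                          /* end of file or unrecoverable error */
    }
    return 0;
}
</syntaxhighlight>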
 
===Fibers===
{{Main|Fiber (computer science)}}
[[Fiber (computer science)|Fibers]] are an even lighter unit of scheduling which are [[cooperative multitasking|cooperatively scheduled]]: a running fiber must explicitly ''[[Yield (multithreading)|yield]]'' to allow another fiber to run, which makes their implementation much easier than that of kernel or [[#User threads|user threads]]. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of the [[OpenMP]] parallel programming model implement their tasks through fibers.<ref>{{Cite conference |date=September 2022 |title=Enhancing MPI+OpenMP Task Based Applications for Heterogeneous Architectures with GPU support |url=https://hal-cea.archives-ouvertes.fr/cea-03749364/file/2022_iwomp_omp-target.pdf |conference=IWOMP 2022: 18th International Workshop on OpenMP |book-title=OpenMP in a Modern World: From Multi-device Support to Meta Programming. |series=Lecture Notes in Computer Science |doi=10.1007/978-3-031-15922-0_1 |last1=Ferat |first1=Manuel |last2=Pereira |first2=Romain |last3=Roussel |first3=Adrien |last4=Carribault |first4=Patrick |last5=Steffenel |first5=Luiz-Angelo |last6=Gautier |first6=Thierry |volume=13527 |pages=3–16 |isbn=978-3-031-15921-3 |s2cid=251692327 }}</ref><ref>{{Cite conference |title=BOLT: Optimizing OpenMP Parallel Regions with User-Level Threads |first1=Shintaro |last1=Iwasaki |first2=Abdelhalim |last2=Amer |first3=Kenjiro |last3=Taura |first4=Sangmin |last4=Seo |first5=Pavan |last5=Balaji |conference=The 28th International Conference on Parallel Architectures and Compilation Techniques |url=https://www.mcs.anl.gov/~iwasaki/pdfs/papers/PACT2019_paper.pdf}}</ref> Closely related to fibers are [[coroutine]]s, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct.
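
A minimal sketch of cooperative scheduling between <code>main</code> and a single fiber, built on the POSIX <code>&lt;ucontext.h&gt;</code> interface (one possible substrate for fibers; the names are illustrative):
<syntaxhighlight lang="c">
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;

static void fiber_fn(void) {
    puts("fiber: part one");
    swapcontext(&fiber_ctx, &main_ctx);   /* explicit yield back to main */
    puts("fiber: part two");
}                                         /* uc_link returns control to main */

int main(void) {
    static char stack[64 * 1024];         /* the fiber's private stack */

    getcontext(&fiber_ctx);
    fiber_ctx.uc_stack.ss_sp = stack;
    fiber_ctx.uc_stack.ss_size = sizeof stack;
    fiber_ctx.uc_link = &main_ctx;        /* where to go when fiber_fn returns */
    makecontext(&fiber_ctx, fiber_fn, 0);

    swapcontext(&main_ctx, &fiber_ctx);   /* run the fiber until it yields */
    puts("main: fiber yielded");
    swapcontext(&main_ctx, &fiber_ctx);   /* resume the fiber to completion */
    puts("main: fiber finished");
    return 0;
}
</syntaxhighlight>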
 
===Threads vs processes===
==Scheduling==
===Preemptive vs cooperative scheduling===
Operating systems schedule threads either [[Preemption (computing)|preemptively]] or [[Cooperative multitasking|cooperatively]]. [[Operating system#Single- and multi-user|Multi-user operating systems]] generally favor [[preemptive multithreading]] for its finer-grained control over execution time via [[context switch]]ing. However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing [[lock convoy]], [[priority inversion]], or other side-effects. In contrast, [[cooperative multithreading]] relies on threads to relinquish control of execution, thus ensuring that threads [[Run to completion scheduling|run to completion]]. This can cause problems if a cooperatively multitasked thread [[Blocking (computing)|blocks]] by waiting on a [[Resource (computer science)|resource]] or if it [[Starvation (computer science)|starves]] other threads by not yielding control of execution during intensive computation.
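
The cooperative pattern of voluntarily relinquishing the processor can be sketched as follows (POSIX C; under a preemptive kernel scheduler <code>sched_yield()</code> is only a hint, but it illustrates a thread yielding control of execution during an intensive computation instead of starving its peers):
<syntaxhighlight lang="c">
#include <sched.h>
#include <stdio.h>

/* A long-running computation that periodically offers the processor to
   other runnable threads instead of monopolizing it. */
static unsigned long compute(unsigned long iterations) {
    unsigned long acc = 0;
    for (unsigned long i = 0; i < iterations; i++) {
        acc += i;                /* stand-in for real work */
        if (i % 100000 == 0)
            sched_yield();       /* voluntarily relinquish control of execution */
    }
    return acc;
}

int main(void) {
    printf("result = %lu\n", compute(1000000));
    return 0;
}
</syntaxhighlight>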
 
=== Single- vs multi-processor systems ===
Multithreaded applications have the following advantages over single-threaded ones:
* ''Responsiveness'': multithreading can allow an application to remain responsive to input. In a single-threaded program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a ''worker thread'' that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with [[non-blocking I/O]] and/or [[Unix signals]] being available for obtaining similar results.<ref>{{Cite journal |first=Sergey |last=Ignatchenko |title=Single-Threading: Back to the Future? |url=http://accu.org/index.php/journals/1634 |journal=[[Overload (magazine)|Overload]] |issue=97 |pages=16–19 |date=August 2010 |publisher=[[ACCU (organisation)|ACCU]]}}</ref>
* ''Parallelization'': applications looking to use multicore or multi-CPU systems can use multithreading to split data and tasks into parallel subtasks and let the underlying architecture manage how the threads run, either concurrently on one core or in parallel on multiple cores (a minimal sketch of this pattern follows this list). GPU computing environments like [[CUDA]] and [[OpenCL]] use the multithreading model where dozens to hundreds of threads run in [[Data parallelism|parallel across data]] on a [[Manycore processor|large number of cores]]. This, in turn, enables better system utilization and, provided that synchronization costs do not outweigh the benefits, can provide faster program execution.
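
A minimal sketch of the parallelization pattern using [[POSIX Threads]] (the array contents and the two-way split are illustrative), in which the data are divided into subranges and each thread sums its own part in parallel:
<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static double data[N];

struct range { int begin, end; double sum; };

/* Each worker sums one contiguous subrange of the shared array. */
static void *partial_sum(void *arg) {
    struct range *r = arg;
    r->sum = 0.0;
    for (int i = r->begin; i < r->end; i++)
        r->sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = 1.0;                       /* illustrative input */

    struct range parts[2] = { {0, N / 2, 0.0}, {N / 2, N, 0.0} };
    pthread_t threads[2];

    for (int i = 0; i < 2; i++)              /* split the task into parallel subtasks */
        pthread_create(&threads[i], NULL, partial_sum, &parts[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);

    printf("total = %.0f\n", parts[0].sum + parts[1].sum);   /* 1000000 */
    return 0;
}
</syntaxhighlight>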
 
Multithreaded applications have the following drawbacks:
== Programming language support ==
 
* IBM [[PL/I]](F) included support for multithreading (called ''multitasking'') as early as the late 1960s, and this was continued in the Optimizing Compiler and later versions. The IBM Enterprise PL/I compiler introduced a new "thread" API model. Neither version was part of the PL/I standard.
* Many implementations of [[C (programming language)|C]] and [[C++]] support threading, and provide access to the native threading APIs of the operating system. A standardized interface for thread implementation is [[POSIX Threads]] (Pthreads), which is a set of C function library calls. OS vendors are free to implement the interface as desired, but the application developer should be able to use the same interface across multiple platforms. Most [[Unix]] platforms, including Linux, support Pthreads. Microsoft Windows has its own set of thread functions in the [[process.h]] interface for multithreading, such as ''beginthread'' (a sketch of this interface follows the list).
* Some [[high-level programming language|higher-level]] (and usually [[cross-platform]]) programming languages, such as [[Java (programming language)|Java]], [[Python (programming language)|Python]], and [[.NET Framework]] languages, expose threading to developers while abstracting the platform-specific differences in threading implementations in the runtime. Several other programming languages and language extensions also try to fully abstract the concept of concurrency and threading from the developer ([[Cilk]], [[OpenMP]], [[Message Passing Interface]] (MPI)). Some languages are designed for sequential parallelism instead (especially using GPUs), without requiring concurrency or threads ([[Ateji PX]], [[CUDA]]).
* A few interpreted programming languages have implementations (e.g., [[Ruby MRI]] for Ruby, [[CPython]] for Python) which support threading and concurrency but not parallel execution of threads, due to a [[global interpreter lock]] (GIL). The GIL is a mutual exclusion lock held by the interpreter that prevents it from interpreting the application's code on two or more threads at once, which effectively limits the parallelism on multi-core systems. It also limits performance for processor-bound threads (which require the processor), but does not affect I/O-bound or network-bound ones as much. Other implementations of interpreted programming languages, such as [[Tcl]] using the Thread extension, avoid the GIL limit by using an Apartment model where data and code must be explicitly "shared" between threads. In Tcl each thread has one or more interpreters.
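
For comparison with the Pthreads examples above, a minimal sketch of the Windows CRT interface mentioned earlier, here using <code>_beginthreadex</code> (the variant whose handle can be waited on); everything apart from the documented API calls is illustrative:
<syntaxhighlight lang="c">
#include <process.h>    /* _beginthreadex */
#include <stdint.h>
#include <stdio.h>
#include <windows.h>

/* Thread entry point: the CRT requires this exact signature. */
static unsigned __stdcall worker(void *arg) {
    printf("hello from thread %s\n", (const char *)arg);
    return 0;
}

int main(void) {
    static char name[] = "worker-1";         /* argument passed to the thread */

    uintptr_t handle = _beginthreadex(
        NULL,              /* default security attributes */
        0,                 /* default stack size */
        worker,            /* thread entry point */
        name,              /* argument for the entry point */
        0,                 /* run immediately */
        NULL);             /* thread identifier not needed */

    WaitForSingleObject((HANDLE)handle, INFINITE);   /* join the thread */
    CloseHandle((HANDLE)handle);                     /* release the handle */
    return 0;
}
</syntaxhighlight>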