Thread (computing)



Many programming languages, operating systems, and other software development environments support what are called "threads" of execution. Threads are similar to processes, in that both represent a single sequence of instructions executed in parallel with other sequences, either by time slicing or multiprocessing. Threads are a way for a program to split itself into two or more simultaneously running tasks. A common use of threads is to have one thread attend to the graphical user interface while others perform a long calculation in the background, so that the application responds more readily to the user's interactions.
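For example, in Java (mentioned below as a language that supports threads), a long calculation can be handed to a second thread so the first remains free to respond to the user. This is an illustrative sketch only; the class and variable names are hypothetical:

    public class BackgroundWork {
        public static void main(String[] args) throws InterruptedException {
            // Hand the long calculation to a second thread of execution.
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    long sum = 0;
                    for (long i = 0; i < 1000000000L; i++) {
                        sum += i;  // stand-in for a long calculation
                    }
                    System.out.println("Result: " + sum);
                }
            });
            worker.start();

            // Meanwhile this thread stays free, e.g. to service the user interface.
            System.out.println("Main thread is still responsive.");
            worker.join();  // wait for the background work to finish
        }
    }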

Threads are distinguished from traditional multi-tasking processes in that processes are typically independent, carry considerable state information, and interact only through system-provided inter-process communication mechanisms. Multiple threads, by contrast, typically share the state information of a single process, sharing memory and other resources directly. On operating systems that have special facilities for threads, it is typically faster for the system to context switch between different threads in the same process than to switch between different processes. Systems such as Windows NT and OS/2 are said to have "cheap" threads and "expensive" processes, while on systems such as Linux the difference in cost is much smaller.

An advantage of a multi-threaded program is that it can run faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer must be careful to avoid race conditions and other non-intuitive behaviors. For data to be manipulated correctly, threads often need to rendezvous in time in order to process it in the correct order. Threads may also require atomic operations (often implemented using semaphores) to prevent shared data from being modified simultaneously, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
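As a sketch of a race condition and one remedy, using Java's built-in monitor locks rather than explicit semaphores (all names here are illustrative): two threads incrementing a shared counter can interleave their read-modify-write steps and lose updates unless the increment is made atomic, and the main thread rendezvouses with both workers before reading the result:

    public class SharedCounter {
        private int count = 0;

        // synchronized makes the read-modify-write step atomic,
        // so two threads cannot interleave inside it.
        public synchronized void increment() {
            count++;
        }

        public static void main(String[] args) throws InterruptedException {
            final SharedCounter counter = new SharedCounter();
            Runnable task = new Runnable() {
                public void run() {
                    for (int i = 0; i < 100000; i++) {
                        counter.increment();
                    }
                }
            };
            Thread a = new Thread(task);
            Thread b = new Thread(task);
            a.start();
            b.start();
            a.join();  // rendezvous: wait for both threads to finish
            b.join();
            // Without synchronized, the total would often be less than 200000.
            System.out.println(counter.count);
        }
    }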

Use of threads in programming often introduces state inconsistencies. A common anti-pattern is to set a global variable, then invoke subprograms that depend on its value; another thread may change the variable between the two steps. This is known as [[accumulate and fire]].
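A hedged sketch of the anti-pattern in Java (the names are hypothetical): each thread writes a shared field and then calls a routine that reads it, but the other thread may overwrite the field in between, so the routine can "fire" with the wrong value:

    public class AccumulateAndFire {
        // Shared, global-style state: set first, then read by a later call.
        static int operand;

        static void fire() {
            // Depends on whatever value operand happens to hold right now.
            System.out.println("fired with " + operand);
        }

        public static void main(String[] args) {
            Runnable task = new Runnable() {
                public void run() {
                    operand = (int) (Math.random() * 100);  // accumulate
                    fire();  // fire: may observe the other thread's value instead
                }
            };
            new Thread(task).start();
            new Thread(task).start();
        }
    }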

Operating systems generally implement threads in one of two ways: preemptive multithreading or cooperative multithreading. Preemptive multithreading is generally considered the superior approach, as it allows the operating system to determine when a context switch should occur. Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they reach a stopping point. This can create problems if a thread blocks while waiting for a resource to become available, or never reaches a stopping point, since no other thread can run until it yields.
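Java threads are normally scheduled preemptively by the underlying operating system, but the cooperative style can be sketched with explicit yield points (Thread.yield is only a hint to the scheduler; the example is illustrative, not a true cooperative scheduler):

    public class CooperativeStyle {
        public static void main(String[] args) {
            Runnable politeTask = new Runnable() {
                public void run() {
                    for (int i = 0; i < 5; i++) {
                        System.out.println(Thread.currentThread().getName() + ": step " + i);
                        Thread.yield();  // stopping point: offer the CPU to other threads
                    }
                    // Under truly cooperative scheduling, a loop without such
                    // yield points would starve every other thread.
                }
            };
            new Thread(politeTask, "A").start();
            new Thread(politeTask, "B").start();
        }
    }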

The Java programming language is an example of a language with built-in support for multi-threaded programs.

Hardware support for software threads is provided by simultaneous multithreading. Intel introduced this feature in its Pentium 4 processor under the name Hyper-Threading.

An unrelated use of the term is threaded code, a form of code consisting entirely of subroutine calls, written without the subroutine call instruction and processed by an interpreter or the CPU. Two languages built on threaded code are Forth and early versions of the B programming language.
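A loose Java analogy of the idea (real threaded code stores subroutine addresses and is usually generated by a Forth compiler or written in assembly; the names here are hypothetical): the "program" is just a table of subroutines with no call instructions of its own, and a tiny inner interpreter walks the table invoking each one in turn:

    public class ThreadedCodeSketch {
        // Each "word" is a subroutine; the program is a list of them.
        static final Runnable HELLO = new Runnable() {
            public void run() { System.out.print("Hello"); }
        };
        static final Runnable COMMA = new Runnable() {
            public void run() { System.out.print(", "); }
        };
        static final Runnable WORLD = new Runnable() {
            public void run() { System.out.println("world"); }
        };

        public static void main(String[] args) {
            // The threaded program: a table of routines, no call instructions.
            Runnable[] program = { HELLO, COMMA, WORLD };

            // The inner interpreter: fetch the next routine and invoke it.
            for (Runnable word : program) {
                word.run();
            }
        }
    }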