{{Short description|Method by which work is assigned}}
{{About|scheduling in computing|other uses|Scheduling (disambiguation)}}
In [[computing]], '''scheduling''' is the action of assigning [[System resource|resources]] to perform [[task (computing)|tasks]]. The resources may be [[central processing unit|processors]], [[telecommunications link|network links]] or [[expansion card]]s. The tasks may be [[thread (computer science)|threads]], [[process (computing)|processes]] or data [[Traffic flow (computer networking)|flows]].
The scheduling activity is carried out by a mechanism called a '''scheduler'''. Schedulers are often designed so as to keep all computer resources busy (as in [[load balancing (computing)|load balancing]]), allow multiple users to share system resources effectively, or to achieve a target [[quality of service]]. Scheduling is fundamental to computation itself, and an intrinsic part of the [[execution model]] of a computer system; the concept of scheduling makes it possible to have [[computer multitasking]] with a single [[central processing unit]] (CPU).
==Goals==
A scheduler may aim at one or more of many goals, for example:
* <!-- throughput -->maximizing ''[[throughput]]'' (the total amount of work completed per time unit);
* <!-- wait time -->minimizing ''[[Computer performance#Response time|wait time]]'' (time from work becoming ready until the first point it begins execution);
* <!-- response_time/latency for batch systems -->minimizing ''[[latency (engineering)|latency]]'' or ''[[Response time (technology)|response time]]'' (time from work becoming enabled until it is finished in case of batch activity,<ref name="liu1973">{{Cite journal
| last1 = Liu
| first1 = C. L.
| last2 = Layland
| first2 = James W.
| title = Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment
| journal = [[Journal of the ACM]]
| volume = 20
| issue = 1
| pages = 46–61
| publisher = ACM
| date = January 1973
| quote = We define the response time of a request for a certain task to be the time span between the request and the end of the response to that request.
| doi = 10.1145/321738.321743
| doi-access = free
}}
</ref><ref name="kleinrock1976">{{cite book
| last = Kleinrock
| first = Leonard
| title = Queueing Systems, Vol. 2: Computer Applications
| publisher = Wiley-Interscience
| url = https://archive.org/details/queueingsystems00klei
| isbn = 047149111X
| quote = For a customer requiring x sec of service, his response time will equal his service time x plus his waiting time.
| edition = 1
| page = [https://archive.org/details/queueingsystems00klei/page/171 171]
| year = 1976
}}
</ref><ref name="feitelson2014">{{cite book
| last = Feitelson
| first = Dror G.
| title = Workload Modeling for Computer Systems Performance Evaluation
| publisher = Cambridge University Press
| url = http://www.cs.huji.ac.il/~feit/wlmod/
| isbn = 9781107078239
| quote = if we denote the time that a job waits in the queue by t<sub>w</sub>, and the time it actually runs by t<sub>r</sub>, then the response time is r = t<sub>w</sub> + t<sub>r</sub>.
| at = Section 8.4 (Page 422) in Version 1.03 of the freely available manuscript
| year = 2015}}
</ref><!-- response_time/latency for interactive systems --> or until the system responds and hands the first output to the user in case of interactive activity);<ref name="silberschatz2012">{{cite book
| last1 = Silberschatz
| first1 = Abraham
| last2 = Galvin
| first2 = Peter Baer
| last3 = Gagne
| first3 = Greg
| title = Operating System Concepts
| publisher = Wiley Publishing
| isbn = 0470128720
| quote = In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response.
| edition = 9
| page = 187
| year = 2012}}</ref>
* <!-- fairness -->maximizing ''fairness'' (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process).
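As a concrete illustration of how these goals are quantified, the following Python sketch (hypothetical job names, arrival times and burst times; a simple non-preemptive run-to-completion model) computes throughput, average wait time and average turnaround time for one possible execution order:

<syntaxhighlight lang="python">
# Hypothetical jobs: (name, arrival_time, burst_time), run to completion in list order.
jobs = [("A", 0, 5), ("B", 1, 3), ("C", 2, 8)]

t = 0
waits, turnarounds = [], []
for name, arrival, burst in jobs:
    start = max(t, arrival)              # CPU may sit idle until the job arrives
    waits.append(start - arrival)        # wait: from becoming ready to first execution
    t = start + burst
    turnarounds.append(t - arrival)      # turnaround: from becoming ready to completion

print("throughput:", len(jobs) / t)      # work completed per time unit
print("average wait time:", sum(waits) / len(jobs))
print("average turnaround time:", sum(turnarounds) / len(jobs))
</syntaxhighlight>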
In [[real-time computing|real-time]] environments, such as [[embedded system]]s for [[automatic control]] in industry (for example [[robotics]]), the scheduler also must ensure that processes can meet [[time limit|deadlines]]; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and [[device Management|managed]] through an administrative back end.
==Types of operating system schedulers==
The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: a ''long-term scheduler'' (also known as an admission scheduler or high-level scheduler), a ''mid-term or medium-term scheduler'', and a ''short-term scheduler''. The names suggest the relative frequency with which their functions are performed.
==={{Anchor|PROCESS}}Process scheduler===
The process scheduler is a part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as a ''[[Preemption (computing)|preemptive]] scheduler'', otherwise it is a ''[[Nonpreemptive multitasking|cooperative]] scheduler''.<ref>{{cite web|access-date=2023-06-19|author=Paul Krzyzanowski|date=2014-02-19|title=Process Scheduling: Who gets to run next?|url=https://www.cs.rutgers.edu/~pxk/416/notes/07-scheduling.html|website=cs.rutgers.edu}}</ref>
We distinguish between ''long-term scheduling'', ''medium-term scheduling'', and ''short-term scheduling'' based on how often decisions must be made.<ref>{{cite book |author=Raphael Finkel |author-link=Raphael Finkel |url=https://www.yumpu.com/en/document/read/32199214/an-operating-systems-vade-mecum |title=An Operating Systems Vade Mecum |publisher=Prentice Hall |date=1988 |chapter=Chapter 2: Time Management |page=27}}</ref>
===={{Anchor|LONG-TERM}}Long-term scheduling====
The ''long-term scheduler'', or ''admission scheduler'', decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, the degree of concurrency to be supported at any one time{{snd}} whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled.
In general, most processes can be described as either [[I/O-bound]] or [[CPU-bound]]. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that a long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. On the other hand, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks.<ref>{{cite book
| last1 = Silberschatz
| first1 = Abraham
| last2 = Galvin
| first2 = Peter Baer
| last3 = Gagne
| first3 = Greg
| title = Operating System Concepts
| edition = 9
| publisher = Wiley
| year = 2012
}}</ref>
Long-term scheduling is also important in large-scale systems such as [[batch processing]] systems, [[computer cluster]]s, [[supercomputer]]s, and [[render farm]]s. For example, in [[concurrent computing|concurrent systems]], [[coscheduling]] of interacting processes is often required to prevent them from blocking due to waiting on each other. In these cases, special-purpose [[job scheduler]] software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.
Some operating systems only allow new tasks to be added if it is certain that all real-time deadlines can still be met.
The specific heuristic algorithm used by an operating system to accept or reject new tasks is the ''admission control mechanism''.<ref>
Robert Kroeger (2004).
[https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.3977&rep=rep1&type=pdf "Admission Control for Independently-authored Realtime Applications"].
UWSpace.
http://hdl.handle.net/10012/1170 .
Section "2.6 Admission Control".
p. 33.
</ref>
===={{Anchor|MEDIUM-TERM}}Medium-term scheduling====
The ''medium-term scheduler'' temporarily removes processes from main memory and places them in secondary memory (such as a [[hard disk drive]]) or vice versa, which is commonly referred to as ''swapping out'' or ''swapping in'' (also incorrectly as ''[[paging]] out'' or ''paging in''). The medium-term scheduler may decide to swap out a process that has not been active for some time, a process that has a low priority, a process that is [[page fault]]ing frequently, or a process that is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available or when the process has been unblocked and is no longer waiting for a resource.
In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as swapped-out processes upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or ''lazy loaded'', also called [[demand paging]].
===={{Anchor|SHORT-TERM}}Short-term scheduling====
The ''short-term scheduler'' (also known as the ''CPU scheduler'') decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clock [[interrupt]], an I/O interrupt, an operating [[system call]] or another form of [[Signal (IPC)|signal]]. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers{{snd}} a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as voluntary or co-operative), in which case the scheduler is unable to force processes off the CPU.
A preemptive scheduler relies upon a [[programmable interval timer]] which invokes an [[interrupt handler]] that runs in [[kernel mode]] and implements the scheduling function.
===={{Anchor|DISPATCHER}}Dispatcher====
Another component that is involved in the CPU-scheduling function is the ''dispatcher'', which is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call. The functions of a dispatcher involve the following:
* [[Context switch]]es, in which the dispatcher saves the [[state (computer science)|state]] (also known as [[context (computing)|context]]) of the process or thread that was previously running; the dispatcher then loads the initial or previously saved state of the new process.
* Switching to user mode.
* Jumping to the proper ___location in the user program to restart that program indicated by its new state.
The dispatcher should be as fast as possible, since it is invoked during every process switch. During the context switches, the processor is virtually idle for a fraction of time, thus unnecessary context switches should be avoided. The time it takes for the dispatcher to stop one process and start another is known as the ''dispatch latency''.
==Scheduling disciplines==
The main purposes of scheduling algorithms are to minimize [[resource starvation]] and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them.
In [[packet-switched]] [[computer networks]] and other [[statistical multiplexing]], the notion of a '''scheduling algorithm''' is used as an alternative to [[FIFO (computing and electronics)|first-come first-served]] queuing of data packets.
The simplest best-effort scheduling algorithms are [[round-robin scheduling|round-robin]], [[fair queuing]] (a [[max-min fair]] scheduling algorithm), [[proportional-fair scheduling]] and [[maximum throughput scheduling|maximum throughput]]. If differentiated or guaranteed [[quality of service]] is offered, as opposed to best-effort communication, [[weighted fair queueing]] may be utilized.
In advanced packet radio wireless networks such as the [[HSDPA]] (High-Speed Downlink Packet Access) [[3.5G]] cellular system, '''channel-dependent scheduling''' may be used to take advantage of [[channel state information]]. If the channel conditions are favourable, the [[throughput]] and [[system spectral efficiency]] may be increased. In even more advanced systems such as [[LTE Advanced|LTE]], the scheduling is combined with channel-dependent packet-by-packet [[dynamic channel allocation]], or by assigning [[OFDMA]] multi-carriers or other [[frequency-___domain equalization]] components to the users that can best utilize them.<ref name=Miao>{{cite book|author1=Guowang Miao|author2=Jens Zander|author3=Ki Won Sung|author4=Ben Slimane|title=Fundamentals of Mobile Data Networks|publisher=[[Cambridge University Press]]|isbn=978-1107143210|year=2016}}</ref>
===First come, first served===
{{Main|First come, first served}}
[[File:Thread pool.svg|thumb|400px|A sample [[thread pool]] (green boxes) with a queue (FIFO) of waiting tasks (blue) and a queue of completed tasks (yellow)]]
''First in, first out'' ([[FIFO (computing and electronics)|FIFO]]), also known as ''first come, first served'' (FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue. This is commonly used for a '''task queue''', for example as illustrated in this section.
* Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
* Throughput can be low, because long processes can be holding the CPU, causing short processes to wait for a long time (known as the ''convoy effect'').
* No starvation, because each process gets a chance to be executed after a definite time.
* [[Turnaround time]], waiting time and response time depend on the order of process arrival and can be high for the same reasons above.
* No prioritization occurs, thus this system has trouble meeting process deadlines.
* The lack of prioritization means that as long as every process eventually completes, there is no starvation. In an environment where some processes might not complete, there can be starvation.
* It is based on queuing.
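The queue behaviour described above can be sketched in a few lines of Python (the tasks here are hypothetical placeholders; this is an illustration of the discipline, not any operating system's actual implementation):

<syntaxhighlight lang="python">
from collections import deque

ready_queue = deque()                    # FIFO: arrivals join at the back

def submit(task):
    ready_queue.append(task)

def run_all():
    while ready_queue:
        task = ready_queue.popleft()     # oldest submission runs first, never preempted
        task()

submit(lambda: print("task 1"))
submit(lambda: print("task 2"))
run_all()
</syntaxhighlight>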
===Earliest deadline first===
{{Main|Earliest deadline first scheduling}}
{{See also|Deadline-monotonic scheduling}}
Earliest deadline first (EDF) or ''least time to go'' is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, new task is released, etc.), the queue will be searched for the process closest to its deadline, which will be the next to be scheduled for execution.
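A minimal illustrative sketch of this selection rule, using a priority queue keyed on absolute deadline (the task names and deadlines are hypothetical):

<syntaxhighlight lang="python">
import heapq

ready = []                                   # min-heap ordered by absolute deadline
for deadline, task in [(30, "sensor read"), (10, "motor control"), (20, "logging")]:
    heapq.heappush(ready, (deadline, task))

while ready:                                 # at each scheduling event, the process
    deadline, task = heapq.heappop(ready)    # closest to its deadline runs next
    print(f"run {task} (deadline {deadline})")
</syntaxhighlight>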
===Shortest remaining time first===
{{Main|Shortest remaining time}}
Similar to [[shortest job first]] (SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advanced knowledge or estimations about the time required for a process to complete.
* To use this policy we should have at least two processes of different priority
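The following single-CPU simulation sketch illustrates the strategy (hypothetical arrival and burst times; as noted above, the algorithm assumes remaining processing times are known in advance):

<syntaxhighlight lang="python">
import heapq

jobs = sorted([(0, 7), (2, 4), (4, 1)])       # (arrival_time, burst_time)
ready, t, i = [], 0, 0
while ready or i < len(jobs):
    if not ready:
        t = max(t, jobs[i][0])                # CPU idles until the next arrival
    while i < len(jobs) and jobs[i][0] <= t:  # admit everything that has arrived
        heapq.heappush(ready, (jobs[i][1], jobs[i][0]))   # keyed by remaining time
        i += 1
    remaining, arrived = heapq.heappop(ready) # least remaining time runs
    next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
    step = min(remaining, next_arrival - t)   # run until done or a new job arrives
    t += step
    if remaining > step:                      # an arrival occurred: re-queue, and let
        heapq.heappush(ready, (remaining - step, arrived))  # the heap pick the shortest
    else:
        print(f"job that arrived at {arrived} finished at t={t}")
</syntaxhighlight>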
===Fixed priority pre-emptive scheduling===
{{Main|Fixed-priority pre-emptive scheduling}}
The operating system assigns a fixed priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower-priority processes get interrupted by incoming higher-priority processes.
* Overhead is not minimal, nor is it significant.
* FPPS has no particular advantage in terms of throughput over FIFO scheduling.
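A sketch of the core dispatching rule (hypothetical task names; lower number means higher priority; the act of preemption itself, i.e. interrupting a running task, is not modelled here):

<syntaxhighlight lang="python">
import heapq

ready = []                                   # min-heap: lower number = higher priority

def make_ready(priority, task):
    heapq.heappush(ready, (priority, task))

def pick_next():
    return heapq.heappop(ready)[1] if ready else None

make_ready(3, "background backup")
make_ready(1, "interrupt handler thread")
make_ready(2, "user interface")
while (task := pick_next()) is not None:
    print("dispatch:", task)                 # always the highest-priority ready task
</syntaxhighlight>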
===Round-robin scheduling===
{{Main|Round-robin scheduling}}
The scheduler assigns a fixed time unit per process, and cycles through them. If a process completes within that time slice, it is terminated; otherwise, it is rescheduled after giving a chance to all other processes.
* RR scheduling involves extensive overhead, especially with a small time unit.
* Balanced throughput between FCFS/FIFO and SJF/SRTF: shorter jobs are completed faster than in FIFO, and longer processes are completed faster than in SJF.
* Good average response time, waiting time is dependent on number of processes, and not average process length.
* Because of high waiting times, deadlines are rarely met in a pure RR system.
* Starvation can never occur, since no priority is given. Order of time unit allocation is based upon process arrival time, similar to FIFO.
* If the time slice is large, it behaves like FCFS/FIFO; if it is short, it approaches SJF/SRTF (see the sketch below).
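A minimal simulation of the discipline (hypothetical process names and remaining times):

<syntaxhighlight lang="python">
from collections import deque

QUANTUM = 3                                   # fixed time slice per turn
ready = deque([("A", 7), ("B", 4), ("C", 2)]) # (process, remaining_time)

t = 0
while ready:
    name, remaining = ready.popleft()
    run = min(QUANTUM, remaining)
    t += run
    if remaining > run:
        ready.append((name, remaining - run)) # back of the queue; others get a turn
    else:
        print(f"{name} finished at t={t}")
</syntaxhighlight>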
===Multilevel queue scheduling===
{{Main|Multilevel queue}}
This is used for situations in which processes are easily divided into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. It is very useful for [[shared memory]] problems.
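A sketch of the dispatching rule for two such levels (hypothetical task names; real implementations add more levels, and a separate scheduling policy within each level):

<syntaxhighlight lang="python">
from collections import deque

foreground = deque()   # interactive tasks: absolute priority over background work
background = deque()   # batch tasks: run only when no foreground work exists

def pick_next():
    if foreground:
        return foreground.popleft()
    if background:
        return background.popleft()
    return None

background.append("nightly report")
foreground.append("keypress handler")
while (task := pick_next()) is not None:
    print("run:", task)
</syntaxhighlight>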
===Work-conserving schedulers===
{{Main|Work-conserving scheduler}}
A [[work-conserving scheduler]] is a scheduler that always tries to keep the scheduled resources busy, if there are submitted jobs ready to be scheduled. In contrast, a non-work conserving scheduler is a scheduler that, in some cases, may leave the scheduled resources idle despite the presence of jobs ready to be scheduled.
===Scheduling optimization problems===
There are several scheduling problems in which the goal is to decide which job goes to which station at what time, such that the total [[makespan]] is minimized:
* [[Job-shop scheduling|Job shop scheduling]]{{snd}} there are {{mvar|n}} jobs and {{mvar|m}} identical stations. Each job should be executed on a single machine. This is usually regarded as an online problem.
* [[Open-shop scheduling]]{{snd}} there are {{mvar|n}} jobs and {{mvar|m}} different stations. Each job should spend some time at each station, in a free order.
* [[Flow-shop scheduling|Flow shop scheduling]]{{snd}} there are {{mvar|n}} jobs and {{mvar|m}} different stations. Each job should spend some time at each station, in a pre-determined order.
===Manual scheduling===
===Choosing a scheduling algorithm===
When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universal best scheduling algorithm, and many operating systems use extended or combinations of the scheduling algorithms above.
For example, [[Windows NT]]/XP/Vista uses a [[multilevel feedback queue]], a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms. In this system, threads can dynamically increase or decrease in priority depending on whether they have already been serviced or have been waiting extensively. Every priority level is represented by its own queue, with [[round-robin scheduling]] among the high-priority threads and [[FIFO (computing and electronics)|FIFO]] among the lower-priority ones. In this sense, response time is short for most threads, and short but critical system threads get completed very quickly. Since threads can only use one time unit of the round-robin in the highest-priority queue, starvation can be a problem for longer high-priority threads.
==Operating system process scheduler implementations==
The algorithm used may be as simple as [[round-robin scheduling|round-robin]], in which each process is given equal time (for instance 1 ms, usually between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A.
More advanced algorithms take into account process priority, or the importance of the process. This allows some processes to use more time than other processes. The kernel always uses whatever resources it needs to ensure proper functioning of the system, and so can be said to have infinite priority. In [[Symmetric multiprocessing|SMP]] systems, [[processor affinity]] is considered to increase overall system performance, even if it may cause a process itself to run more slowly. This generally improves performance by reducing [[Thrashing (computer science)|cache thrashing]].
===OS/360 and successors===
IBM [[OS/360]] was available with three different schedulers. The differences were such that the variants were often considered three different operating systems:
* The ''Single Sequential Scheduler'' option, also known as the ''Primary Control Program (PCP)'' provided sequential execution of a single stream of jobs.
* The ''Multiple Sequential Schedulers'' option, also known as ''Multiprogramming with a Fixed Number of Tasks'' (MFT), provided execution of multiple concurrent jobs, each in its own fixed-size memory partition.
* The ''Multiple Priority Schedulers'' option, also known as ''Multiprogramming with a Variable Number of Tasks'' (MVT), provided execution of multiple concurrent jobs in memory regions of variable size.
===Windows===
Very early [[MS-DOS]] and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. [[Windows 3.1x]] used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on the program to end or to tell the OS that it did not need the processor, so that the OS could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.
[[Windows NT]]-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through to 31, with priorities 0 through 15 being ''normal'' priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. 0 is reserved for the operating system. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level.
The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O-bounded processes and lowering that of CPU-bound processes, to increase the responsiveness of interactive applications.<ref>{{cite web|title=A Tale of Two Schedulers Windows NT and Windows CE |url=http://sriramk.com/schedulers.html |author=Sriram Krishnan}}</ref>

===Classic Mac OS and macOS===
Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called the ''blue task''. Those processes are scheduled cooperatively, using a [[round-robin scheduling]] algorithm; a process yields control of the processor to another process by explicitly calling a [[blocking (computing)|blocking function]] such as <code>WaitNextEvent</code>.
macOS uses a multilevel feedback queue, with four priority bands for threads{{snd}} normal, system high priority, kernel mode only, and real-time.<ref>{{cite web|url=https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html|title=Mach Scheduling and Thread Interfaces|website=developer.apple.com}}</ref>
===AIX===
In AIX Version 4 there are three possible values for thread scheduling policy:
* First In, First Out: Once a thread with this policy is scheduled, it runs to completion unless it is blocked, it voluntarily yields control of the CPU, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
* Round Robin: This is similar to the AIX Version 3 scheduler round-robin scheme based on 10&nbsp;ms time slices. When an RR thread has control at the end of the time slice, it moves to the tail of the queue of dispatchable threads of its priority. Only fixed-priority threads can have a Round Robin scheduling policy.
* OTHER: This policy is defined by POSIX1003.4a as implementation-defined. In AIX Version 4, this policy is defined to be equivalent to RR, except that it applies to threads with non-fixed priority. The recalculation of the running thread's priority value at each clock interrupt means that a thread may lose control because its priority value has risen above that of another dispatchable thread. This is the AIX Version 3 behavior.
Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure.
AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.<ref>[http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html#N100F6] {{webarchive|url=https://web.archive.org/web/20110811094049/http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/index.html |date=August 11, 2011}}</ref>
===Linux===
{{See also|Linux kernel#Scheduling}}
==== Linux 1.2 ====
Linux 1.2 used a [[round-robin scheduling]] policy.<ref name=":0">{{Cite web |last=Jones |first=M. |date=2018-09-18 |orig-date=first published on 2009-12-14 |title=Inside the Linux 2.6 Completely Fair Scheduler |url=https://developer.ibm.com/tutorials/l-completely-fair-scheduler/ |access-date=2024-02-07 |website=developer.ibm.com}}</ref>
==== Linux 2.2 ====
Linux 2.2 added scheduling classes and support for [[symmetric multiprocessing]] (SMP).<ref name=":0" />
[[File:Simplified Structure of the Linux Kernel.svg|thumb|upright=1.5|A highly simplified structure of the Linux kernel, which includes process schedulers, I/O schedulers, and [[Linux kernel packet scheduler|packet schedulers]] ]]
====Linux 2.4====
In [[Linux]] 2.4,<ref name=":0" /> an [[O(n) scheduler]] with a [[multilevel feedback queue]] with priority levels ranging from 0 to 140 was used; 0–99 are reserved for real-time tasks and 100–140 are considered [[nice (Unix)|nice]] task levels. For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms.{{citation needed|date=December 2011}} The scheduler ran through the [[run queue]] of all ready processes, letting the highest priority processes go first and run through their time slices, after which they will be placed in an expired queue. When the active queue is empty the expired queue will become the active queue and vice versa.
However, some enterprise [[Linux distributions]] such as [[SUSE Linux Enterprise Server]] replaced this scheduler with a backport of the [[O(1) scheduler]] (which was maintained by [[Alan Cox (computer programmer)|Alan Cox]] in his Linux 2.4-ac Kernel series) to the Linux 2.4 kernel used by the distribution.
====Linux 2.6.0 to Linux 2.6.22====
In versions 2.6.0 to 2.6.22, the kernel used an [[O(1) scheduler]] developed by [[Ingo Molnar]] and many other kernel developers during the Linux 2.5 development. For many kernel versions in this time frame, [[Con Kolivas]] developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers.
====Linux 2.6.23 to Linux 6.5====
Con Kolivas' work, most significantly his implementation of ''fair scheduling'' named ''Rotating Staircase Deadline'' (RSDL), inspired Ingo Molnár to develop the [[Completely Fair Scheduler]] (CFS) as a replacement for the earlier [[O(1) scheduler]], crediting Kolivas in his announcement.<ref>{{cite web
| last=Molnár
| first=Ingo
| author-link = Ingo Molnár
| title = [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]
| url = https://lwn.net/Articles/230501/
| website = [[LWN.net]]
| date=2007-04-13}}</ref> CFS is the first implementation of a fair queuing [[process scheduler]] widely used in a general-purpose operating system.<ref name="dwrr">{{cite web|url=http://happyli.org/tongli/papers/dwrr.pdf |title=Efficient and Scalable Multiprocessor Fair Scheduling Using Distributed Weighted Round-Robin |author1=Tong Li |author2=Dan Baumberger |author3=Scott Hahn}}</ref>
The CFS scheduler has a scheduling complexity of O(log ''N''), where ''N'' is the number of tasks in the [[run queue|runqueue]]. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log ''N'') operations, because the run queue is implemented as a [[red–black tree]].
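The following toy sketch illustrates the underlying idea{{snd}} always run the task with the smallest weighted virtual runtime{{snd}} but it is not kernel code: Python has no built-in red–black tree, so a binary heap (which also offers logarithmic reinsertion) stands in for the ordered run queue, and the task names and weights are hypothetical:

<syntaxhighlight lang="python">
import heapq

SLICE = 10.0
tasks = [("editor", 1.0), ("compiler", 2.0)]             # (name, weight)
queue = [(0.0, name, weight) for name, weight in tasks]  # keyed by virtual runtime
heapq.heapify(queue)

for _ in range(6):
    vruntime, name, weight = heapq.heappop(queue)        # smallest vruntime runs next
    print(f"run {name} (vruntime={vruntime:.1f})")
    vruntime += SLICE / weight               # heavier tasks accumulate vruntime slower,
    heapq.heappush(queue, (vruntime, name, weight))      # so they get more CPU time
</syntaxhighlight>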
The [[Brain Fuck Scheduler]] (BFS), also created by Con Kolivas, is an alternative to the CFS.
==== Linux 6.6 ====
In 2023, Peter Zijlstra proposed replacing CFS with an [[earliest eligible virtual deadline first scheduling]] (EEVDF) process scheduler.<ref>{{Cite web |title=EEVDF Scheduler May Be Ready For Landing With Linux 6.6 |url=https://www.phoronix.com/news/Linux-6.6-EEVDF-Likely |access-date=2023-08-31 |website=[[Phoronix]] |language=en}}</ref><ref>{{Cite web |title=EEVDF Scheduler Merged For Linux 6.6, Intel Hybrid Cluster Scheduling Re-Introduced |url=https://www.phoronix.com/news/Linux-6.6-EEVDF-Merged |access-date=2024-02-07 |website=www.phoronix.com |language=en}}</ref> The aim was to remove the need for CFS ''latency nice'' patches.<ref>{{Cite web |title=An EEVDF CPU scheduler for Linux [LWN.net] |url=https://lwn.net/Articles/925371/ |access-date=2023-08-31 |website=[[LWN.net]]}}</ref>
==== Linux 6.12 ====
Linux 6.12 added support for [[User space and kernel space|userspace]] scheduler extensions, also known as sched_ext.<ref>{{Cite web |title=Sched_ext Merged For Linux 6.12 - Scheduling Policies As BPF Programs |url=https://www.phoronix.com/news/Linux-6.12-Lands-sched-ext |access-date=2025-02-10 |website=www.phoronix.com |language=en}}</ref> These schedulers can be installed and replace the default scheduler.<ref>{{Cite web |title=Pluggable CPU schedulers - openSUSE Wiki |url=https://en.opensuse.org/Pluggable_CPU_schedulers |access-date=2025-02-10 |website=en.opensuse.org}}</ref>
===FreeBSD===
[[FreeBSD]] uses a multilevel feedback queue with priorities ranging from 0–255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads. Also, like Linux, it uses the active queue setup, but it also has an idle queue.<ref name="opensolaris-queue">{{cite web|title=Comparison of Solaris, Linux, and FreeBSD Kernels |url=http://cn.opensolaris.org/files/solaris_linux_bsd_cmp.pdf}}</ref>
===NetBSD===
[[NetBSD]] uses a multilevel feedback queue with priorities ranging from 0–223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered [[kernel space]], 96–128 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for [[software interrupt]]s.
===Solaris===
[[Solaris (operating system)|Solaris]] uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low-priority interrupts. Unlike Linux,<ref name="opensolaris-queue"/> when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely the fixed-priority class and the fair share class. The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair share scheduling class uses CPU ''shares'' to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.
===Summary===
{| class="wikitable sortable"
|-
! Operating System
! Preemption
! Algorithm
|-
| [[Linux kernel]] before 2.6.0
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| Linux kernel 2.6.0–2.6.23
Line 304 ⟶ 326:
| [[O(1) scheduler]]
|-
| Linux kernel 2.6.23–6.5
| {{Yes}}
| [[Completely Fair Scheduler]]
|-
| Linux kernel 6.6 and later
| {{Yes}}
| [[Earliest eligible virtual deadline first scheduling]] (EEVDF)
|-
| [[classic Mac OS]] pre-9
| {{No|None}}
| [[Cooperative scheduler]]
|-
| [[Mac OS 9]]
| Some
| Preemptive scheduler for MP tasks, and cooperative for processes and threads
|-
| [[macOS]]
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| [[NetBSD]]
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| [[Solaris (operating system)|Solaris]]
| {{Yes}}
| [[Multilevel feedback queue]]
|-
| [[Windows 3.1x]]
| {{No|None}}
| [[Cooperative scheduler]]
|-
| [[Windows 95]], [[Windows 98|98]], [[Windows Me|Me]]
| Half
| Preemptive scheduler for 32-bit processes, and cooperative for 16-bit processes
|-
| [[Windows NT]] (including 2000, XP, Vista, 7, and Server)
| {{Yes}}
| [[Multilevel feedback queue]]
|}
==See also==
{{Div col|colwidth=20em}}
* [[Activity selection problem]]
* [[Aging (scheduling)]]
* [[Automated planning and scheduling]]
* [[Cyclic executive]]
* [[Priority inversion]]
* [[Process states]]
* [[Queuing delay]]
* [[Rate-monotonic scheduling]]
* [[Scheduling (production processes)]]
* [[Stochastic scheduling]]
==References==
{{Refbegin}}
* {{cite book |author1=Błażewicz, Jacek |author2=Ecker, K.H. |author3=Pesch, E. |author4=Schmidt, G. |author5=Węglarz, J. | title=Scheduling Computer and Manufacturing Processes | publisher=Springer | ___location=Berlin | year=2001 | isbn=3-540-41931-4}}
* {{cite book | author=Stallings, William | title=Operating Systems Internals and Design Principles | publisher=Prentice Hall}}
* [https://github.com/bdaehlie/linux-cpu-scheduler-docs/ Information on the Linux 2.6 O(1)-scheduler]
{{Refend}}
==Further reading==
*[http://www.cs.sunysb.edu/~algorith/files/scheduling.shtml Brief discussion of Job Scheduling algorithms]
*[https://web.archive.org/web/20060613130106/http://oreilly.com/catalog/linuxkernel/chapter/ch10.html Understanding the Linux Kernel: Chapter 10 Process Scheduling]
*Peter Brucker, Sigrid Knust. Complexity results for scheduling problems [http://www.mathematik.uni-osnabrueck.de/research/OR/class/]
*[http://rtime.felk.cvut.cz/scheduling-toolbox TORSCHE Scheduling Toolbox for Matlab] is a toolbox of scheduling and graph algorithms.
*[https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/43438.pdf Large-scale cluster management at Google with Borg]
{{DEFAULTSORT:Scheduling (Computing)}}
[[Category:Operations research]]
[[Category:Planning]]
[[Category:Scheduling algorithms]]