Two-level scheduling

'''Two-level scheduling''' is a [[computer science]] term describing a method for performing process [[Scheduling (computing)|scheduling]] more efficiently when it involves [[swapped out]] [[process (computing)|processes]].
 
Consider this problem: a system contains 50 runnable processes, all with equal priority, but the system's [[computer storage|memory]] can only hold 10 of them at a time. Therefore, there will always be 40 processes swapped out, written to [[virtual memory]] on the [[hard disk]]. Swapping a process out, or back in, takes 50 ms.
 
With straightforward [[Round-robin scheduling]], a process would need to be swapped in at almost every [[context switch]], because round-robin always picks the process that has waited longest to run, which is rarely among the 10 most recently run processes still in memory. Even choosing the next process at random would leave an 80% probability (40/50) that it is currently swapped out, and whenever a process is swapped in, another must be swapped out to make room. Swapping in and out of memory is costly, and the scheduler would waste much of its time doing unneeded swaps.
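
As a rough illustration of the cost (a simulation sketch with assumed parameters, not taken from the cited source), the following Python fragment runs strict round-robin over the 50 processes with room for only 10 in memory; the eviction policy and the number of simulated context switches are invented for the example.

<syntaxhighlight lang="python">
from collections import deque

NUM_PROCESSES = 50   # runnable processes, all equal priority
MEMORY_SLOTS = 10    # processes that fit in memory at once
SWAP_MS = 50         # cost to swap one process in; swapping one out costs the same
SWITCHES = 1000      # context switches to simulate (arbitrary)

in_memory = deque(range(MEMORY_SLOTS))  # assume processes 0..9 start out resident
swaps = 0

for step in range(SWITCHES):
    nxt = step % NUM_PROCESSES          # strict round-robin order
    if nxt in in_memory:
        in_memory.remove(nxt)           # keep the deque in most-recently-run order
        in_memory.append(nxt)
    else:
        in_memory.popleft()             # evict the least recently run process
        in_memory.append(nxt)
        swaps += 1                      # this switch pays for a swap-out and a swap-in

overhead_ms = swaps * 2 * SWAP_MS
print(f"{swaps}/{SWITCHES} context switches needed a swap, "
      f"about {overhead_ms} ms spent purely on swapping")
</syntaxhighlight>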
 
That is where two-level scheduling enters the picture. It uses two different schedulers: a '''lower-level scheduler''', which can only select among the processes currently in memory (it could, for example, be a round-robin scheduler), and a '''higher-level scheduler''', whose only concern is swapping processes in and out of memory. The higher-level scheduler runs much less often than the lower-level one, since swapping takes so much time.
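
A minimal sketch of this structure follows, assuming round-robin at the lower level and a simple "resident longest out, waited longest on disk in" rule at the higher level; the class, the SWAP_PERIOD value and the number of processes moved per decision are assumptions made for illustration, not code from the cited reference.

<syntaxhighlight lang="python">
from collections import deque

MEMORY_SLOTS = 10   # processes that fit in memory (from the example above)
SWAP_PERIOD = 100   # lower-level quanta between higher-level decisions (assumed)

class TwoLevelScheduler:
    """Illustrative sketch only; not code from the cited reference."""

    def __init__(self, processes):
        self.run_queue = deque(processes[:MEMORY_SLOTS])    # resident, round-robin order
        self.swapped_out = deque(processes[MEMORY_SLOTS:])  # on disk, oldest first
        self.swapped_in_at = {p: 0 for p in self.run_queue}
        self.ticks = 0

    def pick_next(self):
        """Lower-level scheduler: plain round-robin over resident processes."""
        self.ticks += 1
        if self.ticks % SWAP_PERIOD == 0:
            self._swap_level()          # the higher level runs far less often
        proc = self.run_queue.popleft()
        self.run_queue.append(proc)     # back of the round-robin queue
        return proc

    def _swap_level(self, count=2):
        """Higher-level scheduler: evict the processes that have been resident
        the longest and bring in those that have waited the longest on disk."""
        for _ in range(min(count, len(self.swapped_out))):
            victim = min(self.run_queue, key=self.swapped_in_at.get)
            self.run_queue.remove(victim)
            del self.swapped_in_at[victim]
            self.swapped_out.append(victim)
            newcomer = self.swapped_out.popleft()
            self.swapped_in_at[newcomer] = self.ticks
            self.run_queue.append(newcomer)

sched = TwoLevelScheduler(list(range(50)))
first = sched.pick_next()   # round-robin among the 10 resident processes
</syntaxhighlight>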
Thus, the higher-level scheduler selects among the processes in memory those that have run for a long time and swaps them out, replacing them with processes on disk that have not run for a long time. Exactly how it selects processes is up to the implementation of the higher-level scheduler; a compromise has to be made involving the following variables (a schematic example of weighing them follows the list):
 
* [[Response time (technology)|Response time]]: A process should not be swapped out for too long; otherwise some other process (or the user) will have to wait needlessly long. If this variable is not considered, [[resource starvation]] may occur and a process may not complete at all.
* Size of the process: Larger processes must be subject to fewer swaps than smaller ones, because they take longer to swap. Because they are larger, fewer other processes can share memory with them while they are resident.
* Priority: The higher the priority of the process, the longer it should stay in memory so that it completes faster.
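
How these variables are traded off is implementation-specific. Purely as a hypothetical illustration (the record fields and weights are invented for the example), a higher-level scheduler might rank swapped-out processes with a weighted score and swap in the best candidates:

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class ProcessInfo:           # hypothetical bookkeeping record, invented for the example
    pid: int
    ms_swapped_out: int      # how long the process has been waiting on disk
    size_kb: int             # larger processes are more expensive to swap
    priority: int            # higher value means more important

def swap_in_score(p: ProcessInfo,
                  wait_weight: float = 1.0,
                  size_weight: float = 0.5,
                  priority_weight: float = 100.0) -> float:
    """Prefer processes that have waited long (avoids starvation), penalise
    large processes, and favour high priority. Weights are arbitrary."""
    return (wait_weight * p.ms_swapped_out
            - size_weight * p.size_kb
            + priority_weight * p.priority)

candidates = [ProcessInfo(1, 4000, 800, 2), ProcessInfo(2, 9000, 4000, 1)]
best = max(candidates, key=swap_in_score)   # swapped in next by the higher-level scheduler
</syntaxhighlight>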
 
==References==
* [[Andrew S. Tanenbaum|Tanenbaum]], [[Albert Woodhull]], ''Operating Systems: Design and Implementation'', p. 92
 
{{Processor scheduling}}
 
[[Category:Processor scheduling algorithms]]
 
 
{{comp-sci-stub}}