{{WikiProject banner shell|class=Stub|1=
{{WikiProject Computing|science=yes}}
}}
==Untitled==
In my opinion it is wrong to say ''advantage of making sure no task hogs the processor for any time longer than the time slice'': fixed priority means only the highest-priority thread will run.
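
A minimal sketch of that point (assuming POSIX threads on a single-CPU Linux system; the priorities and thread names are illustrative, not from any source): under the fixed-priority SCHED_FIFO policy there is no time slice, so a CPU-bound thread at the highest priority simply starves everything below it.

<syntaxhighlight lang="c">
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Runs forever without blocking; under SCHED_FIFO nothing at a lower
   priority ever preempts it, because there is no time slice. */
static void *busy(void *arg)
{
    for (;;) { }
    return NULL;
}

/* Lower fixed priority: on a single CPU this never gets to run
   while the busy thread is runnable. */
static void *starved(void *arg)
{
    puts("never printed while busy() is runnable");
    return NULL;
}

static void start_fifo_thread(pthread_t *t, int prio, void *(*fn)(void *))
{
    pthread_attr_t a;
    struct sched_param p = { .sched_priority = prio };  /* illustrative values */
    pthread_attr_init(&a);
    pthread_attr_setinheritsched(&a, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&a, SCHED_FIFO);  /* fixed priority, no slicing */
    pthread_attr_setschedparam(&a, &p);
    pthread_create(t, &a, fn, NULL);  /* needs root / CAP_SYS_NICE to succeed */
}

int main(void)
{
    pthread_t hi, lo;
    start_fifo_thread(&hi, 20, busy);
    start_fifo_thread(&lo, 10, starved);
    pthread_join(hi, NULL);  /* never returns: busy() never blocks */
    return 0;
}
</syntaxhighlight>

(Compile with -pthread; on a multi-core machine the lower-priority thread would run on another core, so the starvation is only visible with a single CPU or with both threads pinned to one core.)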
Why are we talking about "real-time"? If the system has to allocate the CPU to a task for some time (since there are fewer CPUs than processes/tasks), it's more "time slicing" ... (http://en.wikipedia.org/wiki/Preemption_(computing)#Time_slice ) [[User:Zenkutsu|Zenkutsu]] ([[User talk:Zenkutsu|talk]]) 05:52, 20 March 2013 (UTC)
:Perhaps I misunderstand you, but I think you may have taken too narrow a definition of "real time". I think of time slicing in situations where the processor allocation is rotated between some number of very long computations, each with run times much longer than a slice, so that they all appear to be making progress, all at a reduced rate. I think of real time as implying that a computation is being performed in synchronism with events in the real world outside the computer. Consider what I think is a fairly common design, a digital simulation of an analog controller in an embedded system. It takes samples and provides updates frequently enough to satisfy the [[Nyquist–Shannon sampling theorem|anti-aliasing criteria]] of the analog system. This "task", the servo simulation thread, might never end, but the "process" it supports runs in real time, performs each update on schedule, then waits to be triggered again for the next update. Just like with a priority interrupt system, the thread can be suspended temporarily to allow even more time-critical work to run, and, when it is time for it to perform an update, it can temporarily suspend any less time-critical work. The amount of time it gets is not determined by an independent "time slice" parameter; it is determined by the time it takes to complete the next update, so that it keeps in sync with the outside world (to which it appears to be an analog circuit). The lowest-priority task, or thread, is ultimately the one that gives up some of its allocated time to let this happen. Instead of hardware interrupts, we might have software events, semaphores say, that trigger the updates, although in this example they would be synchronized to a time source or external event of some sort. By the way, I also think that most systems that employ time slicing only guarantee that a slice is the longest continuous time a computation will get, and that a computation generally also loses control whenever it needs to wait for an external event such as receiving the next bucket of data from a file.
:--[[User:AJim|AJim]] ([[User talk:AJim|talk]]) 20:57, 20 March 2013 (UTC)
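
A minimal sketch in C of the update-on-schedule pattern described above (POSIX clock API; the 1 ms period and the perform_servo_update function are hypothetical stand-ins, not from any source): the thread blocks until each absolute release time, performs its update, and blocks again, so the processor time it consumes is set by the work it has to do, not by a separate time-slice parameter.

<syntaxhighlight lang="c">
#include <time.h>

#define PERIOD_NS 1000000L               /* 1 ms update period (illustrative) */

extern void perform_servo_update(void);  /* hypothetical application code */

/* Helper: advance a timespec by ns nanoseconds, carrying into seconds. */
static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

void servo_thread(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        perform_servo_update();          /* runs to completion; may itself be
                                            preempted by more time-critical work */
        timespec_add_ns(&next, PERIOD_NS);
        /* Block until the next absolute release time, not for a fixed slice:
           the wakeups stay in sync with the outside world. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
</syntaxhighlight>

Using an absolute deadline (TIMER_ABSTIME) rather than a relative sleep keeps the period from drifting by however long each update happens to take, which matches the "performs each update on schedule" behavior described above.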
== advantages of this model ==