Talk:Load (computing)

== Error in example of load average? ==
 
''For example a load average of "3.73 7.98 0.50" on a single CPU system can be interpreted as... the CPU was overloaded by 373%''
 
{{unsigned|Vmardian|13:25, 21 November 2006}}
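(For reference while reading the debate below: the three quoted figures are the 1-, 5- and 15-minute load averages, in that order, which on Linux is also the field order in /proc/loadavg. A minimal Python parse of the quoted numbers, just to pin down which figure is which; the variable names are mine, not from the article:)

```python
# Parse a load-average string like the one quoted above.
# On Linux these come from /proc/loadavg; the first three fields are the
# 1-, 5- and 15-minute exponentially damped averages of the run-queue length.
sample = "3.73 7.98 0.50"
one_min, five_min, fifteen_min = (float(f) for f in sample.split())
print(one_min, five_min, fifteen_min)  # 3.73 7.98 0.5
```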
 
:The author did OK until they said "For example a load average of "3.73 7.98 0.50" on a single CPU system can be interpreted as: the CPU was overloaded by 373% (needed to do 373% as much work as it can do in a minute) during the last minute."
 
:This is NOT correct (which is why I deleted that section previously, but it reappeared).
 
:As the author correctly stated earlier, load is how many processes are waiting to run (plus the one actually running) on the system, e.g. if there are 9 waiting and 1 running, the load will be 10.
 
:As only one process can run at a time on a single-core processor (obviously multiple processors or cores will do better), the others have to wait, depending on a multitude of factors, from what the process is doing to its nice (priority) value, etc.
 
:But let's take a simple example, ignoring the overhead as the CPU switches tasks.
 
:You have 10 small, identical programs running on a single-core CPU. They take only 5% CPU when they run, as they are not CPU-intensive, but they are designed to run for 1 second. Only one can run at a time (it's a single CPU, single core, remember) - so what is the CPU usage (%) and the load?
 
:CPU utilization is 5% because only one can run at a time, but the load is 10.
 
:Don't forget that multi-tasking is an illusion of programs running concurrently. Each only gets a share of the CPU before being switched out for other waiting tasks.
 
:I was going to suggest trying a low-load application like a sound player, but when I started a second XMMS and played an MP3, the load dropped from 0.41 to 0.0 on my AMD64 4000+ notebook running pclinuxos 2.6.16.27.tex1 #1 Thu Aug 10 20:13:42 CDT 2006 i686 Mobile AMD Athlon(tm) 64 Processor 4000+ unknown GNU/Linux.
 
:So it looks like top or something else is broken. XMMS playing an MP3 takes 1.5% CPU, but as there are 143 tasks with X and KDE running, something is seriously wrong.
 
:Anyway, I hope you get the idea.
:15:38, 13 February 2007 (UTC)~~marrandy
 
::Your example is flawed. Running processes do not use only a percentage of the CPU; the running process is always using 100%. When we say that a process takes 5% of the CPU, we mean that, over a defined period, it is running for 5% of the time. Each of your 1-second processes with 5% utilisation is in reality running for 1/20 of a second and waiting for something (e.g. disk I/O or sleeping) for 19/20 of a second.
 
::The profile of the load depends on when those processes want their 1/20th-of-a-second slot. If they each do something for 1/20th of a second then sleep for 19/20ths, and you start them all at exactly the same time, the load will start at 10 (1 running, 9 waiting for the CPU), drop to 9 after 1/20th of a second (1 running, 8 waiting, 1 sleeping), then 8, and so on, until after 0.5 seconds all the processes are sleeping. In this case, the average of the load over a whole second is (10 + 9 + ... + 1)/20 = 2.75. If each process is designed to run in a different 1/20th of a second, e.g. the first process runs straight away, the second waits 1/20th of a second then runs, the third waits 2/20ths of a second then runs, etc., the load will be 1 while there is a process running and 0 while there isn't. Over the second, the average load will be 0.5 in this case.
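::The two profiles above can be checked with a small Python simulation (the slice bookkeeping and names are my own sketch, not from the discussion; it assumes one second divided into twenty 1/20 s slices and one CPU slot per slice):

```python
# Sketch of the two scheduling profiles: 10 processes on one core,
# each needing a single 1/20 s slot of CPU within one second.
SLICES = 20   # one second divided into 1/20 s slices
NPROCS = 10

def average_load(start_slice):
    """start_slice[i] = slice at which process i becomes runnable."""
    remaining = [1] * NPROCS        # CPU slices each process still needs
    load_samples = []
    for t in range(SLICES):
        runnable = [i for i in range(NPROCS)
                    if start_slice[i] <= t and remaining[i] > 0]
        load_samples.append(len(runnable))   # load = running + waiting
        if runnable:
            remaining[runnable[0]] -= 1      # one process runs this slice
    return sum(load_samples) / SLICES

# All ten start at once: load counts down 10, 9, ..., 1, then 0.
simultaneous = average_load([0] * NPROCS)
# Each starts in its own slice: load is 1 for half the second, then 0.
staggered = average_load(list(range(NPROCS)))
print(simultaneous, staggered)  # 2.75 0.5
```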
 
::This is why your media player example appears to be broken. Linux counts a process waiting for disk I/O towards the load average, even though, if a processor became available, the process would not be able to run. I imagine that one media player spends a lot of its time waiting for the disk to deliver blocks from the media file. A second media player probably introduces contention somewhere else, where the waiting processes would not be counted towards the load. Either that, or the media files used in the second test were still in the disk cache from the first test, thus massively reducing the required I/O wait time.
::[[User:Jeremypnet|Jeremypnet]] 13:09, 12 March 2007 (UTC)
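::The point about I/O wait is observable: on Linux, tasks in state R (runnable) and D (uninterruptible sleep, typically disk I/O) both contribute to the load average. A rough Python scan of /proc counting such tasks (the parsing approach is my own sketch; it returns 0 on systems without /proc):

```python
# Count tasks that would contribute to the Linux load average:
# state R (runnable) or D (uninterruptible sleep, usually disk I/O).
import glob

def load_contributors():
    counted = 0
    for path in glob.glob("/proc/[0-9]*/stat"):
        try:
            with open(path) as f:
                # /proc/<pid>/stat is "pid (comm) state ..."; the comm field
                # may contain spaces, so split on the last ")" first.
                state = f.read().rsplit(")", 1)[1].split()[0]
        except (OSError, IndexError):
            continue  # process exited while we were scanning
        if state in ("R", "D"):
            counted += 1
    return counted

print(load_contributors())
```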
 
 
:::Why does a load of 7.9 over 5 minutes mean the processor is in use half the time? That could make some sense for 15 minutes. It's clear this example needs revising; I'll see if I can find a new example online.
:::[[user:theonhighgod|theonhighgod]] 16/03/07
 
 