The definition of a response time also requires that the applications communicate with WLM. If this is not possible, a relative speed measure, named execution velocity, is used to describe the end user's expectation of the system.
 
{| class="wikitable floatleft"
|-
! Definition of Execution Velocity
|-
| <math>\text{Execution Velocity}=100\cdot\frac{\text{Total Using Samples}}{\text{Total Using Samples}+\text{Total Delay Samples}}</math>
|}
This measurement is based on system states which are continuously collected. The system states describe whether a work request is currently using a system resource or must wait for it because it is in use by other work; the latter is named a delay state. The execution velocity is the quotient of all using states over all productive states (using and delay states), multiplied by 100. This measurement does not require any communication between the application and the WLM component, but it is also more abstract than a response time goal.
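For example (with hypothetical sample counts), if 50 using samples and 150 delay samples were collected for a service class, its execution velocity would be

<math>\text{Execution Velocity}=100\cdot\frac{50}{50+150}=25.</math>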
 
Finally, the system administrator assigns an importance to each service class to tell WLM which service classes should get preferred access to system resources if the system load is too high to allow all work to execute. The service classes and goal definitions are organized in service policies, together with other constructs for reporting and further control, and saved as a service definition for access by WLM. The active service definition is saved on a couple data set, which allows all z/OS systems of a [[IBM Parallel Sysplex|Parallel Sysplex]] cluster to access it and to execute towards the same performance goals.
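The following minimal sketch (in Python, with invented class names, goal values and field names; WLM itself is not configured through code) only illustrates how a service policy groups service classes, each with a goal and an importance:

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ServiceClass:
    name: str
    importance: int                             # 1 (most important) to 5 (least important)
    response_time_goal: Optional[float] = None  # in seconds, if a response time goal is used
    velocity_goal: Optional[int] = None         # 1 to 99, if an execution velocity goal is used

# A hypothetical service policy: online transactions are preferred over batch work.
service_policy = [
    ServiceClass("ONLINE",  importance=1, response_time_goal=0.5),
    ServiceClass("BATCHHI", importance=3, velocity_goal=40),
    ServiceClass("BATCHLO", importance=5, velocity_goal=10),
]
</syntaxhighlight>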
 
WLM is a closed control mechanism which continuously collects data about the work and the system resources, compares the collected and aggregated measurements with the user definitions from the service definition, and adjusts the access of the work to the system resources if the user expectations have not been achieved. This mechanism runs continuously at pre-defined time intervals. In order to compare the collected data with the goal definitions, a performance index is calculated.
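The following sketch (hypothetical Python, not actual WLM code; all names and the interval length are assumptions) outlines such a closed control loop: measurements are collected, a performance index is computed per service class, and at most one adjustment is made per decision interval:

<syntaxhighlight lang="python">
import time

POLICY_INTERVAL = 10  # seconds; illustrative value for the decision interval

def control_loop(service_classes, collect_samples, compute_pi, project_change, apply_change):
    """Simplified outline of a goal-oriented control loop (illustrative only)."""
    while True:
        samples = collect_samples()                      # using/delay states, response times
        pi = {sc.name: compute_pi(sc, samples) for sc in service_classes}

        # Consider service classes that miss their goal (PI > 1), most important first,
        # and within the same importance the one that misses its goal the most.
        missing = sorted((sc for sc in service_classes if pi[sc.name] > 1.0),
                         key=lambda sc: (sc.importance, -pi[sc.name]))

        for candidate in missing:
            change = project_change(candidate, samples)  # forecast the effect of a resource shift
            if change is not None and change.is_beneficial:
                apply_change(change)                     # at most one change per decision interval
                break

        time.sleep(POLICY_INTERVAL)
</syntaxhighlight>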
 
{| class="wikitable floatleft"
|-
! Definition of Performance Index
|-
| <math>\text{for Response Time: }PI=\frac{\text{Actual Achieved Response Time}}{\text{Response Time Goal}}</math>
<br />
<math>\text{for Execution Velocity: }PI=\frac{\text{Execution Velocity Goal}}{\text{Achieved Execution Velocity}}</math>
|}
The performance index for a service class is a single number which tells whether the goal definition has been met, overachieved or missed. WLM modifies the access of the service classes to the system resources based on the achieved performance index and the importance. To do this, it uses the collected data to project the possibility and the result of a change. The change is executed only if the forecast concludes that it is beneficial for the work, based on the defined customer expectations. WLM uses sampled data covering a period of 20 seconds to 20 minutes in order to have a statistically relevant basis for its calculations. Also, in each decision interval a change is performed for the benefit of only one service class, in order to maintain a controlled and predictable system.
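A performance index of 1 means that the goal is exactly met, values below 1 indicate overachievement, and values above 1 indicate that the goal was missed. For example (with hypothetical values), a service class with a response time goal of 0.5 seconds that achieves 0.4 seconds, and a service class with an execution velocity goal of 40 that only achieves an execution velocity of 20, yield

<math>PI=\frac{0.4}{0.5}=0.8 \qquad\text{and}\qquad PI=\frac{40}{20}=2.0.</math>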
 
WLM controls the access of the work to the system processors, the I/O units and the system storage, and it starts and stops processes for work execution. The access to the system processors, for example, is controlled by a dispatch priority which defines a relative ranking between the units of work which want to execute. The same dispatch priority is assigned to all units of work which were classified into the same service class. As already stated, the dispatch priority is not fixed and is not simply derived from the importance of the service class. It changes based on goal achievement, system utilization and the demand of the work for the system processors. Similar mechanisms exist for controlling all other system resources. This way of controlling the access of work to system resources by z/OS Workload Manager is named goal-oriented workload management. It is in contrast to resource-entitlement-based workload management, which defines a much more static relationship describing how work can access the system resources. Resource-entitlement-based workload management is found, for example, on larger [[UNIX]] operating systems.
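The following sketch (hypothetical Python, with invented names; not actual WLM logic) illustrates the difference between a ranking that follows goal achievement and a fixed, entitlement-style assignment:

<syntaxhighlight lang="python">
def assign_dispatch_priorities(service_classes, performance_index):
    """Goal-oriented sketch: the ranking follows goal achievement, not a fixed entitlement.

    All units of work classified into the same service class share one dispatch priority.
    A class that misses its goal (PI > 1) can be ranked above a less important class
    that overachieves (PI < 1), so the ranking changes from interval to interval.
    """
    ranked = sorted(service_classes,
                    key=lambda sc: (sc.importance, -performance_index[sc.name]))
    # The first class in the ranking receives the highest dispatch priority value.
    return {sc.name: len(ranked) - position for position, sc in enumerate(ranked)}

# A resource-entitlement-based scheme, by contrast, would keep a static mapping such as
# {"ONLINE": 250, "BATCH": 100}, independent of the measured goal achievement.
</syntaxhighlight>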