<math>\mu_j X_j = \sum_{i=1}^M \mu_i X_i p_{ij}\quad\text{ for }j=1,\ldots,M.</math>
 
''G''(''N'') is a normalizing constant chosen so that the sum of all <math>\tbinom{N+M-1}{M-1}</math> values of <math>\mathbb P(n_1,n_2,\cdots,n_M) </math> will be equal to 1.<ref name="buzen-1973" /> Buzen's algorithm is an efficient method to compute G(''N'').<ref name="buzen-1973" />
 
 
==Algorithm description==
 
The individual terms that must be added together to compute G(''N'') all have the form <math>X_1^{n_1} X_2^{n_2} X_3^{n_3} \cdots X_M^{n_M}</math>. Note that this set of terms can be partitioned into two groups. The first group comprises all terms for which the exponent of ''X''<sub>''M''</sub> is greater than or equal to 1. This implies that one factor of ''X''<sub>''M''</sub> can be factored out of each of these terms.
 
After factoring out <math> \left( X_M \right)</math>, a surprising result emerges: the modified terms in the first group are identical to the terms used to compute the normalizing constant for the same network with one customer removed.<ref>{{cite journal |last1=Denning |first1=Peter J. |title=Rethinking Randomness: An interview with Jeff Buzen, Part I |journal=Ubiquity |date=24 August 2016 |volume=2016 |issue=August |pages=1:1–1:17 |doi=10.1145/2986329 |url=https://dl.acm.org/doi/10.1145/2986329}}</ref>  Thus, the sum of the terms in the first group can be written as “''X''<sub>''M''</sub> times G(''N'' -1)”.
 
Next, consider the second group. The exponent of ''X''<sub>''M''</sub> for every term in this group is zero. In effect, service facility ''M'' disappears from all terms in this group (since ''X''<sub>''M''</sub><sup>0</sup> reduces in every case to a factor of 1). This leaves the total number of customers at the remaining ''M'' − 1 service facilities equal to ''N''.
 
To express this result mathematically, assume that ''X''<sub>1</sub>, ''X''<sub>2</sub>, … ''X''<sub>''M''</sub> have been obtained for a given network with ''M'' service facilities. For any ''n'' ≤ ''N'' and ''m'' ≤ ''M'', define g(''n'', ''m'') as the normalizing constant for a network with ''n'' customers, ''m'' service facilities (1, 2, … ''m''), and values of ''X''<sub>1</sub>, ''X''<sub>2</sub>, … ''X''<sub>''m''</sub> that match the first ''m'' members of the original sequence ''X''<sub>1</sub>, ''X''<sub>2</sub>, … ''X''<sub>''M''</sub>.
 
Given this definition, the normalizing constant G(''N'') in the Gordon-Newell theorem can now be re-written as g(''N'',''M'').
It also follows immediately that “X''<sub>M</sub>'' times G(''N'' -1)”, the sum of the terms in the first group, can be re-written as “X''<sub>M</sub>'' times g(''N'' -1,''M'' )”.   More importantly, the sum of the terms in the second group can now be written as g(''N'', ''M'' -1).
 
Since the combined sum of the terms in the first and second groups is equal to G(''N''),
 
G(''N'') = g(''N'', ''M'') = ''X''<sub>''M''</sub> g(''N'' − 1, ''M'') + g(''N'', ''M'' − 1)
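This recurrence, together with the boundary conditions g(0, ''m'') = 1 and g(''n'', 0) = 0 for ''n'' > 0, leads directly to an efficient implementation. As an illustrative sketch (the function and variable names below are not from the cited sources), the standard single-array form in Python overwrites g(''n'', ''m'' − 1) with g(''n'', ''m'') in place:

```python
def buzen_g(X, N):
    """Compute G(0), ..., G(N) for visit-ratio terms X = [X_1, ..., X_M]
    using the recurrence g(n, m) = X_m * g(n-1, m) + g(n, m-1).

    Returns a list G with G[n] = g(n, M)."""
    # Boundary conditions: g(0, m) = 1 for all m; g(n, 0) = 0 for n > 0.
    g = [1.0] + [0.0] * N
    for x in X:            # advance m from 1 to M
        for n in range(1, N + 1):
            # g[n-1] already holds g(n-1, m); g[n] still holds g(n, m-1)
            g[n] = g[n] + x * g[n - 1]
    return g
```

For a single facility (M = 1), the sketch reduces to G(''n'') = ''X''<sub>1</sub><sup>''n''</sup>, which is a convenient sanity check. The in-place update works because iterating ''n'' in ascending order ensures g[''n'' − 1] has already been promoted to the current value of ''m''.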
==Marginal distributions, expected number of customers==
 
The Gordon–Newell theorem enables analysts to determine the stationary probability associated with each individual state of a closed queueing network. These individual probabilities must then be added together to evaluate marginal probabilities such as P(''n''<sub>''i''</sub> ≥ ''k''), the probability that the total number of customers at service center ''i'' is greater than or equal to ''k'' (summed over all values of ''n''<sub>''i''</sub> ≥ ''k'' and, for each value of ''n''<sub>''i''</sub>, all possible ways the remaining ''N'' − ''n''<sub>''i''</sub> customers can be distributed across the other ''M'' − 1 service centers in the network).
 
Many of these marginal probabilities can be computed with minimal additional effort. This is easy to see for the case of P(''n''<sub>''i''</sub> ≥ ''k''). Clearly, ''X''<sub>''i''</sub> must be raised to the power of ''k'' or higher in every state where the number of customers at service center ''i'' is greater than or equal to ''k''. Thus (''X''<sub>''i''</sub>)<sup>''k''</sup> can be factored out from each of these probabilities, leaving a set of modified probabilities whose sum is given by G(''N'' − ''k'')/G(''N''). This observation yields the following simple and highly efficient result:
 
P(''n''<sub>''i''</sub> ≥ ''k'') = (''X''<sub>''i''</sub>)<sup>''k''</sup> G(''N'' − ''k'')/G(''N'')
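Once the values G(0), …, G(''N'') are in hand, this formula is a one-line computation. A minimal sketch (the helper name is illustrative, not from the sources; `G` is assumed to be a list with `G[n]` holding G(''n'')):

```python
def tail_prob(Xi, G, k, N):
    """P(n_i >= k) = X_i^k * G(N - k) / G(N), given precomputed G(0..N)."""
    return Xi ** k * G[N - k] / G[N]
```

Note that the case ''k'' = 0 correctly returns 1, since every state has at least zero customers at facility ''i''.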
 
This relationship can then be used to compute the [[marginal distribution]]s and [[expected value|expected]] number of customers at each service facility:

<math>\mathbb P(n_i = k) = \frac{X_i^k}{G(N)}[G(N-k) - X_i G(N-k-1)]\quad\text{ for }k=0,1,\ldots,N-1,</math>

<math>\mathbb P(n_i = N) = \frac{X_i^N}{G(N)}G(0).</math>
 
The expected number of customers at service facility ''i'' is given by

<math>\mathbb E(n_i) = \sum_{k=1}^N X_i^k \frac{G(N-k)}{G(N)}.</math>
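The two formulas above can be sketched together in Python. As with the earlier sketches, the names are illustrative and `G` is assumed to be a precomputed list with `G[n]` = G(''n''); the expected value is computed by summing the tail probabilities P(''n''<sub>''i''</sub> ≥ ''k'') for ''k'' = 1, …, ''N'':

```python
def marginal_dist(Xi, G, N):
    """Return [P(n_i = 0), ..., P(n_i = N)] for a facility with term X_i."""
    p = [Xi ** k * (G[N - k] - Xi * G[N - k - 1]) / G[N] for k in range(N)]
    p.append(Xi ** N * G[0] / G[N])   # boundary case k = N
    return p

def expected_customers(Xi, G, N):
    """E[n_i] = sum over k of X_i^k * G(N - k) / G(N)."""
    return sum(Xi ** k * G[N - k] / G[N] for k in range(1, N + 1))
```

A useful check on any implementation is that the marginal probabilities returned by `marginal_dist` sum to 1 for each facility, and that the expected customer counts across all ''M'' facilities sum to ''N''.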
 
These characterizations of quantities of interest in terms of the G(n) are also due to Buzen.<ref name="buzen-1973">{{Cite journal | last1 = Buzen | first1 = J. P. | author-link = Jeffrey P. Buzen| title = Computational algorithms for closed queueing networks with exponential servers | doi = 10.1145/362342.362345 | url = http://www-unix.ecs.umass.edu/~krishna/ece673/buzen.pdf| journal = Communications of the ACM | volume = 16 | issue = 9 | pages = 527–531 | year = 1973 | s2cid = 10702 }}</ref>
 
==Implementation==