Reduce (parallel pattern)

:::: '''else if''' <math>i + 2^k < p</math>
::::: <math>x_i \gets x_i \oplus^{\star} x_{i+2^k}</math>
The binary operator for vectors is defined such that <math>\begin{pmatrix} e_i^0 \\ \vdots \\ e_i^{m-1}\end{pmatrix} \oplus^\star \begin{pmatrix} e_j^0 \\ \vdots \\ e_j^{m-1}\end{pmatrix} = \begin{pmatrix} e_i^0 \oplus e_j^0 \\ \vdots \\ e_i^{m-1} \oplus e_j^{m-1} \end{pmatrix}</math>. The algorithm further assumes that in the beginning <math>x_i = v_i</math> for all <math>i</math>, that <math>p</math> is a power of two, and that the processing units <math>p_0,p_1,\dots,p_{p-1}</math> are used. In every iteration, half of the processing units become inactive and do not contribute to further computations. The figure shows a visualization of the algorithm using addition as the operator. Vertical lines represent the processing units on which the computation of the elements shown on that line takes place. The eight input elements are located at the bottom and every animation step corresponds to one parallel step in the execution of the algorithm. An active processor <math>p_i</math> evaluates the given operator on the element <math>x_i</math> it is currently holding and on <math>x_j</math>, where <math>j</math> is the minimal index fulfilling <math>j > i</math> such that <math>p_j</math> becomes inactive in the current step. <math>x_i</math> and <math>x_j</math> are not necessarily elements of the input set <math>X</math>, as the fields are overwritten and reused to hold previously evaluated expressions.

To coordinate the roles of the processing units in each step without causing additional communication between them, the fact that the processing units are indexed with numbers from <math>0</math> to <math>p-1</math> is used. In iteration <math>k</math>, each processing unit looks at the <math>k</math>-th least significant bit of its index and decides whether to become inactive or to combine its own element with the element whose index is obtained by setting that bit. The underlying communication pattern of the algorithm is a binomial tree, hence the name of the algorithm.
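
The parallel steps above can also be simulated sequentially. The following Python sketch is only an illustration of the scheme just described (the function name <code>reduce_binomial</code> and the sequential simulation of the parallel loop are not part of the original algorithm): the outer loop corresponds to the <math>\lceil\log_2 p\rceil</math> parallel steps, and the test <code>i % 2**(k + 1) == 0</code> is equivalent to the unit still being active with its <math>k</math>-th bit not set.

<syntaxhighlight lang="python">
import math
from operator import add

def reduce_binomial(x, op=add):
    """Simulate the binomial-tree reduction of a list of vectors.

    x  -- list of p vectors (sequences of equal length m); p should be a power of two
    op -- associative binary operator, applied element-wise (addition by default)
    The result of the reduction ends up in x[0], held by the root p_0.
    """
    x = [list(v) for v in x]              # work on copies, like the fields x_i
    p = len(x)
    for k in range(math.ceil(math.log2(p))):
        # In the parallel algorithm all i are processed concurrently;
        # here the parallel loop is simulated sequentially.
        for i in range(p):
            # i % 2**(k+1) == 0  <=>  bits 0..k of i are zero, i.e. p_i is still
            # active and does not become inactive in this step.
            if i % 2 ** (k + 1) == 0 and i + 2 ** k < p:
                # p_i combines its vector with the one held by p_{i+2^k},
                # whose k-th bit is set and which becomes inactive.
                x[i] = [op(a, b) for a, b in zip(x[i], x[i + 2 ** k])]
    return x[0]

# Example: adding eight vectors of length 3.
vectors = [[i, 2 * i, 3 * i] for i in range(8)]
print(reduce_binomial(vectors))           # [28, 56, 84]
</syntaxhighlight>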
 
Only <math>p_0</math> holds the result in the end, therefore it is the root processor. For an Allreduce operation the result has to be distributed to all processing units, which can be done by appending a broadcast from <math>p_0</math>. Furthermore, the number <math>p</math> of processors is restricted to be a power of two. This restriction can be lifted by padding the number of processors to the next power of two. There are also algorithms that are better tailored to this use case.<ref>{{Cite journal|last=Rabenseifner|first=Rolf|last2=Träff|first2=Jesper Larsson|date=2004-09-19|title=More Efficient Reduction Algorithms for Non-Power-of-Two Number of Processors in Message-Passing Parallel Systems|url=https://link.springer.com/chapter/10.1007/978-3-540-30218-6_13|journal=Recent Advances in Parallel Virtual Machine and Message Passing Interface|series=Lecture Notes in Computer Science|language=en|publisher=Springer, Berlin, Heidelberg|pages=36–46|doi=10.1007/978-3-540-30218-6_13|isbn=9783540231639}}</ref>
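
A minimal sketch of the padding idea, assuming that the operator has a neutral element (for example zero for addition); the helper <code>pad_to_power_of_two</code> and the reuse of the <code>reduce_binomial</code> sketch above are illustrative and not taken from the cited literature:

<syntaxhighlight lang="python">
def pad_to_power_of_two(x, identity):
    """Append copies of an identity vector to x until len(x) is a power of two.

    Padding with the operator's neutral element does not change the result of
    the reduction, so the power-of-two algorithm can be applied unchanged.
    """
    p = len(x)
    target = 1 if p == 0 else 2 ** (p - 1).bit_length()   # next power of two >= p
    return x + [list(identity) for _ in range(target - p)]

# Example: six input vectors are padded to eight and reduced as before.
vectors = [[i, i] for i in range(6)]
padded = pad_to_power_of_two(vectors, identity=[0, 0])
print(reduce_binomial(padded))            # [15, 15]
</syntaxhighlight>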
 
==== Runtime analysis ====
The main loop is executed <math>\lceil\log_2 p\rceil</math> times; the time needed for the part done in parallel is in <math>\mathcal{O}(m)</math>, as a processing unit either combines two vectors or becomes inactive. Thus the parallel time <math>T(p,m)</math> for the PRAM is <math>T(p,m) = \mathcal{O}(\log(p) \cdot m)</math>, assuming that each processing unit initially holds exactly one of the <math>n = p</math> input vectors. The strategy for handling read and write conflicts can be chosen to be as restrictive as exclusive read and exclusive write (EREW). The speedup <math>S(p,m)</math> of the algorithm is <math>S(p,m) \in \mathcal{O}\left(\frac{T_{seq}}{T(p,m)}\right) = \mathcal{O}\left(\frac{p}{\log(p)}\right)</math> and therefore the efficiency is <math>E(p,m) \in \mathcal{O}\left(\frac{S(p,m)}{p}\right) = \mathcal{O}\left(\frac{1}{\log(p)}\right)</math>. The efficiency suffers because half of the active processing units become inactive after each step, so only <math>\frac{p}{2^i}</math> units are active in step <math>i</math>.
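
As an illustrative plausibility check of these bounds (the concrete numbers are only an example), consider <math>p = 256</math> units, each initially holding one vector of length <math>m</math>: the sequential algorithm needs <math>p - 1 = 255</math> applications of <math>\oplus^\star</math>, i.e. time in <math>\mathcal{O}(p \cdot m)</math>, whereas the parallel algorithm needs <math>\lceil\log_2 256\rceil = 8</math> parallel steps of cost <math>\mathcal{O}(m)</math> each, giving a speedup of roughly <math>255/8 \approx 32</math> and an efficiency of roughly <math>32/256 = 1/8 = 1/\log_2 p</math>.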
 
=== Distributed memory algorithm ===
== Pipelined tree ==
[[File:Pipelined binomial.gif|thumb|483x483px|Pipelined Fibonacci-tree algorithm using addition.]]
The binomial tree and the pipeline both have their advantages and disadvantages, depending on the values of <math>T_{start}</math> and <math>T_{byte}</math> for the parallel communication. They can be combined into one algorithm<ref>{{Cite journal|last=Sanders|first=Peter|last2=Sibeyn|first2=Jop F|title=A bandwidth latency tradeoff for broadcast and reduction|url=https://doi.org/10.1016/S0020-0190(02)00473-8|journal=Information Processing Letters|volume=86|issue=1|pages=33–38|doi=10.1016/s0020-0190(02)00473-8}}</ref> which uses a tree as its underlying communication pattern and, at the same time, splits the vectors into pieces whose reduction is pipelined along that tree. Instead of the binomial tree, a Fibonacci tree is used. The animation shows the execution of such an algorithm in a full-duplex communication model. The first frame shows the Fibonacci tree which describes the communication links; afterwards, blue arrows indicate the transmission of elements. A processing node is depicted by three neighboring boxes with elements inside and receives elements from its two children in turn (assuming there are children with valid values). The runtime is <math>T(N,p,m) \approx (\frac{N}{m}T_{byte} + T_{start})(d + 2m - 2)</math>, where <math>d = \log_{\phi}(p)</math> is the height of the tree and <math>\phi = \frac{1 + \sqrt{5}}{2}</math> is the golden ratio.
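
The stated runtime model can be used to choose into how many pieces the data should be split. The following Python sketch only evaluates the formula above; the helper names and the parameter values in the example are assumptions made for illustration, not values from the cited paper.

<syntaxhighlight lang="python">
import math

PHI = (1 + math.sqrt(5)) / 2                  # golden ratio

def predicted_time(N, p, m, t_start, t_byte):
    """Evaluate T(N, p, m) ~ (N/m * T_byte + T_start) * (d + 2m - 2)
    with d = log_phi(p), the height of the Fibonacci tree."""
    d = math.log(p, PHI)
    return (N / m * t_byte + t_start) * (d + 2 * m - 2)

def best_number_of_pieces(N, p, t_start, t_byte, max_pieces=10000):
    """Return the number of pieces m that minimizes the predicted runtime."""
    return min(range(1, max_pieces + 1),
               key=lambda m: predicted_time(N, p, m, t_start, t_byte))

# Example with made-up parameters: 1 MiB reduced over 1024 nodes.
N, p, t_start, t_byte = 2 ** 20, 1024, 1e-6, 1e-9
m_opt = best_number_of_pieces(N, p, t_start, t_byte)
print(m_opt, predicted_time(N, p, m_opt, t_start, t_byte))
</syntaxhighlight>

Under this model, the optimal number of pieces grows with the ratio <math>N \cdot T_{byte} / T_{start}</math>: in latency-dominated settings a single piece (a plain tree reduction) is predicted to be best, while for large messages many small pieces pay off, which is the bandwidth-latency tradeoff the combined algorithm exploits.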
 
== Applications ==