In computer science, a '''parallel external memory (PEM) model''' is a [[Cache-aware model|cache-aware]], external-memory [[abstract machine]].<ref name=":0">{{Cite book|last1=Arge|first1=Lars|last2=Goodrich|first2=Michael T.|last3=Nelson|first3=Michael|last4=Sitchinava|first4=Nodari|title=Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures |chapter=Fundamental parallel algorithms for private-cache chip multiprocessors |date=2008|pages=197–206|___location=New York, New York, USA|publisher=ACM Press|doi=10.1145/1378533.1378573|isbn=9781595939739|s2cid=11067041 }}</ref>
__TOC__
== Model ==
=== Definition ===
The PEM model<ref name=":0">{{Cite book|last1=Arge|first1=Lars|last2=Goodrich|first2=Michael T.|last3=Nelson|first3=Michael|last4=Sitchinava|first4=Nodari|title=Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures |chapter=Fundamental parallel algorithms for private-cache chip multiprocessors |date=2008|pages=197–206|___location=New York, New York, USA|publisher=ACM Press|doi=10.1145/1378533.1378573|isbn=9781595939739|s2cid=11067041 }}</ref> is a combination of the EM model and the PRAM model. It is a computation model consisting of <math>P</math> processors and a two-level [[memory hierarchy]]: a large [[External memory algorithm|external memory]] (main memory) of size <math>N</math> and <math>P</math> small [[Cache (computing)|internal memories (caches)]]. The processors share the main memory, but each cache is exclusive to a single processor; a processor cannot access another processor's cache. Each cache has size <math>M</math> and is partitioned into blocks of size <math>B</math>. The processors can only perform operations on data that reside in their cache. Data is transferred between the main memory and a cache in blocks of size <math>B</math>.
=== I/O complexity ===
The [[Programming complexity | complexity measure]] of the PEM model is the I/O complexity,<ref name=":0" /> which counts the number of parallel block transfers between the main memory and the caches. During a parallel block transfer, each processor can transfer one block. So if <math>P</math> processors each load a data block of size <math>B</math> from the main memory into their caches in parallel, this counts as an I/O complexity of <math>O(1)</math>, not <math>O(P)</math>. A program in the PEM model should minimize data transfer between main memory and caches and operate as much as possible on the data already in the caches.
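To illustrate this cost measure, the following Python sketch (the function name and setup are illustrative, not part of the model's literature) counts the parallel block transfers of a simple scan, in which each of the <math>P</math> processors streams its own <math>N/P</math>-sized segment through its cache:

```python
import math

def scan_io_complexity(n, p, b):
    """Parallel block transfers needed for p processors to scan n items.

    Each processor reads its own ceil(n/p)-sized segment in blocks of
    size b. Since all p processors transfer one block per round, each
    round costs 1 parallel block transfer, not p.
    """
    segment = math.ceil(n / p)      # items handled by one processor
    return math.ceil(segment / b)   # rounds of parallel block transfers

# Example: 1,000,000 items, 64 processors, blocks of 512 items:
# each processor scans 15,625 items, i.e. 31 parallel block transfers.
```

A single processor scanning the same input would need <math>\lceil N/B \rceil</math> transfers, so the parallel count is smaller by roughly a factor of <math>P</math>.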
=== Read/write conflicts ===
In the PEM model, there is no [[Computer network | direct communication network]] between the P processors. The processors have to communicate indirectly over the main memory. If multiple processors try to access the same block in main memory concurrently, read/write conflicts<ref name=":0" /> occur. As in the PRAM model, three variations of this problem are considered:
* Concurrent Read Concurrent Write (CRCW): The same block in main memory can be read and written by multiple processors concurrently.
* Concurrent Read Exclusive Write (CREW): The same block in main memory can be read by multiple processors concurrently. Only one processor can write to a block at a time.
* Exclusive Read Exclusive Write (EREW): The same block in main memory cannot be read or written by multiple processors concurrently. Only one processor can access a block at a time.
The following two algorithms<ref name=":0" /> handle CREW and EREW conflicts when <math>P \leq B</math> processors write to the same block simultaneously.
A first approach is to serialize the write operations: the processors write to the block one after the other, resulting in a total of <math>P</math> parallel block transfers. A second approach needs <math>O(\log(P))</math> parallel block transfers and one additional block per processor. The main idea is to schedule the write operations in a [[Reduce (parallel pattern) | binary tree fashion]] and gradually combine the data into a single block. In the first round, the <math>P</math> processors combine their blocks into <math>P/2</math> blocks. Then <math>P/2</math> processors combine the <math>P/2</math> blocks into <math>P/4</math> blocks. This procedure continues until all the data is combined into one block.
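The binary-tree combining scheme can be sketched as a sequential Python simulation (the function name is illustrative): each round halves the number of partial blocks, so <math>P</math> blocks are combined in <math>\lceil \log_2 P \rceil</math> rounds.

```python
def combine_blocks(blocks):
    """Combine per-processor blocks pairwise, as in the binary-tree
    write scheme. Each round halves the number of partial blocks and
    costs O(1) parallel block transfers in the PEM model."""
    rounds = 0
    while len(blocks) > 1:
        # In each round, every remaining pair of partial blocks is
        # merged by one of the two owning processors.
        blocks = [blocks[i] + (blocks[i + 1] if i + 1 < len(blocks) else [])
                  for i in range(0, len(blocks), 2)]
        rounds += 1
    return blocks[0], rounds

# With P = 8 partial blocks, the data is combined in 3 rounds.
```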
=== Comparison to other models ===
{| class="wikitable"
|+
!Model
!Multi-core
!Cache-aware
|-
|[[Random-access machine]] (RAM)
|No
|No
|-
|[[Parallel random-access machine]] (PRAM)
|Yes
|No
|-
|[[External memory algorithm|External memory]] (EM)
|No
|Yes
|-
|'''Parallel external memory (PEM)'''
|Yes
|Yes
|}
== Examples ==
=== Multiway partitioning ===
Let <math>M=\{m_1,...,m_{d-1}\}</math> be a vector of <math>d-1</math> pivots sorted in increasing order. Let {{mvar|A}} be an unordered set of N elements. A d-way partition<ref name=":0" /> of {{mvar|A}} is a set <math>\Pi=\{A_1,...,A_d\}</math> , where <math>\cup_{i=1}^d A_i = A</math> and <math>A_i\cap A_j=\emptyset</math> for <math>1\leq i<j\leq d</math>. <math>A_i</math> is called the i-th bucket. Every element in <math>A_i</math> is greater than <math>m_{i-1}</math> and at most <math>m_{i}</math>. In the following algorithm<ref name=":0" /> the input is partitioned into N/P-sized contiguous segments <math>S_1,...,S_P</math> in main memory. Processor i primarily works on the segment <math>S_i</math>.
The multiway partitioning algorithm (<code>PEM_DIST_SORT</code><ref name=":0" />) uses a PEM [[prefix sum]] algorithm<ref name=":0" /> to calculate the prefix sum with the optimal <math>O\left(\frac{N}{PB} + \log P\right)</math> I/O complexity. This algorithm simulates an optimal PRAM prefix sum algorithm.
 // Compute a d-way partition of the data segments <math>S_i</math> in parallel
 '''for each''' processor i '''in parallel do'''
     Read the vector of pivots {{mvar|M}} into the cache.
     Partition <math>S_i</math> into d buckets and let vector <math>M_i=\{j_1^i,...,j_d^i\}</math> be the number of items in each bucket.
 '''end for'''
 Run PEM prefix sum on the set of vectors <math>\{M_1,...,M_P\}</math> simultaneously.
 // Use the prefix sum vector to compute the final partition
 '''for each''' processor i '''in parallel do'''
     Write the elements of <math>S_i</math> into memory locations offset appropriately by <math>M_{i-1}</math> and <math>M_{i}</math>.
 '''end for'''
 Using the prefix sums stored in <math>M_P</math>, the last processor P calculates the vector {{mvar|B}} of bucket sizes and returns it.
If the vector of <math>d=O\left(\frac{M}{B}\right)</math> pivots M and the input set A are located in contiguous memory, then the d-way partitioning problem can be solved in the PEM model with <math>O\left(\frac{N}{PB} + \left\lceil \frac{d}{B} \right\rceil\log(P)+d\log(B)\right)</math> I/O complexity. The contents of the final buckets must be located in contiguous memory.
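The steps above can be simulated sequentially in Python (an illustrative sketch; the function name and layout are assumptions, not the paper's code). The per-segment count vectors play the role of the <math>M_i</math>, and running sums over them supply the write offsets computed by the PEM prefix sum:

```python
from bisect import bisect_left
from itertools import accumulate

def multiway_partition(a, pivots, p):
    """Sequentially simulate the d-way partitioning step: each of the
    p segments counts its items per bucket, running sums over the count
    vectors yield write offsets, and the elements are scattered."""
    d = len(pivots) + 1
    n = len(a)
    seg = -(-n // p)                       # ceil(n / p): segment length
    # Per-segment bucket counts (the vector M_i of processor i).
    counts = []
    for i in range(p):
        c = [0] * d
        for x in a[i * seg:(i + 1) * seg]:
            c[bisect_left(pivots, x)] += 1
        counts.append(c)
    # Bucket sizes and start offsets (the PEM prefix sum step).
    sizes = [sum(c[j] for c in counts) for j in range(d)]
    starts = [0] + list(accumulate(sizes))[:-1]
    # Scatter: segment i writes behind the segments before it, so the
    # final buckets occupy contiguous memory.
    out = [None] * n
    offsets = starts[:]
    for i in range(p):
        for x in a[i * seg:(i + 1) * seg]:
            j = bisect_left(pivots, x)
            out[offsets[j]] = x
            offsets[j] += 1
    return out, sizes
```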
=== Selection ===
The [[selection problem]] is the task of finding the {{mvar|k}}-th smallest element in an unordered list {{mvar|A}} of size {{mvar|N}}.
The following code<ref name=":0" /> makes use of <code>PRAMSORT</code>, a PRAM-optimal sorting algorithm that runs in <math>O(\log N)</math> time, and <code>SELECT</code>, a cache-optimal single-processor selection algorithm.
 '''if''' <math>N \leq P</math> '''then'''
     <math>\texttt{PRAMSORT}(A, P)</math>
     '''return''' <math>A[k]</math>
 '''end if'''
 // Find the median of each segment <math>S_i</math>
 '''for each''' processor {{mvar|i}} '''in parallel do'''
     <math>m_i = \texttt{SELECT}(S_i, \frac{N}{2P}) </math>
 '''end for'''
 // Sort the medians and partition {{mvar|A}} around the median of medians <math>m_{P/2}</math>
 <math>\texttt{PRAMSORT}(\{m_1, \dots, m_P\}, P)</math>
 <math>t = \texttt{PEMPARTITION}(A, m_{P/2},P)</math>
 '''if''' <math>k \leq t</math> '''then'''
     '''return''' <math>\texttt{PEMSELECT}(A[1:t], P, k)</math>
 '''else'''
     '''return''' <math>\texttt{PEMSELECT}(A[t+1:N], P, k-t)</math>
 '''end if'''
Under the assumption that the input is stored in contiguous memory, <code>PEMSELECT</code> has an I/O complexity of:
: <math>O\left(\frac{N}{PB} + \log (PB) \cdot \log(\frac{N}{P})\right)</math>
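A sequential Python sketch of this recursion (illustrative only; it replaces the parallel subroutines with plain loops and uses a three-way partition so that duplicate elements cannot stall the recursion):

```python
import statistics

def pem_select(a, p, k):
    """Find the k-th smallest (1-based) element of a, mimicking the
    PEMSELECT recursion: split a into p segments, take each segment's
    median, and partition around the median of those medians."""
    n = len(a)
    if n <= p:
        return sorted(a)[k - 1]        # base case: sort directly
    seg = -(-n // p)                   # ceil(n / p): segment length
    medians = [statistics.median_low(a[i:i + seg]) for i in range(0, n, seg)]
    pivot = statistics.median_low(medians)
    # Three-way partition around the pivot.
    less = [x for x in a if x < pivot]
    equal_count = sum(1 for x in a if x == pivot)
    if k <= len(less):
        return pem_select(less, p, k)
    if k <= len(less) + equal_count:
        return pivot
    return pem_select([x for x in a if x > pivot], p,
                      k - len(less) - equal_count)
```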
=== Distribution sort ===
[[Distribution sort]] partitions an input list {{mvar|A}} of size {{mvar|N}} into {{mvar|d}} disjoint buckets of similar size. Every bucket is then sorted recursively and the results are combined into a fully sorted list.
If <math>P = 1</math>, the task is delegated to a cache-optimal single-processor sorting algorithm.
Otherwise, the following algorithm<ref name=":0" /> is used:
 // Sample <math>\tfrac{4N}{\sqrt{d}}</math> elements from {{mvar|A}}
 '''for each''' processor {{mvar|i}} '''in parallel do'''
     '''if''' <math>M < |S_i|</math> '''then'''
         <math>d = M/B</math>
         Load <math>S_i</math> in {{mvar|M}}-sized pages and sort pages individually
     '''else'''
         <math>d = |S_i|</math>
         Load and sort <math>S_i</math> as a single page
     '''end if'''
     Pick every <math>\sqrt{d}/4</math>-th element from each sorted memory page into contiguous vector <math>R^i</math> of samples
 '''end for'''
 '''in parallel do'''
     Combine vectors <math>R^1 \dots R^P</math> into a single contiguous vector <math>\mathcal{R}</math>
     Make <math>\sqrt{d}</math> copies of <math>\mathcal{R}</math>: <math>\mathcal{R}_1 \dots \mathcal{R}_{\sqrt{d}}</math>
 '''end do'''
 // Find <math>\sqrt{d}</math> pivots <math>\mathcal{M}[j]</math>
 '''for''' <math>j = 1</math> '''to''' <math>\sqrt{d}</math> '''in parallel do'''
     <math>\mathcal{M}[j] = \texttt{PEMSELECT}(\mathcal{R}_j, \tfrac{P}{\sqrt{d}}, \tfrac{j \cdot 4N}{d})</math>
 '''end for'''
 Pack the pivots into contiguous array <math>\mathcal{M}</math>
 // Partition {{mvar|A}} around the pivots into buckets <math>\mathcal{B}</math>
 <math>\mathcal{B} = \texttt{PEMMULTIPARTITION}(A[1:N],\mathcal{M},\sqrt{d},P)</math>
 // Recursively sort the buckets
 '''for''' <math>j = 1</math> '''to''' <math>\sqrt{d} + 1</math> '''in parallel do'''
     Recursively call <math>\texttt{PEMDISTSORT}</math> on bucket {{mvar|j}} of size <math>\mathcal{B}[j]</math>,
     using <math>O \left( \left \lceil \tfrac{\mathcal{B}[j]}{N / P} \right \rceil \right)</math> processors responsible for elements in bucket {{mvar|j}}
 '''end for'''
The I/O complexity of <code>PEMDISTSORT</code> is:
:<math>O \left( \left \lceil \frac{N}{PB} \right \rceil \left ( \log_d P + \log_{M/B} \frac{N}{PB} \right ) + f(N,P,d) \cdot \log_d P \right)</math>
where
:<math>f(N,P,d) = O \left ( \log \frac{PB}{\sqrt{d}} \log \frac{N}{P} + \left \lceil \frac{\sqrt{d}}{B} \log P + \sqrt{d} \log B \right \rceil \right )</math>
If the number of processors is chosen such that <math>f(N,P,d) = O\left ( \left \lceil \tfrac{N}{PB} \right \rceil \right )</math> and <math>M < B^{O(1)}</math>, the I/O complexity is:
:<math>O \left ( \frac{N}{PB} \log_{M/B} \frac{N}{B} \right )</math>
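A much-simplified sequential Python sketch of this recursion (illustrative only; it keeps just the sample / pick-pivots / partition / recurse skeleton and substitutes plain sorting for the parallel subroutines):

```python
from bisect import bisect_left

def pem_dist_sort(a, d=16):
    """Distribution-sort skeleton: pick about sqrt(d) pivots from a
    sorted sample, partition into buckets, recurse on each bucket.
    The base case stands in for the cache-optimal one-processor sort."""
    if len(a) <= d:
        return sorted(a)                 # base case: small input
    k = int(d ** 0.5)                    # number of pivots, ~sqrt(d)
    sample = sorted(a[::2])              # sample half the input
    # Evenly spaced pivots split the sample into k + 1 ranges.
    pivots = [sample[(i + 1) * len(sample) // (k + 1)] for i in range(k)]
    buckets = [[] for _ in range(k + 1)]
    for x in a:
        buckets[bisect_left(pivots, x)].append(x)
    out = []
    for b in buckets:
        if len(b) == len(a):             # degenerate pivots (many
            return sorted(a)             # duplicates): fall back
        out.extend(pem_dist_sort(b, d))
    return out
```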
=== Other PEM algorithms ===
{| class="wikitable"
|+
!PEM Algorithm
!I/O complexity
!Constraints
|-
!Mergesort<ref name=":0" />
|<math>O\left(\frac{N}{PB} \log_{\frac{M}{B}} \frac{N}{B}\right) = \textrm{sort}_P(N) </math>
|<math>P \leq \frac{N}{B^2}, M = B^{O(1)}</math>
|-
![[List ranking]]<ref name=":1">{{Cite book|last1=Arge|first1=Lars|last2=Goodrich|first2=Michael T.|last3=Sitchinava|first3=Nodari|title=2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS) |chapter=Parallel external memory graph algorithms |date=2010|pages=1–11|publisher=IEEE|doi=10.1109/ipdps.2010.5470440|isbn=9781424464425|s2cid=587572 }}</ref>
|<math>O \left ( \textrm{sort}_P(N) \right )</math>
|<math>P \leq \frac{N/B^2}{\log B \cdot \log^{O(1)} N}, M = B^{O(1)}</math>
|-
![[Euler tour]]<ref name=":1" />
|<math>O \left ( \textrm{sort}_P(N) \right )</math>
|<math>P \leq \frac{N}{B^2}, M = B^{O(1)}</math>
|-
![[Expression tree]] evaluation<ref name=":1" />
|<math>O \left ( \textrm{sort}_P(N) \right )</math>
|<math>P \leq \frac{N}{B^2 \log B \cdot \log^{O(1)}N}, M = B^{O(1)}</math>
|-
!Finding a [[Minimum spanning tree|MST]]<ref name=":1" />
|<math>O \left(\textrm{sort}_P(|V|) + \textrm{sort}_P(|E|) \log \tfrac{|V|}{PB} \right)</math>
|<math>P \leq \frac{|V|+|E|}{B^2 \log B \cdot \log^{O(1)} N}, M = B^{O(1)}</math>
|}
Where <math>\textrm{sort}_P(N)</math> denotes the I/O complexity of sorting {{mvar|N}} items with {{mvar|P}} processors in the PEM model.
== See also ==
* [[Parallel random-access machine]] (PRAM)
* [[Random-access machine]] (RAM)
* [[External memory algorithm|External memory]] (EM)
==References==
{{reflist}}
{{Parallel Computing}}
[[Category:Algorithms]]
[[Category:Models of computation]]
[[Category:Analysis of parallel algorithms]]
[[Category:External memory algorithms]]
[[Category:Cache (computing)]]