Two main implementations of [[Message Passing Interface|MPI]] in [[R (programming language)|R]] are Rmpi<ref name=rmpi>{{cite journal|author=Yu, H.|title=Rmpi: Parallel Statistical Computing in R|year=2002|url=http://cran.r-project.org/package=Rmpi|journal=R News}}</ref> and pbdMPI of pbdR.
* pbdR, built on pbdMPI, uses [[SPMD|SPMD parallelism]], in which every processor is treated as a worker that owns a part of the data and runs the same program on it.
* Rmpi<ref name=rmpi/> uses [[Master/slave (technology)|manager/workers parallelism]], in which one main processor (the manager) controls all the other processors (the workers). This approach, introduced around the early 2000s, is particularly efficient for large tasks on small [[Computer cluster|clusters]], for example the [[Bootstrapping (statistics)|bootstrap method]] and [[Monte Carlo method|Monte Carlo simulation]] in applied statistics, since the [[Independent and identically distributed random variables|i.i.d.]] assumption underlies most [[Statistics|statistical analyses]]. In particular, task-pull parallelism gives Rmpi better performance in heterogeneous computing environments. Minimal sketches of both styles follow this list.
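The contrast between the two styles can be illustrated with minimal sketches (illustrative only, not taken from the cited sources; they presume working pbdMPI and Rmpi installations). Under SPMD, every rank runs the same script, typically launched with something like <code>mpiexec -np 4 Rscript hello.r</code>:

<syntaxhighlight lang="r">
## SPMD sketch with pbdMPI: every rank executes this same script.
library(pbdMPI)
init()
## Each rank reports its own identity; no manager coordinates the work.
comm.cat("Hello from rank", comm.rank(), "of", comm.size(), "\n",
         all.rank = TRUE, quiet = TRUE)
finalize()
</syntaxhighlight>

Under manager/workers, by contrast, a single manager process spawns the workers and pushes work to them (again a hedged sketch using standard Rmpi calls):

<syntaxhighlight lang="r">
## Manager/workers sketch with Rmpi: the manager spawns and directs workers.
library(Rmpi)
mpi.spawn.Rslaves(nslaves = 2)   # the manager starts two workers
## The manager sends the same expression to every worker and collects results.
mpi.remote.exec(paste("I am worker", mpi.comm.rank(), "of", mpi.comm.size()))
mpi.close.Rslaves()
mpi.quit()
</syntaxhighlight>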
The idea of [[SPMD|SPMD parallelism]] is to let every processor do the same amount of work, but on different parts of a large data set. For example, a modern [[Graphics processing unit|GPU]] is a large collection of slower co-processors that each apply the same computation to different parts of relatively small data, yet SPMD parallelism still yields an efficient way to obtain the final solution (i.e., the time to solution is shorter).<ref>{{cite web | url = http://graphics.stanford.edu/~mhouston/ | title = Folding@Home - GPGPU | author = Mike Houston | accessdate = 2007-10-04 }}</ref> This makes pbdR not only suitable for small [[Computer cluster|clusters]], but also more stable for analyzing [[Big data]] and more scalable to [[supercomputer]]s.<ref>{{cite journal|author=Schmidt, D., Ostrouchov, G., Chen, W.-C., and Patel, P.|title=Tight Coupling of R and Distributed Linear Algebra for High-Level Programming with Big Data|year=2012|pages=811–815|journal=High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion:|url=http://dl.acm.org/citation.cfm?id=2477156}}</ref>{{third-party-inline|date=October 2014}} In short, pbdR suits both small clusters and large-scale platforms.
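As a hedged sketch of this decomposition (an illustrative example, not taken from the cited sources; the block size and data are hypothetical), each rank below owns one block of a larger data set, computes a local partial result, and combines it with the others through a reduction:

<syntaxhighlight lang="r">
## SPMD sketch: compute the mean of a data set distributed across ranks.
library(pbdMPI)
init()
## Hypothetical decomposition: each rank holds its own block of 1000 values.
n.local <- 1000
set.seed(comm.rank())            # different data on each rank
x.local <- rnorm(n.local)
## Every rank does the same work on its own part; the partial sums and
## counts are then combined across all ranks with an allreduce.
total <- allreduce(sum(x.local), op = "sum")
count <- allreduce(n.local, op = "sum")
comm.print(total / count)        # the global mean, printed by rank 0
finalize()
</syntaxhighlight>

Launching the same script on more ranks shrinks each rank's share of the data without changing the code, which is the essence of the SPMD model.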