Two main implementations in [[R (programming language)|R]] using [[Message Passing Interface|MPI]] are Rmpi<ref name=rmpi>{{cite journal|author=Yu, H.|title=Rmpi: Parallel Statistical Computing in R|year=2002|url=http://cran.r-project.org/package=Rmpi|journal=R News}}</ref> and pbdMPI of pbdR.
* pbdR, built on pbdMPI, uses [[SPMD|SPMD parallelism]], in which every processor is treated as a worker and owns a part of the data. [[SPMD|SPMD parallelism]], introduced in the mid-1980s, is particularly efficient in homogeneous computing environments for large data, for example, when performing [[singular value decomposition]] on a large matrix.
* Rmpi<ref name=rmpi/> uses [[Master/slave (technology)|manager/workers parallelism]], in which one main processor (the manager) serves as the controller of all other processors (the workers). This form of parallelism, introduced around the early 2000s, is particularly efficient for large tasks on small [[Computer cluster|clusters]], for example, the [[Bootstrapping (statistics)|bootstrap method]] and [[Monte Carlo method|Monte Carlo simulation]] in applied statistics, since the [[Independent and identically distributed random variables|i.i.d.]] assumption is made in most [[Statistics|statistical analyses]]. In particular, task-pull parallelism gives Rmpi better performance in heterogeneous computing environments; a minimal sketch follows this list.
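A minimal manager/workers example with Rmpi, adapted from the Rmpi documentation (the worker count here is arbitrary, and the evaluated expression is purely illustrative):
<source lang="rsplus">
library(Rmpi)

# The manager spawns two R worker processes.
mpi.spawn.Rslaves(nslaves = 2)

# Each worker evaluates the same expression; the manager collects
# and prints the results.
mpi.remote.exec(paste("I am worker", mpi.comm.rank(), "of", mpi.comm.size()))

# Shut down the workers and exit.
mpi.close.Rslaves()
mpi.quit()
</source>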
The idea of [[SPMD|SPMD parallelism]] is to have every processor perform the same work, but on a different part of a large data set. For example, a modern [[Graphics processing unit|GPU]] is a large collection of slower co-processors that simply apply the same computation on different parts of relatively small data, yet SPMD parallelism is an efficient way to obtain final solutions, i.e., the time to solution is shorter.<ref>{{cite web | url = http://graphics.stanford.edu/~mhouston/ | title = Folding@Home - GPGPU | author = Mike Houston | accessdate = 2007-10-04 }}</ref> It is clear that pbdR is not only suitable for small [[Computer cluster|clusters]], but is also stabler for analyzing [[big data]] and more scalable for [[Supercomputer|supercomputers]].<ref>{{cite journal|author=Schmidt, D., Ostrouchov, G., Chen, W.-C., and Patel, P.|title=Tight Coupling of R and Distributed Linear Algebra for High-Level Programming with Big Data|year=2012|pages=811–815|journal=2012 SC Companion: High Performance Computing, Networking, Storage and Analysis (SCC)|url=http://dl.acm.org/citation.cfm?id=2477156}}</ref>
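As an illustration, a minimal SPMD computation with pbdMPI might look as follows: every rank runs the identical script on its own chunk of data, and an <code>allreduce</code> call combines the partial results (the data split here is purely illustrative):
<source lang="rsplus">
library(pbdMPI, quiet = TRUE)
init()

# SPMD: every rank runs this same script, but owns a different chunk of data.
my.rank <- comm.rank()
chunk <- (my.rank * 10 + 1):(my.rank * 10 + 10)

# Combine the local partial sums into one global sum, available on every rank.
total <- allreduce(sum(chunk), op = "sum")
comm.print(total)

finalize()
</source>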
{| class="wikitable"
|-
| pbdDEMO || pbdNCDF4 || pbdDMAT || pmclust
|-
| pbdMPI || || pbdBASE ||
|-
| || || pbdSLAP ||
|}
[[File:Pbd overview.png|thumb|The relationships among the various pbdR packages.]]
== Examples ==
=== Example 1 ===
Hello World! Save the following code in a file called "demo.r":
<source lang="rsplus">
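### A minimal pbdMPI hello-world sketch.
library(pbdMPI, quiet = TRUE)
init()

# Print the greeting (by default, only from rank 0).
comm.cat("Hello World!\n")

### Shut down MPI.
finalize()
</source>
One can then run the script from a shell with, for example, <code>mpiexec -np 2 Rscript demo.r</code>.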
=== Example 3 ===
The following example, modified from pbdDEMO, illustrates the basic ddmatrix computation in pbdR, which performs [[singular value decomposition]] on a given matrix.
Save the following code in a file called "demo.r":
<source lang="rsplus">
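# A sketch of distributed SVD with pbdDMAT; the matrix size, blocking
# factor, and seed below are illustrative choices.
library(pbdDMAT, quiet = TRUE)
init.grid()

# Generate a random distributed matrix (ddmatrix) across the process grid.
comm.set.seed(123, diff = TRUE)
dx <- ddmatrix("rnorm", nrow = 16, ncol = 16, bldim = 2)

# Compute the SVD; the singular values are returned as an ordinary
# vector, identical on every rank.
sv <- La.svd(dx)
comm.print(sv$d)

# Shut down the process grid and MPI.
finalize()
</source>
One can then run the script with, for example, <code>mpiexec -np 2 Rscript demo.r</code>.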