Programming with Big Data in R (pbdR)[1][2][3] is a series of R packages and an environment for statistical computing with big data using high-performance statistical computation.[4] pbdR uses the same programming language as R,[5] with S3/S4 classes and methods, which is widely used among statisticians and data miners for developing statistical software. The significant difference between pbdR and R[5] is that pbdR mainly focuses on distributed-memory systems, where data are distributed across several processors and communication between processors is based on MPI, which is easily utilized on large high-performance computing (HPC) systems. The R system,[5] by contrast, mainly focuses on interactive data analysis on single multi-core machines. The two main implementations of MPI in R are Rmpi[6] and pbdMPI of pbdR.
- pbdR, built on pbdMPI, uses SPMD parallelism, where every processor is treated as a worker and owns a part of the data. This parallelism is particularly efficient in homogeneous computing environments for large data, for example, performing singular value decomposition on a large matrix or clustering analysis on high-dimensional large data. On the other hand, there is no restriction against using manager/workers parallelism within an SPMD environment.
- Rmpi[6] uses manager/workers parallelism, where one main processor (the manager) serves as the controller of all other processors (the workers). This parallelism is particularly efficient for large tasks on small clusters, for example, the bootstrap method and Monte Carlo simulation in applied statistics, since the i.i.d. assumption commonly made in statistical analysis makes tasks independent. In particular, task-pull parallelism gives Rmpi better performance in heterogeneous computing environments.
| pbdR | |
|---|---|
| Paradigm | SPMD |
| Designed by | Wei-Chen Chen, George Ostrouchov, Pragneshkumar Patel, and Drew Schmidt |
| Developer | pbdR Core Team |
| First appeared | September 2012 |
| Typing discipline | Dynamic |
| OS | Cross-platform |
| License | General Public License and Mozilla Public License |
| Website | r-pbd.org |
| Influenced by | R, C, Fortran, and MPI |
pbdR is clearly suitable for small clusters, but it is also stabler for analyzing big data and more scalable on supercomputers.[7]
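The manager/workers pattern that Rmpi targets can be sketched without MPI using base R's `parallel` package; this is only an illustrative analogue of the pattern under that substitution, not Rmpi's API. The manager session farms out i.i.d. bootstrap replicates to worker processes and collects the results:

```r
library(parallel)

set.seed(1)
x <- rnorm(100)

cl <- makeCluster(2)                 # two workers; this R session is the manager
clusterExport(cl, "x")               # ship the data to every worker
# Manager distributes 200 i.i.d. bootstrap replicates of the mean:
boot_means <- unlist(parLapply(cl, 1:200, function(i)
  mean(sample(x, replace = TRUE))))
stopCluster(cl)

length(boot_means)                   # 200 replicates collected by the manager
```

Because each replicate is independent, the order in which workers finish does not matter, which is why this pattern suits bootstrap and Monte Carlo workloads.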
Package design
Programming with pbdR requires the use of various packages developed by the pbdR core team. The packages developed are the following:
| General | I/O | Computation | Application |
|---|---|---|---|
| pbdDEMO | pbdNCDF4 | pbdDMAT | pmclust |
| pbdMPI | | pbdBASE | |
| | | pbdSLAP | |
- pbdMPI --- an efficient interface to MPI (either Open MPI[8] or MPICH2[9]) with a focus on the Single Program/Multiple Data (SPMD) parallel programming style[10][11][12]
- pbdSLAP --- bundles scalable dense linear algebra libraries in double precision for R, based on ScaLAPACK version 2.0.2[13] which includes several scalable linear algebra packages (namely BLACS, PBLAS, and ScaLAPACK).[14][15]
- pbdNCDF4 --- Interface to Parallel Unidata NetCDF4 format data files[16]
- pbdBASE --- low-level ScaLAPACK codes and wrappers
- pbdDMAT --- distributed matrix classes and computational methods, with a focus on linear algebra and statistics[17][18][19]
- pbdDEMO --- a set of package demonstrations and examples, with a unifying vignette[3]
- pmclust --- parallel model-based clustering using pbdR
Among these packages, the pbdDEMO package is a collection of 20+ package demos which offer example uses of the various pbdR packages, and it contains a vignette which offers detailed explanations of the demos and provides some mathematical and statistical insight.
Examples
Example 1
Hello World! Save the following code in a file called ``demo.r``
### Initial MPI
library(pbdMPI, quiet = TRUE)
init()
comm.cat("Hello World!\n")
### Finish
finalize()
and use the command
mpiexec -np 2 Rscript demo.r
to execute the code, where Rscript is the command-line front end of R.
Example 2
The following example, modified from pbdMPI, illustrates the basic syntax of pbdR. Since pbdR is designed in the SPMD style, all R scripts are stored in files and executed from the command line via mpiexec, mpirun, etc. Save the following code in a file called ``demo.r``
### Initial MPI
library(pbdMPI, quiet = TRUE)
init()
.comm.size <- comm.size()
.comm.rank <- comm.rank()
### Set a vector x on all processors with different values
N <- 5
x <- (1:N) + N * .comm.rank
### All reduce x using summation operation
y <- allreduce(as.integer(x), op = "sum")
comm.print(y)
y <- allreduce(as.double(x), op = "sum")
comm.print(y)
### Finish
finalize()
and use the command
mpiexec -np 4 Rscript demo.r
to execute the code, where Rscript is the command-line front end of R.
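What the two `allreduce` calls return can be checked in plain serial R (no MPI needed): with 4 ranks, rank `r` holds the vector `(1:N) + N * r`, and the sum reduction adds these elementwise, so every rank prints the same result:

```r
# Serial illustration of allreduce(op = "sum") across the 4 ranks above.
N <- 5
ranks <- 0:3                          # the comm.rank() values under -np 4
xs <- lapply(ranks, function(r) (1:N) + N * r)
y <- Reduce(`+`, xs)                  # elementwise sum over all ranks
y                                     # 34 38 42 46 50
```

The integer and double variants in the demo differ only in the MPI datatype used for the reduction; the numerical result is the same.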
Example 3
The following example, modified from pbdDEMO, illustrates the basic ddmatrix computation in pbdR, performing a singular value decomposition of a given matrix. Save the following code in a file called ``demo.r``
# Initialize process grid
library(pbdDMAT, quiet = TRUE)
if(comm.size() != 2)
comm.stop("Exactly 2 processors are required for this demo.")
init.grid()
# Setup for the remainder
comm.set.seed(diff=TRUE)
M <- N <- 16
BL <- 2 # blocking --- passing single value BL assumes BLxBL blocking
dA <- ddmatrix("rnorm", nrow=M, ncol=N, mean=100, sd=10)
# LA SVD
svd1 <- La.svd(dA)
comm.print(svd1$d)
# Finish
finalize()
and use the command
mpiexec -np 2 Rscript demo.r
to execute the code, where Rscript is the command-line front end of R.
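The `BL` blocking parameter in the demo refers to ScaLAPACK's two-dimensional block-cyclic layout, in which, along each dimension of the process grid, global index `i` (0-based) with block size `nb` over `p` processes is assigned by the rule floor(i / nb) mod p. A minimal sketch of that rule (`owner` is a hypothetical helper name for illustration, not part of pbdDMAT):

```r
# 1-d block-cyclic ownership: global 0-based index i, block size nb,
# p processes along that dimension of the grid.
owner <- function(i, nb, p) (i %/% nb) %% p

# 16 indices in blocks of 2, cycled over 2 processes:
owner(0:15, nb = 2, p = 2)
# 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
```

Applying this rule independently to rows and columns yields the 2-d block-cyclic distribution of the 16x16 `ddmatrix` in the demo.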
Further reading
- UMBC HPCF Technical Report by Raim, A.M. (2013).[20]
- CRAN Task View: High-Performance and Parallel Computing with R.[21]
- R at 12,000 Cores.[22] This article was read 22,584 times in 2012 after being posted on October 16, 2012, and ranked number 3 among the 100 most-read R posts of 2012.[23]
External links
- Official website of the pbdR project
- Technical website of the pbdR packages
- Source Code of developing version of the pbdR packages
- Discussion Group for any of pbdR related topics
Milestones
2013
- Version 1.0-2: Add pmclust.
- Version 1.0-1: Add pbdNCDF4.
- Version 1.0-0: Add pbdDEMO.
2012
References
1. Ostrouchov, G., Chen, W.-C., Schmidt, D., and Patel, P. (2012). "Programming with Big Data in R".
2. "XSEDE".
3. Schmidt, D., Chen, W.-C., Patel, P., and Ostrouchov, G. (2013). "Speaking Serial R with a Parallel Accent" (PDF).
4. Chen, W.-C. and Ostrouchov, G. (2011). "HPSC -- High Performance Statistical Computing for Data Intensive Research".
5. R Core Team (2012). R: A Language and Environment for Statistical Computing. ISBN 3-900051-07-0.
6. Yu, H. (2002). "Rmpi: Parallel Statistical Computing in R". R News.
7. Schmidt, D., Ostrouchov, G., Chen, W.-C., and Patel, P. (2012). "Tight Coupling of R and Distributed Linear Algebra for High-Level Programming with Big Data". High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion: 811–815.
8. Jeff Squyres. "Open MPI: 10^15 Flops Can't Be Wrong" (PDF). Open MPI Project. Retrieved 2011-09-27.
9. MPICH License.
10. Darema, F. (2001). "The SPMD Model: Past, Present and Future".
11. Ortega, J.M., Voight, G.G., and Romine, C.H. (1989). "Bibliography on Parallel and Vector Numerical Algorithms".
12. Ostrouchov, G. (1987). "Parallel Computing on a Hypercube: An Overview of the Architecture and Some Applications". Proc. 19th Symp. on the Interface of Computer Science and Statistics: 27–32.
13. Blackford, L.S., et al. (1997). ScaLAPACK Users' Guide.
14. Petitet, Antoine (1995). "PBLAS". Netlib. Retrieved 13 July 2012.
15. Jaeyoung Choi, Dongarra, J.J., and Walker, D.W. (1994). "PB-BLAS: a set of Parallel Block Basic Linear Algebra Subprograms" (PDF). Scalable High-Performance Computing Conference: 534–541. doi:10.1109/SHPCC.1994.296688. ISBN 0-8186-5680-8.
16. NetCDF Group (2008). "Network Common Data Form".
17. Dongarra, J. and Walker, D. "The Design of Linear Algebra Libraries for High Performance Computers".
18. Demmel, J., Heath, M., and van der Vorst, H. "Parallel Numerical Linear Algebra".
19. "2d block-cyclic data layout".
20. Raim, A.M. (2013). "Introduction to distributed computing with pbdR at the UMBC High Performance Computing Facility" (PDF). Technical Report HPCF-2013-2.
21. Dirk Eddelbuettel. "High-Performance and Parallel Computing with R".
22. "R at 12,000 Cores".
23. "100 most read R posts in 2012 (stats from R-bloggers) – big data, visualization, data manipulation, and other languages".