{{Infobox programming language
| paradigm = [[SPMD]]
| year = Sep. 2012
| designer = [http://thirteen-01.stat.iastate.edu/snoweye/mypage/ Wei-Chen Chen], [http://www.csm.ornl.gov/~ost George Ostrouchov], Pragneshkumar Patel, and [http://wrathematics.github.io/ Drew Schmidt]
| developer = pbdR Core Team
| latest_test_version = Through [[GitHub]] at [http://github.com/RBigData/ RBigData]
| operating_system = [[Cross-platform]]
| license = [[General Public License]] and [[Mozilla Public License]]
| website = [http://www.r-pbd.org/ http://www.r-pbd.org/]
}}
'''Programming with Big Data in R''' (pbdR)<ref>{{cite web|author=Ostrouchov, G., Chen, W.-C., Schmidt, D., Patel, P.|title=Programming with Big Data in R|year=2012|url=http://r-pbd.org}}</ref> is a series of [[R (programming language)|R]] packages and an environment for [[statistical computing]] with [[Big Data]] using high-performance statistical computation.<ref>{{cite web|author=Chen, W.-C. and Ostrouchov, G.|url=http://thirteen-01.stat.iastate.edu/snoweye/hpsc/|year=2011|title=HPSC -- High Performance Statistical Computing for Data Intensive Research}}</ref> pbdR uses the same programming language as R, with [[S (programming language)|S3/S4]] classes and methods, which is used among [[statistician]]s and [[Data mining|data miners]] for developing [[statistical software]]. The significant difference between pbdR and R code is that pbdR mainly focuses on [[distributed memory]] systems, where data are distributed across several processors and analyzed in [[Batch processing|batch mode]], and communication between processors is based on [[Message Passing Interface|MPI]], which is easily utilized in large [[High-performance computing|high-performance computing (HPC)]] systems. The R system mainly focuses{{cn}} on single [[Multi-core processor|multi-core]] machines for data analysis, via an interactive mode such as a [[Graphical user interface|GUI]].
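In the batch style, a single R script is launched on all processors at once with an MPI launcher. A minimal sketch using pbdMPI (the file name and process count below are arbitrary):
<syntaxhighlight lang="r">
# hello.r -- every rank runs this same script (SPMD)
library(pbdMPI)
init()
msg <- sprintf("Hello from rank %d of %d", comm.rank(), comm.size())
comm.print(msg, all.rank = TRUE)   # print from every rank, in rank order
finalize()
</syntaxhighlight>
Such a script is run in batch mode as, e.g., <code>mpiexec -np 4 Rscript hello.r</code>.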
 
Two main R packages built on [[Message Passing Interface|MPI]] are Rmpi<ref name=rmpi/> and pbdR's pbdMPI:
* pbdR, built on pbdMPI, uses [[SPMD|SPMD parallelism]], where all processors are considered workers and each owns a part of the data. SPMD parallelism, introduced in the mid-1980s, is particularly efficient in homogeneous computing environments for large data, for example, performing [[Singular value decomposition|singular value decomposition]] on a large matrix or performing [[Mixture model|clustering analysis]] on high-dimensional large data. On the other hand, there is no restriction against using [[Master/slave (technology)|manager/worker parallelism]] within an SPMD environment.
* Rmpi<ref name=rmpi/> uses [[Master/slave (technology)|manager/worker parallelism]], where one main processor (manager) serves as the controller of all other processors (workers). This model, introduced around the early 2000s, is particularly efficient for large tasks on small [[Computer cluster|clusters]], for example, the [[Bootstrapping (statistics)|bootstrap method]] and [[Monte Carlo method|Monte Carlo simulation]] in applied statistics, since the [[Independent and identically distributed random variables|i.i.d.]] assumption is common in most [[Statistics|statistical analyses]]. In particular, task-pull parallelism gives Rmpi better performance in heterogeneous computing environments (see the sketch after this list).
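A hedged sketch of the manager/worker, task-pull style with Rmpi; the worker count, task count, and bootstrap statistic here are illustrative assumptions, not part of pbdR itself:
<syntaxhighlight lang="r">
library(Rmpi)
mpi.spawn.Rslaves(nslaves = 4)          # manager spawns 4 workers (illustrative)
x <- rnorm(200)                         # data held by the manager
# Each bootstrap replicate is an independent task; mpi.applyLB hands a new
# task to whichever worker finishes first (task pull / load balancing)
boot.means <- mpi.applyLB(1:1000,
                          function(i, data) mean(sample(data, replace = TRUE)),
                          data = x)
print(sd(unlist(boot.means)))           # bootstrap standard error of the mean
mpi.close.Rslaves()
mpi.quit()
</syntaxhighlight>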
The idea of [[SPMD|SPMD parallelism]] is to let every processor do the same work, but on a different part of a large data set. For example, a modern [[Graphics processing unit|GPU]] is a large collection of slower co-processors that simply apply the same computation on different parts of relatively smaller data, yet SPMD parallelism still ends up being an efficient way to obtain final solutions, i.e., the time to solution is shorter.<ref>{{cite web | url = http://graphics.stanford.edu/~mhouston/ | title = Folding@Home - GPGPU | author = Mike Houston | accessdate = 2007-10-04 }}</ref> Clearly, pbdR is not only suitable for small [[Computer cluster|clusters]], but is also stabler for analyzing [[Big data]] and more scalable for [[Supercomputer|supercomputers]].<ref>{{cite journal|author=Schmidt, D., Ostrouchov, G., Chen, W.-C., and Patel, P.|title=Tight Coupling of R and Distributed Linear Algebra for High-Level Programming with Big Data|year=2012|pages=811–815|journal=High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion|url=http://dl.acm.org/citation.cfm?id=2477156}}</ref>
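As a minimal sketch of this idea in pbdMPI (the chunk size and the statistic, a global mean, are illustrative assumptions): every rank runs the same code on its own chunk, and partial results are combined with a reduction.
<syntaxhighlight lang="r">
library(pbdMPI)
init()
set.seed(comm.rank())                 # each rank generates its own data chunk
x.local <- rnorm(1000)
# Same code on every rank, different data; allreduce combines partial sums
global.sum <- allreduce(sum(x.local), op = "sum")
global.n   <- allreduce(length(x.local), op = "sum")
comm.print(global.sum / global.n)     # global mean, printed from rank 0
finalize()
</syntaxhighlight>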
* [http://cran.r-project.org/web/packages/pbdBASE/vignettes/pbdBASE-guide.pdf pbdBASE] --- low-level [[ScaLAPACK]] codes and wrappers
* [http://cran.r-project.org/web/packages/pbdDMAT/vignettes/pbdDMAT-guide.pdf pbdDMAT] --- distributed matrix classes and computational methods, with a focus on linear algebra and statistics
* [http://cran.r-project.org/web/packages/pbdDEMO/vignettes/pbdDEMO-guide.pdf pbdDEMO] --- set of package demonstrations and examples, with a unifying vignette<ref name=pbdDEMO>{{cite web|author=Schmidt, D., Chen, W.-C., Patel, P., Ostrouchov, G.|year=2013|title=Speaking Serial R with a Parallel Accent|url=http://github.com/wrathematics/pbdDEMO/blob/master/inst/doc/pbdDEMO-guide.pdf?raw=true}}</ref>
* [http://cran.r-project.org/web/packages/pmclust/vignettes/pmclust-guide.pdf pmclust] --- parallel [[Mixture model|model-based clustering]] using pbdR
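For instance, a hedged sketch of distributed linear algebra with pbdDMAT (the matrix dimensions are illustrative; <code>ddmatrix</code> creates a block-cyclically distributed matrix):
<syntaxhighlight lang="r">
library(pbdDMAT)
init.grid()                            # set up the two-dimensional processor grid
# 1000 x 10 distributed matrix of random normals (block-cyclic layout)
x <- ddmatrix("rnorm", nrow = 1000, ncol = 10)
# Ordinary R syntax; the SVD is computed in parallel via ScaLAPACK
s <- svd(x)
comm.print(s$d)                        # singular values, printed from rank 0
finalize()
</syntaxhighlight>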