Programming with Big Data in R

{{notability|date=June 2013}}
{{COI|date=June 2013}}
{{Expert needed|Computer science|date=June 2013}}
}}
 
{{Infobox programming language
| name = pbdR
| logo =
| paradigm = [[SPMD]] and [[MPMD]]
| website = {{URL|www.r-pbd.org}}
}}
'''Programming with Big Data in R''' (pbdR)<ref>{{cite web|author=Ostrouchov, G., Chen, W.-C., Schmidt, D., Patel, P.|title=Programming with Big Data in R|year=2012|url=http://r-pbd.org}}</ref> is a series of [[R (programming language)|R]] packages and an environment for [[statistical computing]] with [[big data]] by means of high-performance statistical computation.<ref>{{cite web|author1=Chen, W.-C. |author2=Ostrouchov, G. |name-list-style=amp|url=http://thirteen-01.stat.iastate.edu/snoweye/hpsc/|year=2011|title=HPSC -- High Performance Statistical Computing for Data Intensive Research|access-date=2013-06-25|archive-url=https://web.archive.org/web/20130719020318/http://thirteen-01.stat.iastate.edu/snoweye/hpsc/|archive-date=2013-07-19|url-status=dead}}</ref><ref>{{cite web|url=https://learnshareit.com/tutorials-for-r/|title=Basic Tutorials for R to Start Analyzing Data|date=3 November 2022 }}</ref> pbdR uses the same programming language as R, with [[S (programming language)|S3/S4]] classes and methods, which is used among [[statistician]]s and [[Data mining|data miners]] for developing [[statistical software]]. The significant difference between pbdR and R code is that pbdR mainly focuses on [[distributed memory]] systems, where data are distributed across several processors and analyzed in [[Batch processing|batch mode]], with communication between processors based on [[Message Passing Interface|MPI]], which is easily used on large [[High-performance computing|high-performance computing (HPC)]] systems. The R system mainly focuses{{Citation needed|date=July 2013}} on single [[Multi-core processor|multi-core]] machines for data analysis via an interactive mode such as a [[graphical user interface]].
 
Two main implementations in [[R (programming language)|R]] using [[Message Passing Interface|MPI]] are Rmpi<ref name=rmpi>{{cite journal|author=Yu, H.|title=Rmpi: Parallel Statistical Computing in R|year=2002|url=https://cran.r-project.org/package=Rmpi|journal=R News}}</ref> and pbdMPI of pbdR.
* pbdR, built on pbdMPI, uses [[SPMD|SPMD parallelism]], where every processor is considered a worker and owns a part of the data. [[SPMD|SPMD parallelism]], introduced in the mid-1980s, is particularly efficient in homogeneous computing environments for large data, for example, performing [[singular value decomposition]] on a large matrix, or performing [[Mixture model|clustering analysis]] on high-dimensional large data. On the other hand, there is no restriction against using [[Master/slave (technology)|manager/workers parallelism]] in an SPMD environment.
* Rmpi<ref name=rmpi/> uses [[Master/slave (technology)|manager/workers parallelism]], where one main processor (manager) serves as the controller of all other processors (workers). [[Master/slave (technology)|Manager/workers parallelism]], introduced around the early 2000s, is particularly efficient for large tasks in small [[Computer cluster|clusters]], for example, the [[Bootstrapping (statistics)|bootstrap method]] and [[Monte Carlo method|Monte Carlo simulation]] in applied statistics, since the [[Independent and identically distributed random variables|i.i.d.]] assumption is commonly used in most [[Statistics|statistical analyses]]. In particular, task-pull parallelism gives Rmpi better performance in heterogeneous computing environments.
The idea of [[SPMD|SPMD parallelism]] is to let every processor do the same amount of work, but on different parts of a large data set; a minimal sketch is given after the list below. For example, a modern [[Graphics processing unit|GPU]] is a large collection of slower co-processors that simply apply the same computation to different parts of relatively smaller data, yet SPMD parallelism still yields an efficient way to obtain final solutions (i.e., the time to solution is shorter).<ref>{{cite web | url = http://graphics.stanford.edu/~mhouston/ | title = Folding@Home - GPGPU | author = Mike Houston | access-date = 2007-10-04 }}</ref> It is clear that pbdR is not only suitable for small [[Computer cluster|clusters]], but is also more stable for analyzing [[big data]] and more scalable for [[supercomputer]]s.<ref>{{cite journal|author=Schmidt, D., Ostrouchov, G., Chen, W.-C., and Patel, P.|title=Tight Coupling of R and Distributed Linear Algebra for High-Level Programming with Big Data|year=2012|pages=811–815|journal=High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion:|url=http://dl.acm.org/citation.cfm?id=2477156}}</ref>{{third-party-inline|date=October 2014}} In short, pbdR
* is ''not'' like Rmpi, {{clarify|text=snow, snowfall, do-like,|date=October 2014}} or the parallel packages in R,
* does ''not'' focus on interactive computing or the manager/workers model,
* but is able to use ''both'' SPMD and task parallelism.
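The following is a minimal SPMD sketch using pbdMPI (an illustrative example, not taken from the pbdR packages; the variable names and data sizes are assumptions): every rank runs the same script on its own chunk of the data, and the partial results are combined with <code>allreduce()</code>.
<syntaxhighlight lang="r">
### Illustrative SPMD sketch (assumed example): every rank runs this same
### script and works only on its own local piece of the data.
library(pbdMPI, quiet = TRUE)
init()

### Each rank generates its own local chunk
n.local <- 1000
x.local <- rnorm(n.local, mean = 1)

### Combine local sums and counts across all ranks to get the global mean
sum.global <- allreduce(sum(x.local), op = "sum")
n.global <- allreduce(n.local, op = "sum")
comm.print(sum.global / n.global)

finalize()
</syntaxhighlight>
Such a script is saved to a file and run in batch, for example with <code>mpiexec -np 4 Rscript sketch.r</code>, as in the examples below.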
 
== Package design ==
* pbdDMAT --- distributed matrix classes and computational methods, with a focus on linear algebra and statistics
* pbdDEMO --- a set of package demonstrations and examples, with a unifying vignette
* pmclust --- parallel [[Mixture model|model-based clustering]] using pbdR
* pbdPROF --- profiling package for MPI codes and visualization of parsed stats
* pbdZMQ --- interface to [[ZeroMQ|ØMQ]]
== Examples ==
=== Example 1 ===
Hello World! Save the following code in a file called "demo.r"
<sourcesyntaxhighlight lang="rsplusr">
### Initial MPI
library(pbdMPI, quiet = TRUE)
init()

### Print a greeting from every MPI rank (an assumed minimal body; pbdMPI's
### comm.cat() behaves like cat() but is communicator-aware)
comm.cat("Hello World from rank", comm.rank(), "of", comm.size(), "\n", all.rank = TRUE)
### Finish
finalize()
</syntaxhighlight>
and use the command
<sourcesyntaxhighlight lang="bash">
mpiexec -np 2 Rscript demo.r
</syntaxhighlight>
to execute the code, where [[R (programming language)|Rscript]] is one of R's command line executables.
 
=== Example 2 ===
The following example, modified from pbdMPI, illustrates the basic [[programming language syntax|syntax]] of pbdR.
Since pbdR is designed in [[SPMD]], all the R scripts are stored in files and executed from the command line via mpiexec, mpirun, etc. Save the following code in a file called "demo.r"
<sourcesyntaxhighlight lang="rsplusr">
### Initial MPI
library(pbdMPI, quiet = TRUE)
init()

### Assumed minimal body: each rank contributes a different value, and
### allreduce()/allgather() combine the values across all ranks
x <- comm.rank() + 1
comm.print(allreduce(x, op = "sum"))
comm.print(allgather(x))
### Finish
finalize()
</syntaxhighlight>
and use the command
<sourcesyntaxhighlight lang="bash">
mpiexec -np 4 Rscript demo.r
</syntaxhighlight>
to execute the code, where [[R (programming language)|Rscript]] is one of R's command line executables.
 
=== Example 3 ===
The following example, modified from pbdDEMO, illustrates the basic ddmatrix computation of pbdR, performing a [[singular value decomposition]] on a given matrix.
Save the following code in a file called "demo.r"
<sourcesyntaxhighlight lang="rsplusr">
# Initialize process grid
library(pbdDMAT, quiet=T)
init.grid()

### Assumed minimal body: generate a random distributed matrix and compute
### its singular values
comm.set.seed(seed = 1234, diff = TRUE)
dx <- ddmatrix(data = "rnorm", nrow = 250, ncol = 50)
comm.print(svd(dx)$d)
# Finish
finalize()
</syntaxhighlight>
and use the command
<sourcesyntaxhighlight lang="bash">
mpiexec -np 2 Rscript demo.r
</syntaxhighlight>
to execute the code, where [[R (programming language)|Rscript]] is one of R's command line executables.
 
== Further reading ==
* {{cite tech report|author=Raim, A.M.|year=2013|title=Introduction to distributed computing with pbdR at the UMBC High Performance Computing Facility|institution=UMBC High Performance Computing Facility, University of Maryland, Baltimore County|number=HPCF-2013-2|url=http://userpages.umbc.edu/~gobbert/papers/pbdRtara2013.pdf|access-date=2013-06-26|archive-url=https://web.archive.org/web/20140204051402/http://userpages.umbc.edu/~gobbert/papers/pbdRtara2013.pdf|archive-date=2014-02-04|url-status=dead}}
* {{cite tech report|author=Bachmann, M.G., Dyas, A.D., Kilmer, S.C. and Sass, J.|year=2013|title=Block Cyclic Distribution of Data in pbdR and its Effects on Computational Efficiency|institution=UMBC High Performance Computing Facility, University of Maryland, Baltimore County|number=HPCF-2013-11|url=http://userpages.umbc.edu/~gobbert/papers/REU2013Team1.pdf|access-date=2014-02-01|archive-url=https://web.archive.org/web/20140204051351/http://userpages.umbc.edu/~gobbert/papers/REU2013Team1.pdf|archive-date=2014-02-04|url-status=dead}}
* {{cite tech report|author=Bailey, W.J., Chambless, C.A., Cho, B.M. and Smith, J.D.|year=2013|title=Identifying Nonlinear Correlations in High Dimensional Data with Application to Protein Molecular Dynamics Simulations|institution=UMBC High Performance Computing Facility, University of Maryland, Baltimore County|number=HPCF-2013-12|url=http://userpages.umbc.edu/~gobbert/papers/REU2013Team2.pdf|access-date=2014-02-01|archive-url=https://web.archive.org/web/20140204055902/http://userpages.umbc.edu/~gobbert/papers/REU2013Team2.pdf|archive-date=2014-02-04|url-status=dead}}
* {{cite web|title=High-Performance and Parallel Computing with R|author=Dirk Eddelbuettel|author-link=Dirk Eddelbuettel|date=13 November 2022|url=https://cran.r-project.org/web/views/HighPerformanceComputing.html}}
* {{cite news|title=R at 12,000 Cores|url=http://www.r-bloggers.com/r-at-12000-cores/}}<br />This article was read 22,584 times in 2012 after being posted on October 16, 2012, and was ranked number 3 among the most-read R posts of that year.<ref>{{cite news|url=http://www.r-bloggers.com/100-most-read-r-posts-for-2012-stats-from-r-bloggers-big-data-visualization-data-manipulation-and-other-languages/|title=100 most read R posts in 2012 (stats from R-bloggers) – big data, visualization, data manipulation, and other languages}}</ref>
* {{cite web|url=http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2013:mpiprofiler|archive-url=https://archive.today/20130629095333/http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2013:mpiprofiler|url-status=dead|archive-date=2013-06-29|title=Profiling Tools for Parallel Computing with R|author=Google Summer of Code - R 2013}}
* {{cite web|url=http://rpubs.com/wush978/pbdMPI-linux-pilot|title=在雲端運算環境使用R和MPI|trans-title=Using R and MPI in a cloud computing environment|language=zh|author=Wush Wu|year=2014}}
* {{cite web|url=https://www.youtube.com/watch?v=m1vtPESsFqM|title=快速在AWS建立R和pbdMPI的使用環境|trans-title=Quickly setting up an R and pbdMPI environment on AWS|language=zh|author=Wush Wu|year=2013|website=[[YouTube]]}}
 
== References ==
{{Reflist}}
[[Category:Functional languages]]
[[Category:Numerical analysis software for Linux]]
[[Category:Numerical analysis software for macOS]]
[[Category:Numerical analysis software for Windows]]
[[Category:Parallel computing]]