The idea of [[SPMD|SPMD parallelism]] is to let every processor do the same amount of work, but on different parts of a large data set. For example, a modern [[Graphics processing unit|GPU]] is a large collection of slower co-processors that apply the same computation to different parts of relatively small data, yet SPMD parallelism still yields an efficient way to obtain final solutions (i.e., a shorter time to solution).<ref>{{cite web | url = http://graphics.stanford.edu/~mhouston/ | title = Folding@Home - GPGPU | author = Mike Houston | accessdate = 2007-10-04 }}</ref> As a result, pbdR is not only suitable for small [[Computer cluster|clusters]], but is also more stable for analyzing [[big data]] and more scalable for [[supercomputer]]s.<ref>{{cite journal|author=Schmidt, D., Ostrouchov, G., Chen, W.-C., and Patel, P.|title=Tight Coupling of R and Distributed Linear Algebra for High-Level Programming with Big Data|year=2012|pages=811–815|journal=High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion:|url=http://dl.acm.org/citation.cfm?id=2477156}}</ref>{{third-party-inline|date=October 2014}} In short, pbdR
* is ''not'' like the Rmpi, snow, snowfall, do-style (e.g. doParallel), or parallel packages in R,
* does ''not'' focus on interactive computing or a master/workers model,
* but is able to use ''both'' SPMD and task parallelism.
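The SPMD style can be illustrated with a minimal sketch using the pbdMPI package of pbdR. The sketch below assumes pbdMPI and an MPI runtime are installed; the data, seed, and names are only illustrative. Every rank executes the same script on its own portion of the data, and a collective operation (<code>allreduce</code>) combines the local partial results:

<syntaxhighlight lang="r">
## Minimal SPMD sketch with pbdMPI (illustrative data and seed).
library(pbdMPI)
init()

## Every rank runs this same script on its own chunk of the data (SPMD).
my.rank <- comm.rank()        # rank of this process: 0, 1, ..., comm.size() - 1

## Each rank generates its own local piece of the data set.
set.seed(1234 + my.rank)
local.x <- rnorm(100000)

## Combine local partial sums into a global mean across all ranks.
global.sum <- allreduce(sum(local.x), op = "sum")
global.n   <- allreduce(length(local.x), op = "sum")
comm.print(global.sum / global.n)   # printed once, by rank 0 by default

finalize()
</syntaxhighlight>

Such a script is run in batch mode, for example with <code>mpiexec -np 4 Rscript spmd_mean.r</code> (the file name is illustrative), so that the same program is launched on every rank rather than being spawned from an interactive master session.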