{{Short description|Class of parallel computing applications}}
'''Data-intensive computing''' is a class of [[parallel computing]] applications which use a [[data parallel]] approach to process large volumes of data, typically [[terabyte]]s or [[petabyte]]s in size and commonly referred to as [[big data]].
== Introduction ==
The rapid growth of the [[Internet]] and [[World Wide Web]] led to vast amounts of information available online. In addition, business and government organizations create large amounts of both structured and unstructured information which needs to be processed, analyzed, and linked.
[[Parallel computing|Parallel processing]] approaches can be generally classified as either ''compute-intensive'' or ''data-intensive''.<ref>[http://portal.acm.org/citation.cfm?id=280278 Models and languages for parallel computation], by D.B. Skillicorn, and D. Talia, ACM Computing Surveys, Vol. 30, No. 2, 1998, pp. 123-169.</ref>
''Data-intensive'' is used to describe applications that are I/O bound or that need to process large volumes of data.<ref>[https://computation.llnl.gov/casc/dcca-pub/dcca/Papers_files/data-intensive-ieee-computer-0408.pdf IEEE: Hardware Technologies for High-Performance Data-Intensive Computing], by M. Gokhale, J. Cohen, A. Yoo, and W.M. Miller, IEEE Computer, Vol. 41, No. 4, 2008, pp. 60-68.</ref> Such applications devote most of their processing time to I/O and to the movement and manipulation of data. [[Parallel computing|Parallel processing]] of data-intensive applications typically involves partitioning or subdividing the data into multiple segments which can be processed independently using the same executable application program in parallel on an appropriate computing platform, then reassembling the results to produce the completed output data.<ref>[http://www.agoldberg.org/Publications/DesignMethForDP.pdf IEEE: A Design Methodology for Data-Parallel Applications] {{Webarchive|url=https://web.archive.org/web/20110724225852/http://www.agoldberg.org/Publications/DesignMethForDP.pdf |date=2011-07-24 }}, by L.S. Nyland, J.F. Prins, A. Goldberg, and P.H. Mills, IEEE Transactions on Software Engineering, Vol. 26, No. 4, 2000, pp. 293-314.</ref> The greater the aggregate distribution of the data, the more benefit there is in parallel processing of the data. Data-intensive processing requirements normally scale linearly according to the size of the data and are very amenable to straightforward parallelization. The fundamental challenges for data-intensive computing are managing and processing exponentially growing data volumes, significantly reducing associated data analysis cycles to support practical, timely applications, and developing new algorithms which can scale to search and process massive amounts of data. Researchers coined the term BORPS, "billions of records per second", to describe record processing speed in a way analogous to how the term MIPS describes computers' processing speed.
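As an illustration of this partition-process-reassemble pattern, the following is a minimal, hypothetical single-machine sketch in Java: the input records are split into segments, each segment is processed independently in parallel by the same function, and the partial results are merged into the final output. The segment count and the record-counting work function are assumptions chosen for illustration, not part of any particular platform.

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DataParallelSketch {
    // Illustrative work function: count the non-empty records in one segment.
    static long process(List<String> segment) {
        return segment.stream().filter(r -> !r.isEmpty()).count();
    }

    public static void main(String[] args) throws Exception {
        List<String> records = List.of("a", "", "b", "c", "", "d");
        int segments = 3;                                  // assumed degree of parallelism
        int size = (records.size() + segments - 1) / segments;

        ExecutorService pool = Executors.newFixedThreadPool(segments);
        List<Future<Long>> partials = new ArrayList<>();
        // Partition the data and process each segment independently in parallel.
        for (int i = 0; i < records.size(); i += size) {
            List<String> segment = records.subList(i, Math.min(i + size, records.size()));
            partials.add(pool.submit(() -> process(segment)));
        }
        // Reassemble the partial results into the completed output.
        long total = 0;
        for (Future<Long> f : partials) total += f.get();
        pool.shutdown();
        System.out.println("non-empty records: " + total);
    }
}
</syntaxhighlight>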
== Data-parallelism ==
Computer system architectures which can support [[data parallel]] applications were promoted in the early 2000s for the large-scale data processing requirements of data-intensive computing. Data-parallelism applies computation independently to each data item of a set of data, which allows the degree of parallelism to be scaled with the volume of data.
The US [[National Science Foundation]] (NSF) funded a research program from 2009 through 2010.<ref>{{Cite web |title= Data-intensive Computing |work= Program description |year= 2009 |publisher= NSF |url= https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503324&org=IIS |accessdate=24 April 2017 }}</ref> Areas of focus were:
* Approaches to [[parallel programming]] to address the [[Parallel computing|parallel processing]] of data on data-intensive systems
* Programming abstractions including models, languages, and [[algorithms]] which allow a natural expression of parallel processing of data
* Design of data-intensive computing platforms to provide high levels of reliability, efficiency, availability, and scalability
* Identifying applications that can exploit this computing paradigm and determining how it should evolve to support emerging data-intensive applications
[[Pacific Northwest National Labs]] defined data-intensive computing as "capturing, managing, analyzing, and understanding data at volumes and rates that push the frontiers of current technologies".
== Characteristics ==
Several common characteristics of data-intensive computing systems distinguish them from other forms of computing:
(1) The principle of co-location of the data and the programs or algorithms used to perform the computation. To achieve high performance in data-intensive computing, it is important to minimize the movement of data.<ref>[http://queue.acm.org/detail.cfm?id=1394131 Distributed Computing Economics] by J. Gray, ACM Queue, Vol. 6, No. 3, 2008, pp. 63-68.</ref> This characteristic allows processing algorithms to execute on the nodes where the data resides, reducing system overhead and increasing performance.<ref>[http://www.pnl.gov/science/images/highlights/computing/dic_special.pdf Data-Intensive Computing in the 21st Century], by I. Gorton, P. Greenfield, A. Szalay, and R. Williams, IEEE Computer, Vol. 41, No. 4, 2008, pp. 30-32.</ref> Newer technologies such as [[InfiniBand]] allow data to be stored in a separate repository and provide performance comparable to collocated data.
(2) The programming model utilized. Data-intensive computing systems utilize a machine-independent approach in which applications are expressed in terms of high-level operations on data, and the runtime system transparently controls the scheduling, execution, load balancing, communications, and movement of programs and data across the distributed computing cluster (loosely illustrated in the sketch after this list).<ref>[http://www.cs.cmu.edu/~bryant/presentations/DISC-concept.ppt Data Intensive Scalable Computing] by R.E. Bryant, 2008.</ref> The programming abstraction and language tools allow the processing to be expressed in terms of data flows and transformations, incorporating new dataflow [[programming languages]] and shared libraries of common data manipulation algorithms such as sorting.
(3) A focus on reliability and availability. Large-scale systems with hundreds or thousands of processing nodes are inherently more susceptible to hardware failures, communications errors, and software bugs. Data-intensive computing systems are designed to be fault resilient. This typically includes redundant copies of all data files on disk, storage of intermediate processing results on disk, automatic detection of node or processing failures, and selective re-computation of results.
(4) The inherent scalability of the underlying hardware and [[software architecture]]. Data-intensive computing systems can typically be scaled in a linear fashion to accommodate virtually any amount of data, or to meet time-critical performance requirements by simply adding additional processing nodes. The number of nodes and processing tasks assigned for a specific application can be variable or fixed depending on the hardware, software, communications, and [[distributed file system]] architecture.
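Characteristic (2) above can be loosely illustrated on a single machine with Java's parallel streams: the application states only the high-level operations on the data, and the runtime decides how the work is scheduled and balanced across threads, much as a data-intensive runtime controls execution across a cluster. This is an analogy for the programming model only, not a distributed implementation, and the sample data is invented.

<syntaxhighlight lang="java">
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class HighLevelOperations {
    public static void main(String[] args) {
        // The application is expressed as high-level operations on data;
        // the Java runtime transparently splits the work across threads,
        // analogous to how a data-intensive runtime controls scheduling
        // and data movement across a cluster (a loose analogy only).
        Map<Integer, Long> lengthHistogram =
            Stream.of("alpha", "beta", "gamma", "delta")
                  .parallel()                              // runtime-managed parallelism
                  .collect(Collectors.groupingBy(
                      String::length,
                      Collectors.counting()));
        System.out.println(lengthHistogram);               // prints {4=1, 5=3}
    }
}
</syntaxhighlight>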
== System architectures ==
A variety of [[system]] architectures have been implemented for data-intensive computing and large-scale data analysis applications including parallel and distributed [[relational database management systems]] which have been available to run on shared nothing clusters of processing nodes for more than two decades.<ref>[http://www.cse.nd.edu/~dthain/courses/cse40771/spring2010/benchmarks-sigmod09.pdf A Comparison of Approaches to Large-Scale Data Analysis] by A. Pavlo, E. Paulson, A. Rasin, D.J. Abadi, D.J. Dewitt, S. Madden, and M. Stonebraker. Proceedings of the 35th SIGMOD International conference on Management of Data, 2009.</ref>
However, most data growth is with data in unstructured form, and new processing paradigms with more flexible data models were needed. Several solutions have emerged, including the [[MapReduce]] architecture pioneered by Google and now available in an open-source implementation called [[Hadoop]] used by [[Yahoo]], [[Facebook]], and others, and the HPCC system architecture developed and implemented by [[LexisNexis|LexisNexis Risk Solutions]].
===MapReduce===
The [[MapReduce]] architecture and programming model pioneered by [[Google]] is an example of a modern [[systems architecture]] designed for data-intensive computing.<ref>[http://labs.google.com/papers/mapreduce-osdi04.pdf MapReduce: Simplified Data Processing on Large Clusters] {{Webarchive|url=https://web.archive.org/web/20091223010101/http://labs.google.com/papers/mapreduce-osdi04.pdf |date=2009-12-23 }} by J. Dean, and S. Ghemawat. Proceedings of the Sixth Symposium on Operating System Design and Implementation (OSDI), 2004.</ref> The MapReduce architecture allows programmers to use a [[functional programming]] style to create a map function that processes a [[key-value pair]] associated with the input data to generate a set of intermediate [[key-value pair]]s, and a reduce function that merges all intermediate values associated with the same intermediate key. Since the system automatically takes care of details like partitioning the input data, scheduling and executing tasks across a processing cluster, and managing the communications between nodes, programmers with no experience in parallel programming can easily use a large distributed processing environment.
The programming model for the [[MapReduce]] architecture is a simple abstraction where the computation takes a set of input key-value pairs associated with the input data and produces a set of output key-value pairs. In the map phase, the input data is partitioned into input splits and assigned to map tasks, which typically execute on the nodes holding their assigned partitions and emit intermediate key-value pairs; a shuffle and sort phase then groups the intermediate data by key, and reduce tasks merge the values associated with each key to produce the output data.
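The following is a minimal, single-process Java sketch of this contract, assuming a word-count computation: the map function turns each input record into intermediate key-value pairs, the framework's shuffle step (simulated here by a grouping loop) collects all values for each key, and the reduce function merges them into the output pairs. It is an illustration of the abstraction only, not a distributed implementation.

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MiniMapReduce {
    // map: one input record -> a set of intermediate (key, value) pairs
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split("\\s+"))
            if (!word.isEmpty()) out.add(Map.entry(word, 1));
        return out;
    }

    // reduce: merge all intermediate values that share one key
    static int reduce(String key, List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<String> input = List.of("big data big clusters", "data flows");

        // Shuffle/sort: group intermediate pairs by key. In a real system
        // the framework does this, along with partitioning the input and
        // moving data between nodes.
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (String line : input)
            for (Map.Entry<String, Integer> kv : map(line))
                groups.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                      .add(kv.getValue());

        // Emit one output pair per key, e.g. "big -> 2", "data -> 2".
        groups.forEach((k, vs) -> System.out.println(k + " -> " + reduce(k, vs)));
    }
}
</syntaxhighlight>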
===Hadoop===
[[Apache Hadoop]] is an open-source software project sponsored by the [[Apache Software Foundation]] which implements the MapReduce architecture. Hadoop now encompasses multiple subprojects in addition to the base core, MapReduce, and the HDFS distributed filesystem. These additional subprojects provide enhanced application processing capabilities to the base Hadoop implementation and currently include Avro, [[Pig_(programming_language)|Pig]], [[HBase]], [[Apache ZooKeeper|ZooKeeper]], [[Apache Hive|Hive]], and Chukwa. The Hadoop MapReduce architecture is functionally similar to the Google implementation except that the base programming language for Hadoop is [[Java (programming language)|Java]] instead of [[C++]]. The implementation is intended to execute on clusters of commodity processors.
Hadoop implements a distributed data processing scheduling and execution environment and framework for MapReduce jobs. Hadoop includes a distributed file system called HDFS which is analogous to [[Google File System|GFS]] in the Google MapReduce implementation. The Hadoop execution environment supports additional distributed data processing capabilities which are designed to run using the Hadoop MapReduce architecture. These include [[HBase]], a distributed column-oriented database which provides random access read/write capabilities; Hive, which is a [[data warehouse]] system built on top of Hadoop that provides [[SQL]]-like query capabilities for data summarization, ad hoc queries, and analysis of large datasets; and [[Pig_(programming_language)|Pig]], a high-level data-flow programming language and execution framework for data-intensive computing.
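For a concrete flavor of the Hadoop MapReduce Java API, here is a condensed variant of the well-known WordCount example from the Hadoop tutorials; the class names and the input/output paths taken from the command line are conventional choices for illustration rather than anything mandated by the framework.

<syntaxhighlight lang="java">
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every token in this task's input split.
            for (String token : value.toString().split("\\s+")) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Merge all intermediate counts for one word.
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);          // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
</syntaxhighlight>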
[[Pig_(programming_language)|Pig]] was developed at Yahoo! to provide a specific language notation for data analysis applications, and to improve programmer productivity and reduce development cycles when using the Hadoop MapReduce environment. Pig programs are automatically translated into sequences of MapReduce programs if needed in the execution environment. Pig provides capabilities in the language for loading, storing, filtering, grouping, de-duplication, ordering, sorting, aggregation, and joining operations on the data.<ref>[http://i.stanford.edu/~usriv/talks/sigmod08-pig-latin.ppt Pig Latin: A Not-So-Foreign Language for Data Processing] {{Webarchive|url=https://web.archive.org/web/20110720045445/http://i.stanford.edu/~usriv/talks/sigmod08-pig-latin.ppt |date=2011-07-20 }} by C. Olston, B. Reed, U. Srivastava, R. Kumar, and A. Tomkins (presentation at SIGMOD 2008).</ref>
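The classes of operations Pig offers (load, filter, group, aggregate, order) can be sketched, very loosely, as a Java Streams dataflow. This is only an analogy for the dataflow style of such languages, not Pig Latin syntax or Pig's execution model, and the click-record layout used here is invented.

<syntaxhighlight lang="java">
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PigStyleDataflow {
    record Click(String user, String page) {}   // assumed record layout

    public static void main(String[] args) {
        // LOAD: in Pig this would read records from the distributed filesystem.
        List<Click> clicks = List.of(
            new Click("ann", "/home"), new Click("bob", "/home"),
            new Click("ann", "/search"), new Click("ann", "/home"));

        // FILTER ... BY, GROUP ... BY, and an aggregate (COUNT),
        // expressed as one dataflow over the records.
        Map<String, Long> visitsPerUser = clicks.stream()
            .filter(c -> c.page().equals("/home"))
            .collect(Collectors.groupingBy(Click::user, Collectors.counting()));

        // ORDER BY the aggregate, descending, then emit the results.
        visitsPerUser.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
            .forEach(e -> System.out.println(e.getKey() + "\t" + e.getValue()));
    }
}
</syntaxhighlight>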
===HPCC===
[[HPCC]] (High-Performance Computing Cluster) was developed and implemented by [[LexisNexis|LexisNexis Risk Solutions]]. The HPCC platform uses commodity clusters of hardware running the [[Linux]] operating system, with custom system software and middleware components layered on the base operating system to provide the execution environment and distributed filesystem support required for data-intensive computing.
The [[ECL (data-centric programming language)|ECL]] programming language is a high-level, declarative, data-centric, [[Implicit parallelism|implicitly parallel]] language that allows the programmer to define what the data processing result should be and the dataflows and transformations needed to achieve that result. ECL programs are compiled into optimized [[C++]] source code, which is subsequently compiled into executable code and distributed to the nodes of a processing cluster.
To address both batch and online aspects of data-intensive computing applications, HPCC includes two distinct cluster environments, each of which can be optimized independently for its parallel data processing purpose: Thor, a data refinery cluster for processing massive volumes of raw data for applications such as data cleansing, [[Extract, transform, load|ETL]], record linking, and large-scale ad hoc analysis, and for building keyed data and indexes; and Roxie, which provides an online high-performance structured query and analysis platform, delivering results to online applications through Web services interfaces.
== See also ==
* [[Implicit parallelism]]
* [[Massively parallel]]
* [[Supercomputer]]
* [[Graph500]]
== References ==
{{Reflist|2}}
[[Category:Parallel computing]]