Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as [[peer-to-peer]] or [[grid computing]] which also use many nodes, but with a far more [[distributed computing|distributed nature]].<ref name=nbis />
A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast [[supercomputer]]. A basic approach to building a cluster is that of a [[Beowulf (computing)|Beowulf]] cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional [[high-performance computing]].
Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The [[TOP500]] organization's semiannual list of the 500 fastest [[supercomputer]]s often includes many clusters, e.g. the world's fastest machine in 2011 was the [[K computer]] which has a [[distributed memory]], cluster architecture.<ref>{{cite conference|first=Mitsuo|last=Yokokawa |display-authors=etal |title=The K computer: Japanese next-generation supercomputer development project|conference=International Symposium on Low Power Electronics and Design (ISLPED)|date=1–3 August 2011|pages=371–372|doi=10.1109/ISLPED.2011.5993668}}</ref>
==History==
{{Main|History of computer clusters}}
{{See also|History of supercomputing}}
==Benefits==
<!-- This used to be a list. Work has been done since, but it's still incomplete. -->
Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (''the ability for a system to continue working with a malfunctioning node'') allows for [[horizontal scaling|scalability]] and, in high-performance situations, a low frequency of maintenance routines, resource consolidation (e.g. [[RAID]]), and centralized management.
Clusters provide scalability through their ability to add nodes horizontally: more computers may be added to the cluster to improve its performance, redundancy and fault tolerance. This can be an inexpensive alternative to scaling up a single node in the cluster, allowing larger computational loads to be executed by a larger number of lower-performing computers.
The Linux world supports various cluster software; for application clustering, there is [[distcc]] and [[MPICH]]. [[Linux Virtual Server]] and [[Linux-HA]] are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. [[MOSIX]], [[LinuxPMI]], [[Kerrighed]] and [[OpenSSI]] are full-blown clusters integrated into the [[kernel (computer science)|kernel]] that provide automatic process migration among homogeneous nodes. [[OpenSSI]], [[openMosix]] and [[Kerrighed]] are [[single-system image]] implementations.
[[Microsoft Windows]] Computer Cluster Server 2003, based on the [[Windows Server]] platform, provides pieces for high-performance computing, such as a job scheduler, the MS-MPI library and management tools.
[[gLite]] is a set of middleware technologies created by the [[Enabling Grids for E-sciencE]] (EGEE) project.
Although most computer clusters are permanent fixtures, attempts at [[flash mob computing]] have been made to build short-lived clusters for specific computations. However, larger-scale [[volunteer computing]] systems such as [[BOINC]]-based systems have attracted many more participants.
==See also==
{| cellspacing="5" cellpadding="5" border="0" width="60%"
|-
|}
==References==
{{Reflist|30em}}
==Further reading==
* {{cite arXiv|first=Mark|last=Baker |display-authors=etal |title=Cluster Computing White Paper|eprint=cs/0004014|date=11 Jan 2001}}
* {{cite book|first1=Evan|last1=Marcus|first2=Hal|last2=Stern|title=Blueprints for High Availability: Designing Resilient Distributed Systems|publisher=John Wiley & Sons|isbn=978-0-471-35601-1|date=2000-02-14|url=https://archive.org/details/blueprintsforhig00marc}}
* {{cite book|editor-first=Rajkumar|editor-last=Buyya|title=High Performance Cluster Computing: Architectures and Systems|volume=2|isbn=978-0-13-013785-2|publisher=Prentice Hall|___location=NJ, USA|year=1999}}
==External links==
{{Commons category|Clusters (computing)}}
* [https://web.archive.org/web/20190219183441/https://www.ieeetcsc.org/ IEEE Technical Committee on Scalable Computing (TCSC)]