History of supercomputing

{{Short description|none}}
{{use American English|date=December 2022}}
[[Image:Cray-1-deutsches-museum.jpg|thumb|A [[Cray-1]] supercomputer preserved at the [[Deutsches Museum]]]]
By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through the [[FLOPS|teraFLOPS]] computational barrier.
 
Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching [[FLOPS|petaFLOPS]] performance levels.
 
==Beginnings: 1950s and 1960s==
{{see also|Vector processor#History|l1=Vector processor history}}
The term "Super Computing" was first used in the ''[[New York World]]'' in 1929<ref>{{cite book |last1=Eames |first1=Charles |last2=Eames |first2=Ray |title=A Computer Perspective |year=1973 |publisher=Harvard University Press |___location=Cambridge, Mass. |pages=95}}. Page 95 identifies the article as {{cite news |title=Super Computing Machines Shown |publisher=New York World |date=March 1, 1920}}. However, the article shown on page 95 references the Statistical Bureau in Hamilton Hall, and an article at the Columbia Computing History web site states that such a bureau did not exist until 1929. See [http://www.columbia.edu/acis/history/packard.html The Columbia Difference Tabulator - 1931].</ref> to refer to large custom-built [[Tabulating machine|tabulator]]s that [[IBM]] had made for [[Columbia University]].<ref>{{cite web |url=http://www.columbia.edu/cu/computinghistory/statlab-clipping.jpg |title=Super Computing Machines Shown |work=New York World |year=1920 |access-date=26 February 2024}}</ref>
 
There were several lines of second-generation computers that were substantially faster than most contemporary mainframes. These included:
* [[Atlas (computer)|Atlas]]
* [[UNIVAC LARC]]
* [[IBM 7030]]
* [[IBM System/360 Model 91|IBM 360/91]]
* IBM 360/95
* [[CDC 6600]]
The second generation saw the introduction of features intended to support [[multiprogramming]] and [[multiprocessor]] configurations, including master/slave (supervisor/problem) mode, storage protection keys, limit registers, protection associated with address translation, and [[atomic instruction]]s.
 
In 1957, a group of engineers left [[Sperry Corporation]] to form [[Control Data Corporation]] (CDC) in [[Minneapolis]], Minnesota. [[Seymour Cray]] left Sperry a year later to join his colleagues at CDC.<ref name=chen >{{cite book | title = Hardware software co-design of a multimedia SOC platform | first1 = Sao-Jie | last1 = Chen | first2 = Guang-Huei | last2 = Lin | first3 = Pao-Ann | last3 = Hsiung | first4 = Yu-Hen | last4 = Hu | year = 2009 | isbn = 9781402096235 | pages = 70–72 | url = https://books.google.com/books?id=OXyo3om9ZOkC&pg=PA70 | publisher = [[Springer Science+Business Media]] | access-date = 20 February 2018}}</ref> In 1960, Cray completed the [[CDC 1604]], one of the first generation of commercially successful [[Transistor computer|transistorized]] computers, and at the time of its release the fastest computer in the world.<ref name=Hannan >{{cite book | title = Wisconsin Biographical Dictionary | first = Caryn | last = Hannan | year = 2008 | isbn = 978-1-878592-63-7 | pages = 83–84 | publisher = State History Publications | url = https://books.google.com/books?id=V08bjkJeXkAC&pg=PA83 | access-date = 20 February 2018}}</ref> However, the fully transistorized [[Harwell CADET]] was operational in 1955, and IBM delivered its commercially successful transistorized [[IBM 7090]] in 1959.
 
[[File:University of Manchester Atlas, January 1963.JPG|thumb|The University of Manchester [[Atlas (computer)|Atlas]] in January 1963.]]
In 1956, a team at [[Manchester University]] in the United Kingdom began development of [[Manchester_computers#Muse_and_Atlas|MUSE]]{{nowrap|{{px2}}{{mdash}}{{px2}}}}a name derived from [[microsecond]] {{nowrap|engine{{px2}}{{mdash}}{{px2}}}}with the aim of eventually building a computer that could operate at processing speeds approaching one&nbsp;microsecond per instruction, about one&nbsp;million [[instructions per second]].<ref>{{cite web |title=The Atlas |url=http://www.computer50.org/kgill/atlas/atlas.html |publisher=University of Manchester |access-date=21 September 2010 |url-status=dead |archive-url=https://web.archive.org/web/20120728105352/http://www.computer50.org/kgill/atlas/atlas.html |archive-date=28 July 2012 }}</ref> ''Mu'' (the name of the Greek letter ''μ'') is a prefix in the SI and other systems of units denoting a factor of 10<sup>−6</sup> (one millionth).
 
At the end of 1958, [[Ferranti#Computers|Ferranti]] agreed to collaborate with Manchester University on the project, and the computer was shortly afterwards renamed [[Atlas (computer)|Atlas]], with the joint venture under the control of [[Tom Kilburn]]. The first Atlas was officially commissioned on 7&nbsp;December {{nowrap|1962{{px2}}{{mdash}}{{px2}}}}nearly three years before the CDC 6600 supercomputer was {{nowrap|introduced{{px2}}{{mdash}}{{px2}}}}as one of the world's first [[supercomputer]]s. It was considered at the time of its commissioning to be the most powerful computer in the world, equivalent to four [[IBM 7094]]s. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost.<ref name=Lavington>{{cite book |last=Lavington |first=Simon Hugh |title=A History of Manchester Computers |year=1998 |edition=2 |publisher=The British Computer Society |___location=Swindon |isbn=978-1-902505-01-5 |pages=41–52 |url=https://books.google.com/books?id=rVnxAAAAMAAJ}}</ref> The Atlas pioneered [[virtual memory]] and [[paging]] as a way to extend its working memory by combining its 16,384 words of primary [[magnetic-core memory|core memory]] with an additional 96K words of secondary [[drum memory]].<ref>{{citation | first = R. J. | last = Creasy | url = http://pages.cs.wisc.edu/~stjones/proj/vm_reading/ibmrd2505M.pdf | title = The Origin of the VM/370 Time-Sharing System | work = IBM Journal of Research & Development | volume = 25 | number = 5 | date = September 1981 | page = 486 }}</ref> Atlas also pioneered the [[Atlas Supervisor]], "considered by many to be the first recognizable modern [[operating system]]".<ref name=Lavington />
==The Cray era: mid-1970s and 1980s==
[[File:Cray2.jpeg|thumb|A [[Fluorinert]]-cooled [[Cray-2]] supercomputer]]
Four years after leaving CDC, Cray delivered the 80&nbsp;MHz [[Cray-1]] in 1976, and it became the most successful supercomputer in history.<ref name=Hill41>{{cite book | title = Readings in computer architecture | first1 = Mark Donald | last1 = Hill |author-link2=Norman Jouppi | first2 = Norman Paul | last2 = Jouppi | first3 = Gurindar | last3 = Sohi | year = 1999 | isbn = 978-1-55860-539-8 | pages = 41–48| publisher = Gulf Professional }}</ref><ref name=Edwin65 /> The Cray-1, which used integrated circuits with two gates per chip, was a [[vector processor]]. It introduced a number of innovations, such as [[chaining (vector processing)|chaining]], in which scalar and vector registers generate interim results that can be used immediately, without additional memory references which would otherwise reduce computational speed.<ref name="The Supermen 1997"/><ref name=Tokhi >{{cite book | title = Parallel computing for real-time signal processing and control | url = https://archive.org/details/parallelcomputin00phdm | url-access = limited | first1 = M. O. | last1 = Tokhi | first2 = Mohammad Alamgir | last2 = Hossain | year = 2003 | isbn = 978-1-85233-599-1 | pages = [https://archive.org/details/parallelcomputin00phdm/page/n209 201]-202| publisher = Springer }}</ref> The [[Cray X-MP]] (designed by [[Steve Chen (computer engineer)|Steve Chen]]) was released in 1982 as a 105&nbsp;MHz shared-memory [[Parallel computing|parallel]] [[vector processor]] with better chaining support and multiple memory pipelines. 
All three floating-point pipelines on the X-MP could operate simultaneously.<ref name=Tokhi /> By 1983, Cray and Control Data were the supercomputer leaders; despite its lead in the overall computer market, IBM was unable to produce a profitable competitor.<ref name="greenwald19830711">{{Cite magazine |last=Greenwald |first=John |date=1983-07-11 |title=The Colossus That Works |url=http://content.time.com/time/magazine/article/0,9171,949693-2,00.html |url-status=live |url-access=subscription |magazine=Time |archive-url=https://web.archive.org/web/20080514004334/http://www.time.com/time/magazine/article/0,9171,949693-2,00.html |archive-date=2008-05-14 |access-date=2019-05-18}}</ref>
 
The [[Cray-2]], released in 1985, was a four-processor [[Computer cooling|liquid-cooled]] computer totally immersed in a tank of [[Fluorinert]], which bubbled as it operated.<ref name="The Supermen 1997" /> It reached 1.9&nbsp;[[FLOPS|gigaFLOPS]], making it the world's fastest supercomputer and the first to break the gigaFLOPS barrier.<ref>Due to Soviet propaganda, it is sometimes claimed that the Soviet supercomputer M13 was the first to reach the gigaFLOPS barrier. In fact, construction of the M13 began in 1984, but it was not operational before 1986. [https://www.computer-museum.ru/english/galglory_en/Rogachev.php Rogachev Yury Vasilievich, Russian Virtual Computer Museum]</ref> The Cray-2 was a totally new design. It did not use chaining and had a high memory latency, but used much pipelining and was ideal for problems that required large amounts of memory.<ref name=Tokhi /> The software costs of developing a supercomputer should not be underestimated: by the 1980s, Cray's spending on software development came to equal what it spent on hardware.<ref name=MacKenzie >{{cite book | title = Knowing machines: essays on technical change | first = Donald | last = MacKenzie | year = 1998 | isbn = 0-262-63188-1 | pages = 149–151 | publisher = MIT Press | url = https://archive.org/details/knowingmachinese0000mack/}}</ref> That trend was partly responsible for the move away from the in-house [[Cray Operating System]] to the [[Unix]]-based [[UNICOS]].<ref name=MacKenzie />
 
The [[Cray Y-MP]], also designed by Steve Chen, was released in 1988 as an improvement on the X-MP and could have eight [[vector processor]]s running at 167&nbsp;MHz, with a peak performance of 333&nbsp;[[FLOPS|megaFLOPS]] per processor.<ref name=Tokhi /> In the late 1980s, Cray's experiment with [[gallium arsenide]] semiconductors in the [[Cray-3]] did not succeed. Seymour Cray began work on a [[Multiple instruction, multiple data|massively parallel]] computer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.<ref name=Edwin65 >{{cite book | title = Milestones in computer science and information technology | url = https://archive.org/details/milestonesincomp0000reil | url-access = registration | first = Edwin D. | last = Reilly | year = 2003 | isbn = 1-57356-521-0 | page = [https://archive.org/details/milestonesincomp0000reil/page/65 65]| publisher = Bloomsbury Academic }}</ref><ref name="The Supermen 1997"/>
==Massive processing: the 1990s==
The [[Cray-2]], which set the frontiers of supercomputing in the mid-to-late 1980s, had only eight processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.
 
During the first half of the [[Strategic Computing Initiative]], some massively parallel architectures were proven to work, such as the [[WARP (systolic array)|WARP systolic array]], message-passing [[Multiple instruction, multiple data|MIMD]] like the [[Caltech Cosmic Cube|Cosmic Cube]] hypercube, [[Single instruction, multiple data|SIMD]] like the [[Connection Machine]], etc. In 1987, a TeraOPS Computing Technology Program was proposed, with a goal of achieving 1 teraOPS (a trillion operations per second) by 1992, which was considered achievable by scaling up any of the previously proven architectures.<ref>{{Cite book |last1=Roland |first1=Alex |title=Strategic computing: DARPA and the quest for machine intelligence, 1983 - 1993 |last2=Shiman |first2=Philip |date=2002 |publisher=MIT Press |isbn=978-0-262-18226-3 |series=History of computing |___location=Cambridge, Mass. |pages=296}}</ref>
 
[[File:Paragon XP-E - mesh.jpg|thumb|left|Rear of the [[Intel Paragon|Paragon]] cabinet showing the bus bars and mesh routers]]
|align=right|>1.1&nbsp;EFLOPS
|
|[[Oak Ridge Leadership Computing Facility]], [[Tennessee]], [[United States|USA]]
|-
 
==External links==
*[https://www.computerhistory.org/visiblestorage/1960s-1980s/supercomputers/ Supercomputers (1960s-1980s)] at the [[Computer History Museum]]
*[https://www.cs.cmu.edu/afs/cs/academic/class/15740-f03/public/doc/discussions/uniprocessors/vector/vector-past-present-future-supercomputing98.pdf R. Espasa, M. Valero, and J. E. Smith, "Vector architectures: past, present and future", in Proceedings of the 12th International Conference on Supercomputing, 1998]
 
[[Category:Supercomputers]]
[[Category:History of computing hardware|Supercomputing]]
[[Category:History of Silicon Valley|Supercomputing]]
 
 
==References==