{{Short description|Provides computing resources for various NASA projects}}
{{Infobox government agency
| agency_name = NASA Advanced Supercomputing Division
| picture = NASA_Advanced_Supercomputing_Facility.jpg
| picture_width = 250px
| picture_caption =
| formed = {{Startdate|1982}}
| preceding1 = Numerical Aerodynamic Simulation Division (1982)
| preceding2 = Numerical Aerospace Simulation Division (1995)
| dissolved =
| superseding =
| jurisdiction =
| headquarters = [[NASA Ames Research Center]], [[Moffett Federal Airfield|Moffett Field]], [[California]]
| coordinates = {{coord|37|25|16|N|122|03|53|W|type:landmark|display=inline}}
| motto =
| employees =
| budget =
| chief1_name = Donovan Mathias
| chief1_position = Division Chief (Acting)
| chief2_name =
| chief2_position = <!-- (etc.) -->
| agency_type =
| parent_department = Ames Research Center Exploration Technology Directorate
| parent_agency = [[NASA|National Aeronautics and Space Administration (NASA)]]
| child1_agency =
| child2_agency =
| keydocument1 =
| website = {{url|http://www.nas.nasa.gov}}
| footnotes =
| map =
| map_width =
| map_caption =
}}
{| class=infobox width=280px
|colspan=2 style="background:#DDDDDD" align=center|'''Current Supercomputing Systems'''
|-
|[[Pleiades (supercomputer)|'''Pleiades''']]
|SGI Altix ICE supercluster
|-
|[[Endeavour (supercomputer)|'''Endeavour''']]
|SGI UV shared-memory system
|-
|'''Merope'''<ref>{{cite web |title=Merope Supercomputer homepage |url=http://www.nas.nasa.gov/hecc/resources/merope.html |publisher=NAS}}</ref>
|SGI Altix supercluster
|}
The '''NASA Advanced Supercomputing (NAS) Division''' is located at [[NASA Ames Research Center]], [[Moffett Federal Airfield|Moffett Field]] in the heart of [[Silicon Valley]] in [[Mountain View, California|Mountain View]], [[California]]. It has been the major supercomputing and modeling and simulation resource for NASA missions in aerodynamics, space exploration, studies in weather patterns and ocean currents, and space shuttle and aircraft design and development for almost forty years.

The facility currently houses the [[petascale]] [[Pleiades (supercomputer)|Pleiades]], Aitken, and Electra [[supercomputer]]s, as well as the terascale [[Endeavour (supercomputer)|Endeavour]] supercomputer. The systems are based on [[Silicon Graphics|SGI]] and [[Hewlett Packard Enterprise|HPE]] architecture with [[Intel]] processors. The main building also houses disk and archival tape storage systems with a capacity of over an [[exabyte]] of data, the hyperwall visualization system, and one of the largest [[InfiniBand]] network fabrics in the world.<ref name="AdvancedComputingDataSheet">{{cite web|title=NASA Advanced Supercomputing Division: Advanced Computing Environment|url=https://www.nas.nasa.gov/assets/pdf/AdvancedComputing_Brochure_Oct2019.pdf|publisher=NAS|year=2019|access-date=2020-03-05|archive-date=2020-10-27|archive-url=https://web.archive.org/web/20201027211114/https://www.nas.nasa.gov/assets/pdf/AdvancedComputing_Brochure_Oct2019.pdf|url-status=dead}}</ref> The NAS Division is part of NASA's Exploration Technology Directorate and operates NASA's High-End Computing Capability (HECC) Project.<ref>{{cite web |url=http://www.nas.nasa.gov/about/about.html |title=NAS Homepage - About the NAS Division |publisher=NAS}}</ref>
 
==History==
In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001 to the name it has today.
 
===Industry-leading innovations===
NAS has been one of the leading innovators in the supercomputing world, developing many tools and processes that became widely used in commercial supercomputing. Some of these firsts include:<ref>{{cite web |url=http://www.nas.nasa.gov/about/history.html |title=NAS homepage: Division History |publisher=NAS}}</ref>
* Installed [[Cray]]'s first [[UNIX]]-based supercomputer<ref name="gridpoints" />
* Implemented a client/server model linking the supercomputers and workstations together to distribute computation and visualization
* Developed and implemented a high-speed [[wide area network]] (WAN) connecting supercomputing resources to remote users (AEROnet)
* Co-developed NASA's first method for dynamic distribution of production loads across supercomputing resources in geographically distant locations (NASA Metacenter)
* Implemented [[TCP/IP]] networking in a supercomputing environment
[[File:SSLV ascent.jpg|thumb|250px|An image of the flowfield around the Space Shuttle Launch Vehicle traveling at Mach 2.46 and at an altitude of {{convert|66000|ft}}. The surface of the vehicle is colored by the pressure coefficient, and the gray contours represent the density of the surrounding air, as calculated using the OVERFLOW code.]]
 
===Software development===
NAS develops and adapts software in order to "complement and enhance the work performed on its supercomputers, including software for systems support, monitoring systems, security, and scientific visualization," and often provides this software to its users through the NASA Open Source Agreement (NOSA).<ref>{{cite web |url=http://www.nas.nasa.gov/publications/software_datasets.html |title=NAS Software and Datasets |publisher=NAS}}</ref>
 
A few of the important software developments from NAS include:
* '''[[Portable Batch System]] (PBS)''' was the first batch queuing software for parallel and distributed systems. It was released commercially in 1998 and is still widely used in the industry.
* '''[[PLOT3D file format|PLOT3D]]''' was created in 1982 and is a computer graphics program still used today to visualize the grids and solutions of structured CFD datasets. The PLOT3D team was awarded the fourth largest prize ever given by the NASA Space Act Program for the development of their software, which revolutionized scientific visualization and analysis of 3D CFD solutions.<ref name="25th" />
* '''FAST (Flow Analysis Software Toolkit)''' is a software environment based on PLOT3D and used to analyze data from numerical simulations which, though tailored to CFD visualization, can be used to visualize almost any [[Scalar (computing)|scalar]] and [[Vector graphics|vector]] data. It was awarded the NASA Software of the Year Award in 1995.<ref>{{cite web |url=https://www.nas.nasa.gov/Software/FAST/ |title=NASA Flow Analysis Software Toolkit |publisher=NASA |access-date=2020-03-05 |archive-date=2018-04-19 |archive-url=https://web.archive.org/web/20180419072050/https://www.nas.nasa.gov/Software/FAST/ |url-status=dead }}</ref>
* '''INS2D''' and '''INS3D''' are codes developed by NAS engineers to solve incompressible [[Navier-Stokes equations]] in two- and three-dimensional generalized coordinates, respectively, for steady-state and time-varying flow. In 1994, INS3D won the NASA Software of the Year Award.<ref name="25th" />
* '''Cart3D''' is a high-fidelity analysis package for aerodynamic design which allows users to perform automated CFD simulations on complex forms. It is still used at NASA and other government agencies to test conceptual and preliminary air- and spacecraft designs.<ref>{{cite web |url=http://people.nas.nasa.gov/~aftosmis/cart3d/ |archive-url=https://web.archive.org/web/20020602073014/http://people.nas.nasa.gov/~aftosmis/cart3d/ |url-status=dead |archive-date=2002-06-02 |title=NASA Cart3D Homepage}}</ref> The Cart3D team won the NASA Software of the Year award in 2002.
* '''[[Overflow (software)|OVERFLOW]]''' (Overset grid flow solver) is a software package developed to simulate fluid flow around solid bodies using Reynolds-averaged, Navier-Stokes CFD equations. It was the first general-purpose NASA CFD code for overset (Chimera) grid systems and was released outside of NASA in 1992.
* '''Chimera Grid Tools (CGT)''' is a software package containing a variety of tools for the Chimera overset grid approach for solving CFD problems of surface and volume grid generation; as well as grid manipulation, smoothing, and projection.
* '''HiMAP''' is a three-level (intra/inter-discipline, multi-case) parallel high-fidelity multidisciplinary (fluids, structures, controls) analysis process.<ref>{{Cite web|url=https://www.nasa.gov/centers/ames/news/releases/2002/02_76AR.html|title=NASA.gov|access-date=2024-05-21|archive-date=2023-01-17|archive-url=https://web.archive.org/web/20230117100533/https://www.nasa.gov/centers/ames/news/releases/2002/02_76AR.html|url-status=dead}}</ref><ref>{{Cite web|url=https://www.nas.nasa.gov/assets/pdf/staff/Guruswamy_G_Development_and_Applications_of_a_Large_Scale_Fluids_Structures_Simulation_Process_on_Clusters.pdf|title=NASA.gov|access-date=2021-04-23|archive-date=2021-03-20|archive-url=https://web.archive.org/web/20210320213116/https://nas.nasa.gov/assets/pdf/staff/Guruswamy_G_Development_and_Applications_of_a_Large_Scale_Fluids_Structures_Simulation_Process_on_Clusters.pdf|url-status=dead}}</ref>
 
==Supercomputing history==
Since its construction in 1987, the NASA Advanced Supercomputing Facility has housed and operated some of the most powerful supercomputers in the world. Many of these computers were [[testbed]] systems built to test new architectures, hardware, or networking setups that might be utilized on a larger scale.<ref name="25th" /><ref name="gridpoints">{{cite journal |title=NAS High-Performance Computer History |journal=Gridpoints |date=Spring 2002 |pages=1A–12A}}</ref> Peak performance is shown in [[FLOPS|Floating Point Operations Per Second (FLOPS)]].
{| class="wikitable sortable"
|-
! Computer Name !! Architecture !! Peak Performance !! Number of CPUs !! Installation Date
|-
| || [[Cray]] [[Cray X-MP|XMP-12]] || <span data-sort-value="210.53">210.53 megaflops</span> || 1 || 1984
|-
| Navier || [[Cray-2|Cray 2]] || <span data-sort-value="1950">1.95 gigaflops</span> || 4 || 1985
|-
| Chuck || [[Convex Computer|Convex 3820]] || <span data-sort-value="1900">1.9 gigaflops</span> || 8 || 1987
|-
| rowspan="2"| Pierre || rowspan="2" | [[Thinking Machines Corporation|Thinking Machines]] [[Connection Machine|CM2]] || <span data-sort-value="14340">14.34 gigaflops</span> || 16,000 || 1987
|-
|| <span data-sort-value="43000">43 gigaflops</span> || 48,000 || 1991
|-
| Stokes || Cray 2 || <span data-sort-value="1950">1.95 gigaflops</span> || 4 || 1988
|-
| Piper || CDC/ETA-10Q || <span data-sort-value="840">840 megaflops</span> || 4 || 1988
|-
| rowspan="2" | Reynolds || rowspan="2" | [[Cray Y-MP]] || <span data-sort-value="2540">2.54 gigaflops</span> || 8 || 1988
|-
|| <span data-sort-value="2670">2.67 gigaflops</span> || 88 || 1988
|-
| Lagrange || [[Intel]] [[Intel iPSC|iPSC/860]] || <span data-sort-value="7880">7.88 gigaflops</span> || 128 || 1990
|-
| Gamma|| Intel iPSC/860 || <span data-sort-value="7680">7.68 gigaflops</span> || 128 || 1990
|-
| von Karman || Convex 3240 || <span data-sort-value="200">200 megaflops</span> || 4|| 1991
|-
| Boltzmann || Thinking Machines CM5 || <span data-sort-value="16380">16.38 gigaflops</span> || 128 || 1993
|-
| Sigma || [[Intel Paragon]] || <span data-sort-value="15600">15.60 gigaflops</span> || 208 || 1993
|-
| von Neumann || [[Cray C90]] || <span data-sort-value="15360">15.36 gigaflops</span> || 16 || 1993
|-
| Eagle || Cray C90 || <span data-sort-value="7680">7.68 gigaflops</span> || 8 || 1993
|-
| Grace || Intel Paragon || <span data-sort-value="15600">15.6 gigaflops</span> || 209 || 1993
|-
| rowspan="2" | Babbage || rowspan="2" | [[IBM ScalableRS/6000 POWERparallel|IBM SP-2SP2]] || <span data-sort-value="34050">34.05 gigaflops</span> || 128 || 1994
|-
|| <span data-sort-value="42560">42.56 gigaflops</span> || 160 || 1994
|-
| rowspan="2" | da Vinci || [[Silicon Graphics|SGI]] [[SGI Challenge#POWER Challenge|Power Challenge]] || || 16 || 1994
|-
|| SGI Power Challenge XL || <span data-sort-value="11520">11.52 gigaflops</span> || 32 || 1995
|-
| Newton|| [[Cray J90]] || <span data-sort-value="7200">7.2 gigaflops</span> || 36 || 1996
|-
| Piglet || [[SGI Origin 2000|SGI Origin 2000/250 MHz]] || <span data-sort-value="4000">4 gigaflops</span> || 8 || 1997
|-
| rowspan="2" | Turing || rowspan="2" | SGI Origin 2000/195&nbsp;MHz || <span data-sort-value="9360">9.36 gigaflops</span> || 24 || 1997
|-
|| <span data-sort-value="25000">25 gigaflops</span> || 64 || 1997
|-
| Fermi || SGI Origin 2000/195&nbsp;MHz || <span data-sort-value="3120">3.12 gigaflops</span> || 8 || 1997
|-
| Hopper || SGI Origin 2000/250&nbsp;MHz || <span data-sort-value="32000">32 gigaflops</span> || 64 || 1997
|-
| Evelyn || SGI Origin 2000/250&nbsp;MHz || <span data-sort-value="4000">4 gigaflops</span> || 8 || 1997
|-
| rowspan="2" | Steger || rowspan="2" | SGI Origin 2000/250&nbsp;MHz || <span data-sort-value="64000">64 gigaflops</span> || 128 || 1997
|-
|| <span data-sort-value="128000">128 gigaflops</span> || 256 || 1998
|-
| rowspan="2" | Lomax || rowspan="2" | SGI Origin 2800/300&nbsp;MHz || <span data-sort-value="307200">307.2 gigaflops</span> || 512 || 1999
|-
|| <span data-sort-value="409600">409.6 gigaflops</span> || 512 || 2000
|-
| Lou || SGI Origin 2000/250&nbsp;MHz || <span data-sort-value="4680">4.68 gigaflops</span> || 12 || 1999
|-
| Ariel || SGI Origin 2000/250&nbsp;MHz || <span data-sort-value="4000">4 gigaflops</span> || 8 || 2000
|-
| Sebastian || SGI Origin 2000/250&nbsp;MHz || <span data-sort-value="4000">4 gigaflops</span> || 8 || 2000
|-
| SN1-512 || [[SGI Origin 3000 and Onyx 3000#Origin 3000|SGI Origin 3000/400 MHz]] || <span data-sort-value="409600">409.6 gigaflops</span> || 512 || 2001
|-
| Bright || [[Cray SV1|Cray SVe1/500 MHz]] || <span data-sort-value="64000">64 gigaflops</span> || 32 || 2001
|-
| rowspan="2" | Chapman || rowspan="2" | SGI Origin 3800/400&nbsp;MHz || <span data-sort-value="819200">819.2 gigaflops</span> || 1,024 || 2001
|-
|| <span data-sort-value="1230000">1.23 teraflops</span> || 1,024 || 2002
|-
| Lomax II || SGI Origin 3800/400&nbsp;MHz || <span data-sort-value="409600">409.6 gigaflops</span>|| 512 || 2002
|-
| [[Kalpana (supercomputer)|Kalpana]]<ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2004/05-10-04.html |title=NASA to Name Supercomputer After Columbia Astronaut |publisher=NAS |date=May 2005 |access-date=2014-03-07 |archive-date=2013-03-17 |archive-url=https://web.archive.org/web/20130317094512/http://www.nas.nasa.gov/publications/news/2004/05-10-04.html |url-status=dead }}</ref> || [[Altix#Altix 3000|SGI Altix 3000]] <ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2003/11-17-03.html |title=NASA Ames Installs World's First Alitx 512-Processor Supercomputer |publisher=NAS |date=November 2003 |access-date=2014-03-07 |archive-date=2013-03-17 |archive-url=https://web.archive.org/web/20130317094536/http://www.nas.nasa.gov/publications/news/2003/11-17-03.html |url-status=dead }}</ref> || <span data-sort-value="2660000">2.66 teraflops</span> || 512 || 2003
|-
| || [[Cray X1]]<ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2004/04-27-04.html |title=New Cray X1 System Arrives at NAS |publisher=NAS |date=April 2004 |access-date=2014-03-07 |archive-date=2013-03-17 |archive-url=https://web.archive.org/web/20130317094520/http://www.nas.nasa.gov/publications/news/2004/04-27-04.html |url-status=dead }}</ref> || <span data-sort-value="204800">204.8 gigaflops</span> || || 2004
|-
| rowspan="3" | [[Columbia (supercomputer)|Columbia]] || SGI Altix 3000<ref>{{cite web |url=http://www.nasa.gov/home/hqnews/2004/oct/HQ_04353_columbia.html |title=NASA Unveils Its Newest, Most Powerful Supercomputer |publisher=NASA |date=October 2004 |access-date=2014-03-07 |archive-date=2004-10-28 |archive-url=https://web.archive.org/web/20041028100627/http://www.nasa.gov/home/hqnews/2004/oct/HQ_04353_columbia.html |url-status=dead }}</ref> || <span data-sort-value="63000000">63 teraflops</span> || 10,240 || 2004
|-
| rowspan="2" | [[Altix#Altix 4000|SGI Altix 4700]] || || 10,296 || 2006
|-
|| <span data-sort-value="85800000">85.8 teraflops</span><ref>{{cite web |url=http://www.nas.nasa.gov/hecc/resources/columbia.html |title=Columbia Supercomputer Legacy homepage |publisher=NASA}}</ref> || 13,824 || 2007
|-
| Schirra || [[IBM POWER microprocessors#POWER5|IBM POWER5+]]<ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2007/06-06-07.html |title=NASA Selects IBM for Next-Generation Supercomputing Applications |publisher=NASA |date=June 2007 |access-date=2014-03-07 |archive-date=2013-03-16 |archive-url=https://web.archive.org/web/20130316185828/http://www.nas.nasa.gov/publications/news/2007/06-06-07.html |url-status=dead }}</ref> || <span data-sort-value="4800000">4.8 teraflops</span> || 640 || 2007
|-
| RT Jones || [[Altix#Altix ICE|SGI ICE 8200]], [[Xeon#5400-series "Harpertown"|Intel Xeon "Harpertown" Processors]] || <span data-sort-value="43500000">43.5 teraflops</span> || 4,096 || 2007
|-
| rowspan="10" | [[Pleiades (supercomputer)|Pleiades]] || rowspan="2" | SGI ICE 8200, Intel Xeon "Harpertown" Processors<ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2008/11-18-08.html |title=NASA Supercomputer Ranks Among World's Fastest – November 2008 |publisher=NASA |date=November 2008 |access-date=2014-03-07 |archive-date=2019-08-25 |archive-url=https://web.archive.org/web/20190825041603/https://www.nas.nasa.gov/publications/news/2008/11-18-08.html |url-status=dead }}</ref> || <span data-sort-value="487000000">487 teraflops</span> || 51,200 || 2008
|-
|| <span data-sort-value="544000000">544 teraflops</span><ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2010/02-08-10.html |title='Live' Integration of Pleiades Rack Saves 2 Million Hours |publisher=NAS |date=February 2010 |access-date=2014-03-07 |archive-date=2013-03-16 |archive-url=https://web.archive.org/web/20130316185608/http://www.nas.nasa.gov/publications/news/2010/02-08-10.html |url-status=dead }}</ref> || 56,320 || 2009
|-
|| SGI ICE 8200, Intel Xeon "Harpertown"/[[Xeon#5500-series "Gainestown"|"Nehalem"]] Processors<ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2010/06-02-10.html |title=NASA Supercomputer Doubles Capacity, Increases Efficiency |publisher=NASA |date=June 2010 |access-date=2014-03-07 |archive-date=2019-08-25 |archive-url=https://web.archive.org/web/20190825041604/https://www.nas.nasa.gov/publications/news/2010/06-02-10.html |url-status=dead }}</ref> || <span data-sort-value="773000000">773 teraflops</span> || 81,920 || 2010
|-
|| SGI ICE 8200/8400, Intel Xeon "Harpertown"/"Nehalem"/[[Xeon#3600/5600-series "Gulftown" & "Westmere-EP"|"Westmere"]] Processors<ref>{{cite web |url=http://www.nasa.gov/home/hqnews/2011/jun/HQ-11-194_Supercomputer_Ranks.html |title=NASA's Pleiades Supercomputer Ranks Among World's Fastest |publisher=NASA |date=June 2011 |access-date=2014-03-07 |archive-date=2011-10-21 |archive-url=https://web.archive.org/web/20111021054516/http://www.nasa.gov/home/hqnews/2011/jun/HQ-11-194_Supercomputer_Ranks.html |url-status=dead }}</ref> || <span data-sort-value="1090000000">1.09 petaflops</span> || 111,104|| 2011
|-
|| SGI ICE 8200/8400/X, Intel Xeon "Harpertown"/"Nehalem"/"Westmere"/[[Xeon#Sandy Bridge- and Ivy Bridge-based Xeon|"Sandy Bridge"]] Processors<ref>{{cite web |url=http://www.nasa.gov/home/hqnews/2012/jun/HQ_12_206_Pleiades_Supercomputer.html |title=Pleiades Supercomputer Gets a Little More Oomph |publisher=NASA |date=June 2012 |access-date=2014-03-07 |archive-date=2013-11-22 |archive-url=https://web.archive.org/web/20131122155911/http://www.nasa.gov/home/hqnews/2012/jun/HQ_12_206_Pleiades_Supercomputer.html |url-status=dead }}</ref> || <span data-sort-value="1240000000">1.24 petaflops</span> || 125,980 || 2012
|-
| rowspan="2" | SGI ICE 8200/8400/X, Intel Xeon "Nehalem"/"Westmere"/"Sandy Bridge"/[[Ivy Bridge (microarchitecture)|"Ivy Bridge"]] Processors<ref name="Upgrade2013">{{cite web |url=http://www.nas.nasa.gov/publications/news/2013/09-19-13.html |title=NASA's Pleiades Supercomputer Upgraded, Harpertown Nodes Repurposed |publisher=NAS |date=August 2013 |access-date=2014-03-07 |archive-date=2019-08-25 |archive-url=https://web.archive.org/web/20190825041604/https://www.nas.nasa.gov/publications/news/2013/09-19-13.html |url-status=dead }}</ref>|| <span data-sort-value="2870000000">2.87 petaflops</span> || 162,496 || 2013
|-
|| <span data-sort-value="3590000000">3.59 petaflops</span> || 184,800 || 2014
|-
| rowspan="2" | SGI ICE 8400/X, Intel Xeon "Westmere"/"Sandy Bridge"/"Ivy Bridge"/[[Haswell (microarchitecture)#Server processors|"Haswell"]] Processors<ref name="Upgrade2014">{{cite web |url=http://www.nas.nasa.gov/publications/news/2014/10-28-14.html |title=NASA's Pleiades Supercomputer Upgraded, Gets One Petaflops Boost |publisher=NAS |date=October 2014 |access-date=2014-12-29 |archive-date=2019-08-25 |archive-url=https://web.archive.org/web/20190825041557/https://www.nas.nasa.gov/publications/news/2014/10-28-14.html |url-status=dead }}</ref>|| <span data-sort-value="4490000000">4.49 petaflops</span> || 198,432 || 2014
|-
|| <span data-sort-value="5350000000">5.35 petaflops</span><ref name="UpgradeJan2015">{{cite web |url=http://www.nas.nasa.gov/publications/news/2015/01-22-15.html |title=Pleiades Supercomputer Performance Leaps to 5.35 Petaflops with Latest Expansion |publisher=NAS |date=January 2015 |access-date=2015-02-05 |archive-date=2019-01-15 |archive-url=https://web.archive.org/web/20190115171959/https://www.nas.nasa.gov/publications/news/2015/01-22-15.html |url-status=dead }}</ref> || 210,336 || 2015
|-
| SGI ICE X, Intel Xeon "Sandy Bridge"/"Ivy Bridge"/"Haswell"/[[Broadwell (microarchitecture)#Server processors|"Broadwell"]] Processors<ref>{{cite web |url=https://www.nas.nasa.gov/publications/news/2016/06-01-16.html |title=Pleiades Supercomputer Peak Performance Increased, Long-Term Storage Capacity Tripled |publisher=NAS |date=July 2016 |access-date=2020-03-05 |archive-date=2019-06-19 |archive-url=https://web.archive.org/web/20190619184616/https://www.nas.nasa.gov/publications/news/2016/06-01-16.html |url-status=dead }}</ref> || <span data-sort-value="7250000000">7.25 petaflops</span> || 246,048 || 2016
|-
| [[Endeavour (supercomputer)|Endeavour]] || [[Altix#Altix UV|SGI UV 2000]], Intel Xeon "Sandy Bridge" Processors<ref>{{cite web |url=http://www.nas.nasa.gov/hecc/resources/endeavour.html |title=Endeavour Supercomputer Resource homepage |publisher=NAS}}</ref> || <span data-sort-value="32000000">32 teraflops</span> || 1,536 || 2013
|-
| rowspan="2" | [[Merope (supercomputer)|Merope]] || SGI ICE 8200, Intel Xeon "Harpertown" Processors<ref name="Upgrade2013"/> || <span data-sort-value="61000000">61 teraflops</span> || 5,120 || 2013
|-
|| SGI ICE 8400, Intel Xeon "Nehalem"/"Westmere" Processors<ref name="Upgrade2014"/> || <span data-sort-value="141000000">141 teraflops</span> || 1,152 || 2014
|-
| rowspan="3" | Electra || SGI ICE X, Intel Xeon "Broadwell" Processors<ref>{{cite web |url=https://www.nas.nasa.gov/publications/articles/feature_MSF_Kickoff.html |title=NASA Ames Kicks off Pathfinding Modular Supercomputing Facility |publisher=NAS |date=February 2017 |access-date=2020-03-05 |archive-date=2019-09-10 |archive-url=https://web.archive.org/web/20190910051255/https://www.nas.nasa.gov/publications/articles/feature_MSF_Kickoff.html |url-status=dead }}</ref>|| <span data-sort-value="1900000000">1.9 petaflops</span> || 1,152 || 2016
|-
| rowspan="2" | SGI ICE X/HPE SGI 8600 E-Cell, Intel Xeon "Broadwell"/[[Skylake (microarchitecture)#Server processors|"Skylake"]] Processors<ref>{{cite web |url=https://www.nas.nasa.gov/publications/news/2017/11-13-17.html |title=Recently Expanded, NASA's First Modular Supercomputer Ranks 15th in the U.S. on TOP500 List |publisher=NAS |date=November 2017 |access-date=2020-03-05 |archive-date=2019-09-10 |archive-url=https://web.archive.org/web/20190910051306/https://www.nas.nasa.gov/publications/news/2017/11-13-17.html |url-status=dead }}</ref> || <span data-sort-value="4790000000">4.79 petaflops</span> || 2,304 || 2017
|-
|| <span data-sort-value="8320000000">8.32 petaflops</span> <ref>{{cite web |url=https://www.nas.nasa.gov/publications/news/2018/11-12-18.html |title=NASA's Electra Supercomputer Rises to 12th Place in the U.S. on the TOP500 List |publisher=NAS |date=November 2018 |access-date=2020-03-05 |archive-date=2021-03-21 |archive-url=https://web.archive.org/web/20210321103825/https://nas.nasa.gov/publications/news/2018/11-12-18.html |url-status=dead }}</ref> || 3,456 || 2018
|-
| Aitken || HPE SGI 8600 E-Cell, Intel Xeon [[Cascade Lake (microarchitecture)|"Cascade Lake"]] Processors<ref>{{cite web |url=https://www.nas.nasa.gov/assets/pdf/MSF_Brochure_Oct2019.pdf |title=NASA Advanced Supercomputing Division: Modular Supercomputing |publisher=NAS |date=2019 |access-date=2020-03-05 |archive-date=2020-10-30 |archive-url=https://web.archive.org/web/20201030140033/https://www.nas.nasa.gov/assets/pdf/MSF_Brochure_Oct2019.pdf |url-status=dead }}</ref>|| <span data-sort-value="3690000000">3.69 petaflops</span> || 1,150 || 2019
|}
 
==Storage resources==

===Disk storage===
In 1987, NAS partnered with the [[Defense Advanced Research Projects Agency]] (DARPA) and the [[University of California, Berkeley]] in the [[Redundant Array of Inexpensive Disks]] (RAID) project, which sought to create a storage technology that combined multiple disk drive components into one logical unit. Completed in 1992, the RAID project led to the distributed data storage technology used today.<ref name="25th" />
 
The NAS facility currently houses disk mass storage on an SGI parallel DMF cluster with high-availability software consisting of four 32-processor front-end systems, which are connected to the supercomputers and the archival tape storage system. The system has 192 GB of memory per front-end<ref name="nasstorage">{{cite web |url=http://www.nas.nasa.gov/hecc/resources/storage_systems.html |title=HECC Archival Storage System Resource homepage |publisher=NAS}}</ref> and 25 petabytes (PB) of RAID disk cache.<ref name="AdvancedComputingDataSheet">{{cite web|title=NASA Advanced Supercomputing Division: Advanced Computing Environment|url=https://www.nas.nasa.gov/assets/pdf/AdvancedComputing_Brochure_Oct2019.pdf|publisher=NAS|year=2019|access-date=2020-03-05|archive-date=2020-10-27|archive-url=https://web.archive.org/web/20201027211114/https://www.nas.nasa.gov/assets/pdf/AdvancedComputing_Brochure_Oct2019.pdf|url-status=dead}}</ref> Data stored on disk is regularly migrated to the tape archival storage systems at the facility to free up space for other user projects being run on the supercomputers.
 
===Archive and storage systems===
In 1987, NAS developed the first UNIX-based hierarchical mass storage system, named NAStore. It contained two [[StorageTek]] 4400 cartridge tape robots, each with a storage capacity of approximately 1.1 terabytes, cutting tape retrieval time from 4 minutes to 15 seconds.<ref name="25th" />
 
With the installation of the Pleiades supercomputer in 2008, the StorageTek systems that NAS had been using for 20 years were unable to meet the needs of the greater number of users and increasing file sizes of each project's [[dataset]]s.<ref>{{cite web |url=http://www.nas.nasa.gov/SC09/PDF/Datasheets/Powers_NASSilo.pdf |title=NAS Silo, Tape Drive, and Storage Upgrades - SC09 |publisher=NAS |date=November 2009}}</ref> In 2009, NAS brought in [[Spectra Logic]] T950 robotic tape systems which increased the maximum capacity at the facility to 16 petabytes of space available for users to archive their data from the supercomputers.<ref>{{cite web |title=New NAS Data Archive System Installation Completed |url=http://www.nas.nasa.gov/publications/news/2009/06-01-09.html |publisher=NAS |year=2009 |access-date=2014-03-07 |archive-date=2013-03-16 |archive-url=https://web.archive.org/web/20130316185711/http://www.nas.nasa.gov/publications/news/2009/06-01-09.html |url-status=dead }}</ref> As of March 2019, the NAS facility had increased the total archival storage capacity of the Spectra Logic tape libraries to 1,048 petabytes (or 1 exabyte) with 35% compression.<ref name="nasstorage" /> SGI's Data Migration Facility (DMF) and OpenVault manage disk-to-tape data migration and tape-to-disk de-migration for the NAS facility.
 
As of March 2019, there are over 110 petabytes of unique data stored in the NAS archival storage system.<ref name="nasstorage" />
 
==Data visualization systems==
In 1984, NAS purchased 25 SGI IRIS 1000 graphics terminals, the beginning of its long partnership with the Silicon Valley–based company, which made a significant impact on post-processing and visualization of CFD results run on the supercomputers at the facility.<ref name="25th" /> Visualization became a key process in the analysis of simulation data run on the supercomputers, allowing engineers and scientists to view their results spatially and in ways that allowed for a greater understanding of the CFD forces at work in their designs.
 
{{multiple image|direction=vertical|width=250|footer=The hyperwall visualization system at the NAS facility allows researchers to view multiple simulations run on the supercomputers, or a single large image or animation.|image1=NASA_Hyperwall_2.jpg|alt1=Hyperwall displaying multiple images|image2 = Hyperwall-2.jpg|alt2=Hyperwall displaying one single image}}
 
===The hyperwall===
In 2002, NAS visualization experts developed a visualization system called the "hyperwall" which included 49 linked [[Liquid crystal display|LCD]] panels that allowed scientists to view complex [[dataset]]s on a large, dynamic seven-by-seven screen array. Each screen had its own processing power, allowing each one to display, process, and share datasets so that a single image could be displayed across all screens or configured so that data could be displayed in "cells" like a giant visual spreadsheet.<ref name="marsviz">{{cite web |url=http://www.nas.nasa.gov/publications/news/2003/09-16-03.html |title=Mars Flyer Debuts on Hyperwall |publisher=NAS |date=September 2003 |access-date=2014-03-07 |archive-date=2013-03-17 |archive-url=https://web.archive.org/web/20130317094542/http://www.nas.nasa.gov/publications/news/2003/09-16-03.html |url-status=dead }}</ref>
 
The second generation "hyperwall-2" was developed in 2008 by NAS in partnership with Colfax International and is made up of 128 LCD screens arranged in an 8x16 grid 23 feet wide by 10 feet tall. It is capable of rendering one quarter billion [[pixels]], making it the highest resolution scientific visualization system in the world.<ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2008/06-25-08.html |title=NASA Develops World's Highest Resolution Visualization System |publisher=NAS |date=June 2008 |access-date=2014-03-07 |archive-date=2013-03-16 |archive-url=https://web.archive.org/web/20130316185742/http://www.nas.nasa.gov/publications/news/2008/06-25-08.html |url-status=dead }}</ref> It contains 128 nodes, each with two quad-core [[AMD]] [[Opteron]] ([[Opteron#Opteron .2865 nm SOI.29|Barcelona]]) processors and a [[Nvidia]] [[GeForce 400 series|GeForce 480 GTX]] [[graphics processing unit]] (GPU) for a dedicated peak processing power of 128 teraflops across the entire system—100 times more powerful than the original hyperwall.<ref>{{cite web |url=http://www.nas.nasa.gov/hecc/resources/viz_systems.html |title=NAS Visualization Systems Overview |publisher=NAS}}</ref> The hyperwall-2 is directly connected to the Pleiades supercomputer's filesystem over an InfiniBand network, which allows the system to read data directly from the filesystem without needing to copy files onto the hyperwall-2's memory.
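The aggregate figures quoted for hyperwall-2 can be sanity-checked with simple arithmetic. This is an illustrative back-of-envelope sketch only: the per-panel resolution used below (1920x1080) is an assumption, as the article states only the total pixel count.

```python
# Back-of-envelope check of the hyperwall-2 figures quoted above.
nodes = 128
system_peak_tflops = 128              # quoted dedicated peak for the whole system
per_node_tflops = system_peak_tflops / nodes
print(per_node_tflops)                # 1.0 teraflop per node (CPUs + GPU combined)

panels = 128
assumed_panel_pixels = 1920 * 1080    # hypothetical full-HD panel, not an official spec
total_pixels = panels * assumed_panel_pixels
print(total_pixels)                   # ~265 million, consistent with "one quarter billion pixels"
```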
 
In 2014, the hyperwall was upgraded with new hardware: 256 Intel Xeon "Ivy Bridge" processors and 128 NVIDIA GeForce 780 Ti GPUs. The upgrade increased the system's peak processing power from 9 teraflops to 57 teraflops and gave it nearly 400 gigabytes of graphics memory.<ref>{{cite web |url=http://www.nas.nasa.gov/publications/news/2014/10-21-14.html |title=NAS hyperwall Visualization System Upgraded with Ivy Bridge Nodes |publisher=NAS |date=October 2014 |access-date=2014-12-29 |archive-date=2021-03-23 |archive-url=https://web.archive.org/web/20210323161517/https://nas.nasa.gov/publications/news/2014/10-21-14.html |url-status=dead }}</ref>
 
In 2020, the hyperwall was further upgraded with new hardware: 256 Intel Xeon Platinum 8268 (Cascade Lake) processors and 128 NVIDIA Quadro RTX 6000 GPUs with a total of 3.1 terabytes of graphics memory. The upgrade increased the system's peak processing power from 57 teraflops to 512 teraflops.<ref>{{cite web |url=https://www.nas.nasa.gov/hecc/resources/viz_systems.html |title=NAS Visualization Systems: hyperwall |publisher=NAS |date=December 2020}}</ref>
 
===Concurrent visualization===
An important feature of the hyperwall technology developed at NAS is that it allows for "concurrent visualization" of data, which enables scientists and engineers to analyze and interpret data while the calculations are running on the supercomputers. Not only does this show the current state of the calculation for runtime monitoring, steering, and termination, but it also "allows higher temporal resolution visualization compared to post-processing because I/O and storage space requirements are largely obviated... [and] may show features in a simulation that would otherwise not be visible."<ref>{{cite journal |last=Ellsworth |first=David |author2=Bryan Green |author3=Chris Henze |author4=Patrick Moran |author5=Timothy Sandstrom |title=Concurrent Visualization in a Production Supercomputing Environment |journal=IEEE Transactions on Visualization and Computer Graphics |volume=12 |issue=5 |date=September–October 2006 |pages=997–1004 |doi=10.1109/TVCG.2006.128 |pmid=17080827 |s2cid=14037933 |url=http://www.nas.nasa.gov/assets/pdf/techreports/2007/nas-07-002.pdf |archive-date=2016-12-24 |access-date=2014-03-07 |archive-url=https://web.archive.org/web/20161224015017/https://www.nas.nasa.gov/assets/pdf/techreports/2007/nas-07-002.pdf |url-status=dead }}</ref>
 
The NAS visualization team developed a configurable concurrent [[Pipeline (computing)|pipeline]] for use with a massively parallel forecast model run on the Columbia supercomputer in 2005 to help predict the Atlantic hurricane season for the [[National Hurricane Center]]. Because of the deadlines to submit each of the forecasts, it was important that the visualization process would not significantly impede the simulation or cause it to fail.
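The producer–consumer structure behind concurrent visualization can be sketched in a few lines: a simulation hands each timestep directly to a visualization consumer as it is computed, rather than writing files to disk for post-processing. This is a minimal toy sketch, not NAS's actual pipeline; all names and numbers below are invented for illustration.

```python
# Toy "concurrent visualization" pipeline: the solver produces timesteps while
# a separate consumer processes each one as it arrives, avoiding disk I/O.
import queue
import threading

def simulate(steps, out_q):
    """Toy solver: each timestep yields a small grid of values."""
    for t in range(steps):
        field = [t * 0.1 + i for i in range(4)]  # stand-in for a CFD field
        out_q.put((t, field))                    # hand off in memory, no files
    out_q.put(None)                              # sentinel: simulation finished

def visualize(out_q, frames):
    """Toy renderer: reduces each timestep to a 'frame' as it arrives."""
    while True:
        item = out_q.get()
        if item is None:
            break
        t, field = item
        frames.append((t, max(field)))           # e.g. track the peak value per step

q = queue.Queue(maxsize=8)   # bounded queue: backpressure keeps memory use flat
frames = []
viz = threading.Thread(target=visualize, args=(q, frames))
viz.start()
simulate(10, q)              # producer runs while the consumer drains concurrently
viz.join()
```

The bounded queue is the key design choice: it throttles the producer if the consumer falls behind, which mirrors the requirement above that visualization must not impede or destabilize the running forecast.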
* [http://www.nas.nasa.gov/hecc/resources/environment.html NAS Computing Environment homepage]
* [http://www.nas.nasa.gov/hecc/resources/pleiades.html NAS Pleiades Supercomputer homepage]
* [http://www.nas.nasa.gov/hecc/resources/aitken.html NAS Aitken Supercomputer homepage]
* [http://www.nas.nasa.gov/hecc/resources/electra.html NAS Electra Supercomputer homepage]
* [http://www.nas.nasa.gov/hecc/resources/storage_systems.html NAS Archive and Storage Systems homepage]
* [http://www.nas.nasa.gov/hecc/resources/viz_systems.html NAS hyperwall-2 homepage]
* [http://www.nas.nasa.gov/hecc NASA's High-End Computing Capability Project homepage]
* [http://top500.org TOP500 official website]
 
{{authority control}}
 
[[Category:Ames Research Center]]