Slurm Workload Manager
{{short description|Free and open-source job scheduler for Linux and similar computers}}
{{Infobox software
| title = Slurm
| name = Slurm
| logo = Slurm logo.svg
| logo caption =
| developer = [[SchedMD]]
| screenshot = <!-- Image name is enough -->
| caption =
| collapsible =
| author =
| released = <!-- {{Start date and age|YYYY|MM|DD|df=yes/no}} -->
| discontinued =
| latest release version = {{wikidata|property|preferred|references|edit@end|P348|P548=Q2804309}}
| latest release date = {{start date and age|{{wikidata|qualifier|P348|P548=Q2804309|P577}}|df=yes}}
| programming language = [[C (programming language)|C]]
| operating system = [[Linux]]
| platform =
| size =
| genre = Job Scheduler for Clusters and Supercomputers
| license = [[GNU General Public License]]
| website = {{Official URL}}
| logo_size = 170px
| logo_alt =
| screenshot_size =
| screenshot_alt =
}}
 
The '''Slurm Workload Manager''', formerly known as '''Simple Linux Utility for Resource Management''' ('''SLURM'''), or simply '''Slurm''', is a [[free and open-source]] [[job scheduler]] for [[Linux]] and [[Unix-like]] [[kernel (operating system)|kernels]], used by many of the world's [[supercomputer]]s and [[computer cluster]]s.
 
It provides three key functions:
* allocating exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work,
* providing a framework for starting, executing, and monitoring work, typically a parallel job such as [[Message Passing Interface]] (MPI) on a set of allocated nodes, and
* arbitrating contention for resources by managing a queue of pending jobs.
 
Slurm is the workload manager on about 60% of the [[TOP500]] supercomputers.<ref>{{Cite web|url=https://hpcc.usc.edu/support/documentation/slurm/|title=Running a Job on HPC using Slurm|publisher=Center for High-Performance Computing - University of Southern California|website=hpcc.usc.edu|access-date=2019-03-05|archive-date=2019-03-06|archive-url=https://web.archive.org/web/20190306044130/https://hpcc.usc.edu/support/documentation/slurm/|url-status=dead}}</ref>
 
Slurm uses a [[Best-fit_bin_packing|best fit algorithm]] based on [[Hilbert curve scheduling]] or [[fat tree]] network topology in order to optimize locality of task assignments on parallel computers.<ref name=Eitan>{{Cite conference|doi=10.1007/978-3-642-04633-9_8|title=Effects of Topology-Aware Allocation Policies on Scheduling Performance|conference=Job Scheduling Strategies for Parallel Processing|series=Lecture Notes in Computer Science|year=2009|last1=Pascual|first1=Jose Antonio|last2=Navaridas|first2=Javier|last3=Miguel-Alonso|first3=Jose|isbn=978-3-642-04632-2|volume=5798|pages=138–144}}</ref>
 
==History==
Slurm began development as a collaborative effort primarily by [[Lawrence Livermore National Laboratory]], [[SchedMD]],<ref>{{cite web|url=https://www.schedmd.com/ |title=Slurm Commercial Support, Development, and Installation |publisher=SchedMD |access-date=2014-02-23}}</ref> Linux NetworX, [[Hewlett-Packard]], and [[Groupe Bull]] as a Free Software resource manager. It was inspired by the closed-source [[Quadrics_(company)|Quadrics RMS]] and shares a similar syntax. The name is a reference to the [[Fry and the Slurm Factory#Slurm|soda]] in [[Futurama]].<ref>{{cite web|url=https://slurm.schedmd.com/slurm_design.pdf |title=SLURM: Simple Linux Utility for Resource Management |date=23 June 2003 |access-date=11 January 2016}}</ref> Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.
 
{{As of|2021|November}}, the [[TOP500]] list of the most powerful computers in the world indicates that Slurm is the workload manager on more than half of the top ten systems.
 
==Structure==
Slurm's design is very modular, with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization. Slurm also works with several meta-schedulers such as [[Moab Cluster Suite]], [[Maui Cluster Scheduler]], and [[Platform LSF]].
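As an illustration, the heart of a simple configuration is the <code>slurm.conf</code> file, which names the control host and describes the compute nodes and partitions. The sketch below shows a minimal single-node test setup; the cluster, host, and node names and sizes are hypothetical assumptions, not a recommended production configuration.

```ini
# Minimal slurm.conf sketch (names and sizes are hypothetical)
ClusterName=testcluster
SlurmctldHost=head-node        ; host running the slurmctld control daemon
; One compute node with 8 CPUs, running a slurmd daemon
NodeName=compute01 CPUs=8 State=UNKNOWN
; A default partition (job queue) containing that node
PartitionName=debug Nodes=compute01 Default=YES MaxTime=INFINITE State=UP
```

Plugins for scheduling, accounting, and other behavior are likewise selected through parameters in this file.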
 
==Features==
Slurm features include:{{Citation needed|date=September 2014}}

* No single point of failure, backup daemons, fault-tolerant job options
* Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of [[IBM Sequoia]])
* High performance (up to 1000 job submissions per second and 600 job executions per second)
* Free and open-source software ([[GNU General Public License]])
* Highly configurable with about 100 plugins
* Fair-share scheduling with hierarchical bank accounts
* Preemptive and gang scheduling (time-slicing of parallel jobs)
* Integrated with database for accounting and configuration
* Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
* Advanced reservation
* Idle nodes can be powered down
* Different operating systems can be booted for each job
* Scheduling for generic resources (e.g. [[Graphics processing unit]])
* Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
* Resource limits by user or bank account
* Accounting for power consumption by job
* Support of IBM Parallel Environment (PE/POE)
* Support for job arrays
* Job profiling (periodic sampling of each task's CPU use, memory use, power consumption, network and file system use)
* Sophisticated multifactor job prioritization algorithms
* Support for MapReduce+
* Support for [[burst buffer]] that accelerates scientific data movement
* Support for heterogeneous generic resources
* Automatic job requeue policy based on exit value
 
==Supported platforms==
Recent Slurm releases run only on [[Linux]]. Older versions had been ported to a few other [[POSIX]]-based [[operating system]]s, including [[BSD]]s ([[FreeBSD]], [[NetBSD]] and [[OpenBSD]]),<ref>[https://slurm.schedmd.com/platforms.html Slurm Platforms]</ref> but this is no longer feasible as Slurm now requires [[cgroups]] for core operations. Clusters running operating systems other than Linux will need to use a different batch system, such as LPJS. Slurm also supports several unique computer architectures, including:
* [[IBM]] [[BlueGene]]/Q models, including the 20 petaflop [[IBM Sequoia]]
* [[Cray]] XT, XE and Cascade
* [[Tianhe-2]], a 33.9 petaflop system with 32,000 Intel Ivy Bridge chips and 48,000 Intel Xeon Phi chips with a total of 3.1 million cores
* IBM Parallel Environment
* [[Anton (computer)|Anton]]
 
==License==
Slurm is available under the [[GNU General Public License#History|GNU General Public License v2]].
 
==Commercial support==
In 2010, the developers of Slurm founded ''[http://www.schedmd.com/ SchedMD]'', which maintains the canonical source and provides development, level 3 commercial support, and training services. Commercial support is also available from [[Groupe Bull|Bull]], [[Cray]], and Science + Computing (a subsidiary of [[Atos]]).
 
== Usage ==
[[File:EstadosTrabajosSLURM.jpg|thumb|Slurm distinguishes several stages for a job]]
The <code>slurm</code> system has three main parts:

* <code>slurmctld</code>, a central control [[Daemon (computing)|daemon]] running on a single control node (optionally with [[failover]] backups);
* many computing nodes, each with one or more <code>slurmd</code> daemons;
* clients that connect to the manager node, often with [[Secure Shell|ssh]].

The clients issue commands to the control daemon, which accepts them and divides the workload among the computing daemons.

For clients, the main commands are <code>srun</code> (queue up an interactive job), <code>sbatch</code> (queue up a job), <code>squeue</code> (print the job queue) and <code>scancel</code> (remove a job from the queue).

Jobs can be run in [[Batch processing|batch mode]] or [[Interactive computing|interactive mode]]. In interactive mode, a compute node starts a shell, connects the client to it, and runs the job there, so the user can observe and interact with the job while it runs. Interactive jobs are typically used for initial debugging; once a job works, it is usually submitted with <code>sbatch</code> instead. For a batch-mode job, its <code>stdout</code> and <code>stderr</code> outputs are typically directed to text files for later inspection.
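A batch job is described by a shell script whose <code>#SBATCH</code> comment lines carry options for <code>sbatch</code>; to the shell itself they are ordinary comments. The sketch below is a minimal, hypothetical example: the job name, resource requests, and payload command are illustrative assumptions, not taken from any particular site.

```shell
#!/bin/bash
# Minimal sketch of a Slurm batch script (all values are illustrative).
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --output=example_%j.out   # %j expands to the Slurm job ID
#SBATCH --ntasks=4                # request 4 tasks
#SBATCH --time=00:10:00           # 10-minute wall-clock limit

# Job payload; under Slurm, a parallel application would typically be
# launched here with srun, e.g. "srun ./my_app".
payload_host=$(hostname)
echo "Job payload running on ${payload_host}"
```

Such a script would be submitted with <code>sbatch</code> followed by the script name, monitored with <code>squeue</code>, and removed if necessary with <code>scancel</code> and the job ID.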
 
==See also==
{{Portal|Free and open-source software}}
* [[Job scheduler#Batch queuing for HPC clusters|Job Scheduler and Batch Queuing for Clusters]]
<!-- sorted alphabetically: -->
* [[Beowulf cluster]]
* [[Maui Cluster Scheduler]]
* [[Open Source Cluster Application Resources]] (OSCAR)
* [[TORQUE]]
* [[Univa Grid Engine]]
* [[Platform LSF]]
 
==References==
{{Reflist|30em}}
 
==Further reading==
{{Div col|colwidth=30em}}
* {{Cite conference|doi=10.1007/978-3-540-78699-3_3|title=Enhancing an Open Source Resource Manager with Multi-core/Multi-threaded Support|conference=Job Scheduling Strategies for Parallel Processing|series=[[Lecture Notes in Computer Science]]|year=2008|last1=Balle|first1=Susanne M.|last2=Palermo|first2=Daniel J.|isbn=978-3-540-78698-6|volume=4942|page=37}}
* {{Cite journal|last1=Jette|first1= M. |first2= M. |last2=Grondona|url=https://slurm.schedmd.com/slurm_design.pdf |title=SLURM: Simple Linux Utility for Resource Management|journal=Proceedings of ClusterWorld Conference and Expo|___location=San Jose, California|date=June 2003}}
* {{cite journal|last=Layton|first= Jeffrey B. |url=http://www.linux-mag.com/id/7239/1/|archive-url=https://web.archive.org/web/20090211041650/http://www.linux-mag.com/id/7239/1/|url-status=usurped|archive-date=February 11, 2009|title= Caos NSA and Perceus: All-in-one Cluster Software Stack|journal= Linux Magazine|date=5 February 2009}}
* {{cite conference|doi=10.1007/10968987_3|title=SLURM: Simple Linux Utility for Resource Management|conference=Job Scheduling Strategies for Parallel Processing|series=Lecture Notes in Computer Science|year=2003|last1=Yoo|first1=Andy B.|last2=Jette|first2=Morris A.|last3=Grondona|first3=Mark|isbn=978-3-540-20405-3|volume=2862|page=[https://archive.org/details/jobschedulingstr0000jssp_q2o1/page/44 44]|citeseerx=10.1.1.10.6834|url-access=registration|url=https://archive.org/details/jobschedulingstr0000jssp_q2o1/page/44}}
{{Div col end}}
 
==External links==
* {{Official website}}
* [https://slurm.schedmd.com Slurm Documentation]
* [https://www.open-mpi.org/video/slurm/Slurm_EMC_Dec2012.pdf Slurm Workload Manager Architecture Configuration and Use ]
* [https://s3-us-west-2.amazonaws.com/imss-hpc/index.html Caltech HPC Center: Job Script Generator]
 
{{Linux kernel}}
 
{{DEFAULTSORT:Slurm}}
[[Category:Grid computing]]
[[Category:Cluster computing]]
[[Category:Free software programmed in C]]
 
[[Category:Software using the GNU General Public License]]