{{short description|Free and open-source job scheduler for Linux and similar computers}}
{{primary sources|date=July 2010}}
{{Infobox software
| title = Slurm
| released = <!-- {{Start date and age|YYYY|MM|DD|df=yes/no}} -->
| discontinued =
| latest release version = {{wikidata|property|preferred|references|edit@end|P348|P548=Q2804309}}
| latest release date = {{start date and age|{{wikidata|qualifier|P348|P548=Q2804309|P577}}}}
| programming language = [[C (programming language)|C]]
| operating system = [[Linux]], [[BSD]]s
| platform =
| size =
| genre = Job Scheduler for Clusters and Supercomputers
| license = [[GNU General Public License]]
| website = {{Official URL|https://slurm.schedmd.com/}}
| logo_size = 170px
| logo_alt =
* arbitrating contention for resources by managing a queue of pending jobs.
 
Slurm is the workload manager on about 60% of the [[TOP500]] supercomputers.<ref>{{Cite web|url=https://hpcc.usc.edu/support/documentation/slurm/|title=Running a Job on HPC using Slurm|publisher=Center for High-Performance Computing, University of Southern California|website=hpcc.usc.edu|access-date=2019-03-05|archive-date=2019-03-06|archive-url=https://web.archive.org/web/20190306044130/https://hpcc.usc.edu/support/documentation/slurm/|url-status=dead}}</ref>
 
Slurm uses a [[Best-fit_bin_packing|best fit algorithm]] based on [[Hilbert curve scheduling]] or [[fat tree]] network topology in order to optimize locality of task assignments on parallel computers.<ref name=Eitan>{{Cite conference|doi=10.1007/978-3-642-04633-9_8|title=Effects of Topology-Aware Allocation Policies on Scheduling Performance|conference=Job Scheduling Strategies for Parallel Processing|series=Lecture Notes in Computer Science|year=2009|last1=Pascual|first1=Jose Antonio|last2=Navaridas|first2=Javier|last3=Miguel-Alonso|first3=Jose|isbn=978-3-642-04632-2|volume=5798|pages=138–144}}</ref>
* Support for MapReduce+
* Support for [[burst buffer]] that accelerates scientific data movement
 
The following features were announced for version 14.11 of Slurm, which was released in November 2014:<ref>{{cite web|url=https://slurm.schedmd.com/news.html |title=Slurm - What's New |publisher=SchedMD |access-date=2014-08-29}}</ref>
 
* Improved job array data structure and scalability
* Support for heterogeneous generic resources
* Add user options to set the CPU governor
* Automatic job requeue policy based on exit value
* Report API use by user, type, count and time consumed
* Communication gateway nodes improve scalability
 
==Supported platforms==
Recent Slurm releases run only on [[Linux]]. Older versions had been ported to a few other [[POSIX]]-based [[operating system]]s, including the [[BSD]]s ([[FreeBSD]], [[NetBSD]] and [[OpenBSD]]),<ref>[https://slurm.schedmd.com/platforms.html Slurm Platforms]</ref> but this is no longer feasible as Slurm now requires [[cgroups]] for core operations. Clusters running operating systems other than Linux will need to use a different batch system, such as LPJS. Slurm also supports several unique computer architectures, including:
* [[IBM]] [[BlueGene]]/Q models, including the 20 petaflop [[IBM Sequoia]]
* [[Cray]] XT, XE and Cascade
== Usage ==
[[File:EstadosTrabajosSLURM.jpg|thumb|Slurm distinguishes several stages for a job]]
The <code>slurm</code> system has three main parts:
 
* <code>slurmctld</code>, a central control [[Daemon (computing)|daemon]] running on a single control node (optionally with [[failover]] backups);
* many computing nodes, each with one or more <code>slurmd</code> daemons;
* clients that connect to the manager node, often with [[Secure Shell|ssh]].
 
The clients can issue commands to the control daemon, which accepts them and distributes the work among the computing daemons.
 
For clients, the main commands are <code>srun</code> (queue up an interactive job), <code>sbatch</code> (queue up a job), <code>squeue</code> (print the job queue) and <code>scancel</code> (remove a job from the queue).
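As an illustration, a minimal interactive session might look like the following; the resource amounts and the job ID are made-up examples, and available options vary between sites:

<syntaxhighlight lang="bash">
# Start an interactive job: one task, 10-minute limit, with a pseudo-terminal shell
srun --ntasks=1 --time=10:00 --pty bash

# Show the queue, filtered to the current user's jobs
squeue -u $USER

# Cancel a job by its numeric job ID (12345 is illustrative)
scancel 12345
</syntaxhighlight>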
 
Jobs can be run in [[Batch processing|batch mode]] or [[Interactive computing|interactive mode]]. For interactive mode, a compute node starts a shell, connects the client to it, and runs the job. From there the user may observe and interact with the job while it is running. Usually, interactive jobs are used for initial debugging; after debugging, the same job would be submitted with <code>sbatch</code>. For a batch mode job, its <code>stdout</code> and <code>stderr</code> outputs are typically directed to text files for later inspection.
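For example, a batch job can be described by a small shell script whose <code>#SBATCH</code> comment lines carry the resource request. The job name, file names, program name and limits below are illustrative, not defaults:

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --job-name=example        # illustrative job name
#SBATCH --output=example_%j.out   # stdout file; %j expands to the job ID
#SBATCH --error=example_%j.err    # stderr file
#SBATCH --ntasks=4                # number of tasks to launch
#SBATCH --time=00:30:00           # wall-clock limit (HH:MM:SS)

# srun launches the tasks on the resources that sbatch allocated
srun ./my_program
</syntaxhighlight>

Such a script would be submitted with <code>sbatch example.sh</code>, after which <code>squeue</code> shows the job's state in the queue.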
 
==See also==
 
==External links==
* {{Official website}}
* [https://slurm.schedmd.com Slurm Documentation]
* [https://www.schedmd.com SchedMD]
* [https://www.open-mpi.org/video/slurm/Slurm_EMC_Dec2012.pdf Slurm Workload Manager Architecture, Configuration and Use]
* [https://s3-us-west-2.amazonaws.com/imss-hpc/index.html Caltech HPC Center: Job Script Generator]
[[Category:Cluster computing]]
[[Category:Free software programmed in C]]
[[Category:Software using the GNU General Public License]]