# Slurm Job Scheduler

The HPC cluster uses the Slurm job scheduler to assign user jobs to compute nodes. Jobs are allocated based on the requested resources, submission time, and the user's previous usage. We use Slurm's fair-share algorithm, which adjusts job priority to balance usage across all of our users.
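To see how fair-share affects your jobs, you can inspect your usage and the priority factors of your pending jobs directly. A minimal sketch (these commands require access to a Slurm cluster, so the output will depend on your account and queue state):

```shell
# Show your fair-share usage and effective share
# (lower recent usage generally means higher priority)
sshare -u $USER

# Show the priority factors (age, fair-share, etc.) of your pending jobs
sprio -u $USER -l
```

Omit `-u $USER` from `sprio` to see the priorities of all pending jobs in the queue.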

## Command Quick Reference

- `squeue` — lists your jobs in the queue
- `sinfo` — lists the state of the nodes and partitions in the cluster
- `sbatch` — submits a batch job script
- `sprio` — displays the priority factors of pending jobs in the queue
- `scancel` — cancels a job
- `srun` — runs a command on allocated compute nodes (also used for interactive jobs)
- `sacct` — displays accounting data for past jobs
- `seff` — displays the CPU and memory efficiency of a completed job
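The commands above fit together in a typical workflow: write a batch script, submit it with `sbatch`, monitor it with `squeue`, and review it afterwards with `sacct` and `seff`. A minimal job script sketch (the job name, resource amounts, and `./my_program` are placeholders; adjust them for your workload and check your cluster's limits):

```shell
#!/bin/bash
#SBATCH --job-name=example     # job name shown in squeue
#SBATCH --ntasks=1             # number of tasks (processes)
#SBATCH --cpus-per-task=4      # CPU cores per task
#SBATCH --mem=8G               # total memory for the job
#SBATCH --time=01:00:00        # wall-time limit (HH:MM:SS)

# srun launches the workload on the allocated resources
srun ./my_program
```

Submit with `sbatch job.sh`, watch progress with `squeue -u $USER`, and after the job finishes, run `seff <jobid>` to check whether the requested CPUs and memory were actually used.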