Monitoring and Managing Your Jobs
- Seeing what jobs are running/queued
- When will my job start
- Detailed information about your jobs
- Viewing output of jobs in progress
- Cancelling your jobs
- Monitoring the cluster
Seeing what jobs are running/queued
The Slurm command to list what jobs are running/queued is squeue:
login-1: squeue
  JOBID PARTITION      NAME     USER ST       TIME  NODES NODELIST(REASON)
1243530  standard  test2.sh  payerle  R   18:47:23      2 compute-b18-[2-3]
1244127  standard  slurm.sh    kevin  R    1:15:47      1 compute-b18-4
1230562  standard  test1.sh  payerle PD       0:00      1 (Resources)
1244242  standard  test1.sh  payerle PD       0:00      2 (Resources)
1244095  standard slurm2.sh    kevin PD       0:00      1 (ReqNodeNotAvail)
The ST column gives the state of the job,
with the following codes:
- R for Running
- PD for PenDing
- TO for TimedOut
- PR for PReempted
- S for Suspended
- CD for CompleteD
- CA for CAncelled
- F for Failed
- NF for jobs terminated due to Node Failure
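If you have many jobs, you can filter the squeue listing on these state codes with the standard -t/--states option, shown here together with -u to restrict the listing to your own jobs; a minimal sketch:

```shell
# Show only your pending jobs; use -t RUNNING (or -t R) for running ones
squeue -u "$USER" -t PENDING
```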
The NODELIST(REASON) field shows which nodes a running job is
running on. If the job is pending (i.e. not running), it instead gives
a short explanation of why the job is not running
(as of the last time the scheduler examined the job). Typically one might
see something like:
- (Resources): the scheduler is unable to find sufficient idle resources to run your job (i.e. the cluster is too busy to run your job at this time). The job should run once resources become available (i.e. some currently running jobs complete, freeing resources).
- (Priority): there are other jobs with higher priority ahead of yours in the queue. The job should run once the jobs ahead of it get scheduled.
- (AssociationJobLimit): this generally means that your allocation account has insufficient funds available to complete this job and all currently running jobs charging against that allocation account. See the relevant FAQ entry for more information. This job will only run if the currently running jobs complete using far fewer SUs than predicted (based on their wall time limits) and/or the allocation account gets replenished.
- (QOSResourceLimit): this generally occurs only if you have submitted a large number of jobs. Some of those jobs will be held in a pending state to prevent adverse impact on the rest of the cluster. These jobs will typically run once the job count is reduced (by currently running jobs completing). See the relevant FAQ entry for more information.
Typically, if you see a reason not in the above list, there is a problem and you will want to contact systems staff for assistance.
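To see at a glance why your pending jobs are waiting, you can tally the REASON column with awk. A sketch, shown here running on sample squeue output lines (on the cluster you would instead pipe `squeue -u $USER -t PD` into the same awk):

```shell
# Count pending jobs per reason; the last field of each squeue line is (REASON)
printf '%s\n' \
  '1230562 standard test1.sh payerle PD 0:00 1 (Resources)' \
  '1244242 standard test1.sh payerle PD 0:00 2 (Resources)' \
  '1244095 standard slurm2.sh kevin PD 0:00 1 (ReqNodeNotAvail)' |
awk '{count[$NF]++} END {for (r in count) print count[r], r}' | sort
# → 1 (ReqNodeNotAvail)
# → 2 (Resources)
```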
The squeue command also takes a wide range of options, including
options to control what is output and how. See the
man page (man squeue)
for more information.
For example, if you add the following to your .cshrc
file (assuming you are using a C-shell variant):
alias sqp 'squeue -S -Q -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %Q %R"'
then sqp will list jobs in the queue in order of descending priority.
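If your login shell is bash rather than a C-shell variant, the equivalent line for your ~/.bashrc would be:

```shell
# Same squeue options as above, in bash alias syntax
alias sqp='squeue -S -Q -o "%.18i %.9P %.8j %.8u %.2t %.10M %.6D %Q %R"'
```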
For those fond of the Torque/PBS
showq command, Slurm has one
also. It takes somewhat different arguments than the original Moab/Torque version;
the main difference is that
-u displays your jobs only, and
does NOT take a username as an argument. A separate option, which does take a
username as its argument, is the equivalent of the original Moab/Torque -u,
displaying only jobs for the specified user.
login-2> showq

ACTIVE JOBS--------------------
JOBID     JOBNAME    USERNAME      STATE   CORE   REMAINING            STARTTIME
================================================================================
238    slurmtest.       kevin    Running      1     0:29:49  Thu May 22 11:18:47
239    slurmtest.       kevin    Running      1     0:29:52  Thu May 22 11:18:50

2 active jobs

WAITING JOBS------------------------
JOBID     JOBNAME    USERNAME      STATE   CORE     WCLIMIT            QUEUETIME
================================================================================
240    slurmtest.       kevin    Waiting      1     0:30:00  Thu May 22 11:18:57

Total Jobs: 3   Active Jobs: 2   Idle Jobs: 1   Blocked Jobs: 0
When will my job start?
The scheduler tries to schedule all jobs as quickly as possible, subject to cluster policies, available hardware, allocation priority (contributors to the cluster get higher-priority allocations), etc. Typically jobs run within a day or so, but this can vary, and usage of the cluster can vary widely at times.
The squeue command, with the appropriate
arguments, can show you the scheduler's
estimate of when a pending/idle job will start running. It is, of course,
just the scheduler's best estimate given current conditions; the actual
time a job starts might be earlier or later than that, depending on factors such
as the behavior of currently running jobs, the submission of new jobs, and
hardware issues.
To see this, you need to include the
%S field in the output format option, e.g.
login-1> squeue -o "%.9i %.9P %.8j %.8u %.2t %.10M %.6D %S"
    JOBID PARTITION     NAME     USER ST       TIME  NODES           START_TIME
      473  standard test1.sh  payerle PD       0:00      4  2014-05-08T12:44:34
      479  standard test1.sh    kevin PD       0:00      4                  N/A
      489  standard tptest1.  payerle PD       0:00      2                  N/A
Obviously, the times given are estimates. The job could start earlier if other jobs ahead of it in the queue do not use their full walltime, or could get delayed if jobs with a higher priority than yours are submitted before your start time.
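As a shortcut, squeue also provides a --start option that lists pending jobs along with their estimated start times, without your having to construct a format string; the estimates carry the same caveats as above:

```shell
# Estimated start times for your pending jobs
squeue --start -u "$USER"
```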
Detailed information about your jobs
To get more detailed information about your job, you can use the
scontrol show job JOBNUMBER command. This command
provides a great deal of detail about your job, e.g.
login-2> scontrol show job 486
JobId=486 Name=test1.sh
   UserId=payerle(34676) GroupId=glue-staff(8675)
   Priority=33 Account=test QOS=normal
   JobState=PENDING Reason=Priority Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0
   RunTime=00:00:00 TimeLimit=00:03:00 TimeMin=N/A
   SubmitTime=2014-05-06T11:20:20 EligibleTime=2014-05-06T11:20:20
   StartTime=Unknown EndTime=Unknown
   PreemptTime=None SuspendTime=None SecsPreSuspend=0
   Partition=standard AllocNode:Sid=pippin:31236
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=(null)
   NumNodes=2 NumCPUs=8 CPUs/Task=1 ReqS:C:T=*:*:*
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0
   Features=(null) Gres=(null) Reservation=(null)
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)
   Command=/export/home/pippin/payerle/slurm-tests/test1.sh
   WorkDir=/home/pippin/payerle/slurm-tests
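Since scontrol packs several Key=Value fields onto each line, it can be convenient to pull out just one. A sketch using standard text tools, shown here on a saved sample line (on the cluster, pipe scontrol show job JOBNUMBER into the same filter):

```shell
# Extract the JobState= field from scontrol-style "Key=Value" output
echo 'JobState=PENDING Reason=Priority Dependency=(null)' |
  tr ' ' '\n' | grep '^JobState='
# → JobState=PENDING
```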
Viewing output of jobs in progress
Slurm writes the output (stdout and stderr)
for your job to the files you specified, on the shared filesystem,
in real time. There is no need for an extra command to view the output
of a running job, as there was under the PBS/Moab/Torque environment.
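For example, if your job script used #SBATCH --output=myjob.out (the filename here is just an illustration), you can follow the file as the job writes to it:

```shell
# Follow the job's output in real time; Ctrl-C stops tail, not the job
tail -f myjob.out
```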
Cancelling Your Jobs
Sometimes one needs to kill a job. To kill/cancel a job that is
waiting in the queue, or is already running, use the scancel command:
login-1> scancel -i 122488
Cancel job_id=122488 name=test1.sh partition=standard [y/n]? y
login-1>
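scancel can also select jobs by attributes instead of a single job id; for example, its standard -u and -t flags let you cancel all of your pending jobs at once (use with care, as this cancels every matching job):

```shell
# Cancel all of YOUR jobs that are still pending (not yet started)
scancel -u "$USER" -t PENDING
```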
Monitoring the Cluster
Sometimes you want a broader overview of the cluster. The
squeue command can give you information on what jobs are
running on the cluster, and the
sinfo -N command can show
you attributes of the nodes on the cluster. But both of these use
a text-oriented display which, while providing a fairly dense amount of
information, is often difficult to digest.
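For instance, sinfo's standard -N and -l flags give a per-node long listing:

```shell
# One line per node, long format (state, CPUs, memory, reason, etc.)
sinfo -N -l
```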
The smap command tries to present this more graphically. While
still text based, the display starts with a representation of the nodes
in the cluster, showing letters indexed to running jobs in a list below.
More information can be found in its man page.
The sview command is even prettier, but as it uses real
(not text mode) graphics, it requires an X server running on the computer
you are sitting at. It presents a graphical overview of the nodes
in the cluster and their state, as well as the job queue.
PLEASE SET THE REFRESH INTERVAL to something like 300 seconds (5 minutes)
to avoid placing unnecessary load on the scheduler.
For an even prettier view, there are HPC dashboards available online for the clusters at:
The above items show the current state of the cluster, but sometimes one wishes a more historical perspective, e.g. how was my allocation used over the past year? Historical metrics for the Deepthought clusters are available from the Open XDMoD (XD Metrics on Demand) package.
Finally, sometimes you wish to look in more detail at how a node or group of nodes is performing, e.g. to get a better idea of how much memory your job is using. We provide various metrics for the nodes in the cluster at the Ganglia sites: