The National Institute for Computational Sciences


  Running Jobs

General Information


When you log in, you will be directed to one of the login nodes. The login nodes should only be used for basic tasks such as file editing, code compilation, and job submission.

The login nodes should not be used to run production jobs. Production work should be performed on the system's compute resources. Serial jobs (pre- and post-processing, etc.) may be run on the compute nodes. Access to compute resources is managed by Torque (a PBS-like system). Job scheduling is handled by Moab, which interacts with Torque and system software.

This page provides information on getting started with the batch facilities of Torque and Moab, as well as basic job execution. Sometimes you may want to chain your submissions so that a full simulation completes without manual resubmission; see the job chaining documentation for details, and the minimal example below.
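As a minimal sketch of job chaining using standard Torque dependencies (the script names step1.pbs and step2.pbs are placeholders):

FIRST_JOBID=$(qsub step1.pbs)                    # qsub prints the ID of the newly submitted job
qsub -W depend=afterok:$FIRST_JOBID step2.pbs    # start step 2 only if step 1 completes successfully

The second job will not become eligible to run until the first job finishes with an exit status of zero.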

Batch Scripts


Batch scripts can be used to run a set of commands on a system's compute partition. Batch scripts allow users to run non-interactive batch jobs, which are useful for submitting a group of commands, allowing them to run through the queue, and then viewing the results. However, it is sometimes useful to run a job interactively (primarily for debugging purposes). Please refer to the Interactive Batch Jobs section for more information on how to run batch jobs interactively.

All non-interactive jobs must be submitted on Beacon using job scripts via the qsub command. The batch script is a shell script containing PBS flags and commands to be interpreted by a shell. The batch script is submitted to the resource manager, Torque, where it is parsed. Based on the parsed data, Torque places the script in the queue as a job. Once the job makes its way through the queue, the script will be executed on the head node of the allocated resources.

Jobs are submitted to the batch job scheduler in units of nodes via the -l nodes=# option. By default, MPI jobs will place one task per node, cycling over the allocated nodes. The default behavior can be overridden by adding the '-ppn=# -f $PBS_NODEFILE' options to the mpirun command. Nodes can be oversubscribed (i.e., more MPI ranks than the node has cores); however, the default behavior is to fill all cores on all nodes before adding additional MPI ranks. Extra ranks are placed by cycling over the nodes again, up to the number of cores per node, and this process repeats until all MPI ranks have been allocated. For example, a job that requests 3 nodes (-l nodes=3) with 16 cores each and launches an MPI job with 144 ranks (mpirun -n 144) will first place 16 ranks on each of the 3 nodes (48 ranks over 3 nodes), then place a second set of 48 ranks in the same way (16 ranks per node over 3 nodes), and finally place the remaining 48 ranks in the same way (16 ranks per node over 3 nodes).

If ranks remain unallocated after all cores are filled, placement starts again with the first node and continues node by node until all MPI ranks have been allocated. In cases where the number of MPI ranks per node is less than the number of available cores per node, the ranks are spread evenly across the processor cores. For example, if 8 MPI ranks are placed on a 16-core node (two processors of 8 cores each), four ranks will land on the first processor and four on the second.

All job scripts start with an interpreter line, followed by a series of #PBS declarations that describe requirements of the job to the scheduler. The rest is a shell script, which sets up and runs the executable.

Batch scripts are divided into the following three sections:

  1. Shell interpreter (one line)
    • The first line of a script can be used to specify the script's interpreter.
    • This line is optional.
    • If not used, the submitter's default shell will be used.
    • The line uses the syntax #!/path/to/shell, where the path to the shell may be
      • /usr/bin/csh
      • /usr/bin/ksh
      • /bin/bash
      • /bin/sh
  2. PBS submission options
    • The PBS submission options are preceded by #PBS, making them appear as comments to a shell.
    • PBS will look for #PBS options in a batch script from the script's first line through the first non-comment line. A comment line begins with #.
    • #PBS options entered after the first non-comment line will not be read by PBS.
  3. Shell commands
    • The shell commands follow the last #PBS option and represent the executable content of the batch job.
    • If any #PBS lines follow executable statements, they will be treated as comments only. The exception to this rule is shell specification on the first line of the script.
    • The execution section of a script will be interpreted by a shell and can contain multiple lines of executables, shell commands, and comments.
    • During normal execution, the batch script will end and exit the queue after the last line of the script.

The following examples show a typical job script header with various mpirun commands to submit a parallel job that executes ./a.out on 3 nodes with a wall-clock limit of two hours:

#PBS -S /bin/bash
#PBS -A ACF-UTK0011
#PBS -l nodes=3,walltime=02:00:00

cd $PBS_O_WORKDIR

Option 1:
mpirun -n 48 ./a.out
Places 48 MPI ranks (16 per node, placed 1 per node round-robin).

Option 2:
mpirun -n 96 ./a.out
Places 96 MPI ranks (32 ranks per node).

Option 3:
mpirun -n 96 -ppn=32 -f $PBS_NODEFILE ./a.out
Places 96 MPI ranks (32 ranks per node).

Option 4:
mpirun -n 24 -ppn=8 -f $PBS_NODEFILE ./a.out
Places 24 MPI ranks (8 per node, in groups of 8). Ranks 0-7 will be on node 1, ranks 8-15 on node 2, and ranks 16-23 on node 3.
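Putting the pieces together, a complete batch script might look like the following minimal sketch (the account string matches the header above; the job name and executable name are placeholders):

#!/bin/bash
#PBS -S /bin/bash
#PBS -A ACF-UTK0011
#PBS -N example_job
#PBS -l nodes=3,walltime=02:00:00
#PBS -j oe

# Return to the (Lustre) directory the job was submitted from
cd $PBS_O_WORKDIR

# 48 MPI ranks on the 3 requested nodes (16 ranks per node)
mpirun -n 48 ./a.out

The script is then submitted with qsub, e.g. qsub example_job.pbs.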

Jobs should be submitted from within a directory in the Lustre file system. It is best to always execute cd $PBS_O_WORKDIR as the first command. Please refer to the PBS Environment Variables section for further details.

Documentation describing additional PBS options can be consulted when building more complex job scripts.

Unless otherwise specified, your default shell interpreter will be used to execute shell commands in job scripts. If the job script should use a different interpreter, then specify the correct interpreter using:

 #PBS -S /bin/XXXX

Altering Batch Jobs


This section shows how to remove or alter batch jobs.

Remove Batch Job from the Queue

Jobs in the queue in any state can be stopped and removed from the queue using the command qdel.

For example, to remove a job with a PBS ID of 1234, use the following command:

> qdel 1234

More details on the qdel utility can be found on the qdel man page.

Hold Queued Job

Jobs in the queue in a non-running state may be placed on hold using the qhold command. Jobs placed on hold will not be removed from the queue, but they will not be eligible for execution.

For example, to move a currently queued job with a PBS ID of 1234 to a hold state, use the following command:

> qhold 1234

More details on the qhold utility can be found on the qhold man page.

Release Held Job

Once on hold, the job will not be eligible to run until it is released back to a queued state. The qrls command can be used to remove a job from the held state.

For example, to release job 1234 from a held state, use the following command:

> qrls 1234

More details on the qrls utility can be found on the qrls man page.

Modify Job Details

Non-running jobs (including jobs on hold) can be modified with the qalter PBS command. For example, this command can be used to:

Modify the job's name:

$ qalter -N <newname> <jobid>

Modify the number of requested nodes:

$ qalter -l nodes=<NumNodes> <jobid>

Modify the job's wall time:

$ qalter -l walltime=<hh:mm:ss> <jobid>

Set the job's dependencies:

$ qalter -W depend=type:argument <jobid>

Remove a job's dependency (omit :argument):

$ qalter -W depend=type <jobid>
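For example, to make job 1235 start only after job 1234 completes successfully (afterok is a standard Torque dependency type; the job IDs are placeholders):

$ qalter -W depend=afterok:1234 1235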

Notes:

  • Use qstat -f <jobid> to gather all the information about a job, including job dependencies.
  • Use qstat -a <jobid> to verify the changes afterward.
  • Users cannot specify a new walltime that exceeds the maximum walltime of the queue in which the job resides.
  • If you need to modify a running job, please contact us. Certain alterations can only be performed by administrators.
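As an illustration, the commands above can be combined to adjust a queued job; the job ID and the new wall time are placeholders:

$ qhold 1234                          # take the job out of eligibility
$ qalter -l walltime=04:00:00 1234    # request a different wall time
$ qstat -a 1234                       # verify the change
$ qrls 1234                           # make the job eligible again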

Interactive Batch Jobs


Interactive batch jobs give users interactive access to compute resources. A common use for interactive batch jobs is debugging. This section demonstrates how to run interactive jobs through the batch system and provides common usage tips.

Users are not allowed to run interactive jobs on the login nodes; instead, a batch-interactive PBS job is started by using the -I option with qsub. After the interactive job starts, the user should run computationally intensive applications from the Lustre scratch space, placing the executable after the mpirun command.

Interactive Batch Example

For interactive batch jobs, PBS options are passed through qsub on the command line. Refer to the following example:

qsub -I -A UT-NTNL0121 -l nodes=1,walltime=1:00:00

Option  Description
-I      Start an interactive session
-A      Charge to the “UT-NTNL0121” project
-l      Request 1 physical compute node (16 cores) for one hour

After running this command, you will have to wait until enough compute nodes are available, just as with any other batch job. However, once the job starts, the standard input and standard output of the terminal will be linked directly to the head node of the allocated resource. The executable should be placed on the same line after the mpirun command, just as in a batch script.


> cd /lustre/medusa/$USER
> mpirun -n 16 ./a.out

Issuing the exit command will end the interactive job.

Common PBS Options


This section gives a quick overview of common PBS options.

Necessary PBS options

Option  Use                      Description
A       #PBS -A <account>        Causes the job time to be charged to <account>. The account string is typically composed of three letters followed by three digits and optionally followed by a subproject identifier. The utility showusage can be used to list your valid assigned project ID(s). This is the only option required by all jobs.
l       #PBS -l nodes=<nodes>    Number of requested nodes.
        #PBS -l walltime=<time>  Maximum wall-clock time. <time> is in the format HH:MM:SS. Default is 1 hour.

Other PBS Options

Option  Use                  Description
o       #PBS -o <name>       Writes standard output to <name> instead of <job script>.o$PBS_JOBID. ($PBS_JOBID is an environment variable created by PBS that contains the PBS job identifier.)
e       #PBS -e <name>       Writes standard error to <name> instead of <job script>.e$PBS_JOBID.
j       #PBS -j {oe,eo}      Combines standard output and standard error into the standard error file (eo) or the standard output file (oe).
m       #PBS -m a            Sends email to the submitter when the job aborts.
        #PBS -m b            Sends email to the submitter when the job begins.
        #PBS -m e            Sends email to the submitter when the job ends.
M       #PBS -M <address>    Specifies the email address to use for -m options.
N       #PBS -N <name>       Sets the job name to <name> instead of the name of the job script.
S       #PBS -S <shell>      Sets the shell used to interpret the job script.
q       #PBS -q <queue>      Directs the job to run under the specified QoS. This option is not required to run in the default QoS.

Note: Please do not use the PBS -V option. It can propagate a large number of environment variable settings from the submitting shell into the job, which may cause problems for the batch environment. Instead of -V, pass only the necessary environment variables using -v <comma_separated_list_of_needed_envars>. You can also include module load statements in the job script (see the sketch after the example below).

Example:

#PBS -v PATH,LD_LIBRARY_PATH,PV_NCPUS,PV_LOGIN,PV_LOGIN_PORT
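Alternatively, as mentioned above, the required environment can be set up inside the job script itself. A minimal sketch (the module name is a hypothetical placeholder; load whatever modules your application actually needs):

#PBS -v PATH,LD_LIBRARY_PATH

# Set up the software environment inside the job instead of inheriting it
module load intel-mpi        # placeholder module name
mpirun -n 48 ./a.out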

Further details and other PBS options may be found using the man qsub command.

PBS Environment Variables


This section gives a quick overview of useful environment variables set within PBS jobs.

  • PBS_O_WORKDIR
    • PBS sets the environment variable PBS_O_WORKDIR to the directory from which the batch job was submitted.
    • By default, a job starts in your home directory. Often, you will want to run cd $PBS_O_WORKDIR to move back to the directory from which the job was submitted. The current working directory when you start mpirun should be in Lustre space.

Include the following command in your script if you want it to start in the submission directory:

cd $PBS_O_WORKDIR
  • PBS_JOBID
    • PBS sets the environment variable PBS_JOBID to the job's ID.
    • A common use for PBS_JOBID is to append the job's ID to the standard output and error file(s).

Include the following command in your script to append the job's ID to the standard output and error file(s):

#PBS -o scriptname.o$PBS_JOBID
  • PBS_NNODES
    • PBS sets the environment variable PBS_NNODES to the number of logical cores requested (not nodes). Given that Beacon has 16 physical cores per node, the number of nodes is given by $PBS_NNODES/16.
    • For example, a standard MPI program is generally started with mpirun -n $PBS_NNODES ./a.out; a short sketch follows below. See the Job Execution section for more details.
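A minimal sketch showing how these variables can be used together in a job script (the divisor of 16 matches Beacon's cores per node and would change for other partitions):

#PBS -l nodes=2,walltime=01:00:00

cd $PBS_O_WORKDIR                                          # return to the submission directory
echo "Job $PBS_JOBID is using $((PBS_NNODES / 16)) nodes"
mpirun -n $PBS_NNODES ./a.out                              # one MPI rank per requested core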


Monitoring Job Status


This section lists some ways to monitor jobs in the batch queue. Torque and Moab provide multiple tools to view the queues, system, and job status. Below are the most common and useful of these tools.

qstat

Use qstat -a to check the status of submitted jobs.


> qstat -a

ocoee.nics.utk.edu:
Job ID   Username  Queue  Jobname  SessID  NDS  TSK  Memory  Time      S  Time
-------  --------  -----  -------  ------  ---  ---  ------  --------  -  --------
102903   lucio     batch  STDIN    9317    --   16   --      01:00:00  C  00:06:17
102904   lucio     batch  STDIN    9590    --   16   --      01:00:00  R  --
>

The qstat output shows the following:

Job ID The first column gives the PBS-assigned job ID.
Username The second column gives the submitting user's login name.
Queue The third column gives the queue into which the job has been submitted.
Jobname The fourth column gives the PBS job name, as specified by the -N option in the batch script; if the -N option is not used, PBS uses the name of the batch script.
SessID The fifth column gives the associated session ID.
NDS The sixth column gives the PBS node count. This field is not accurate and will show one.
TSK The seventh column gives the number of logical cores requested by the job.
Req’d Memory The eighth column gives the job's requested memory.
Req’d Time The ninth column gives the job's requested wall time.
S The tenth column gives the job's current status. See the status listings below.
Elap Time The eleventh column gives the time the job has spent running. If the job is not currently running and has not previously run, the field will be blank.

The job's current status is reported by the qstat command. The possible values are listed in the table below.

Status value

Meaning

E Exiting after having run
H Held
Q Queued
R Running
S Suspended
T Being moved to a new location
W Waiting for its execution time
C Recently completed (within the last 5 minutes)
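To list only your own jobs, a username filter can be added (a standard qstat option):

> qstat -a -u $USER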

showq

The Moab showq utility gives a different view of jobs in the queue. The utility will show jobs in the following states:

Active These jobs are currently running.
Eligible These jobs are currently queued awaiting resources. A user is allowed five jobs in the eligible state.
Blocked These jobs are currently queued but are not eligible to run. Common reasons for a job being in this state are that the job is on hold or that its owner already has five jobs in the eligible state.

checkjob

The Moab checkjob utility can be used to view details of a job in the queue. For example, if job 736 is currently in a blocked state, the following can be used to view the reason:

> checkjob 736

The output may contain a line similar to the following:

BLOCK MSG: job 736 violates idle HARD MAXIJOB limit of 5 for user <your_username>  partition ALL (Req: 1  InUse: 5) 

This line indicates the job is in the blocked state because the owning user has reached the limit of five jobs currently in the eligible state.

showstart

The Moab showstart utility gives an estimate of when the job will start.

> showstart 100315
job 100315 requires 16384 procs for 00:40:00

Estimated Rsv based start in 15:26:41 on Fri Sep 26 23:41:12
Estimated Rsv based completion in 16:06:41 on Sat Sep 27 00:21:12

The start time may change dramatically as new jobs with higher priority are submitted, so rerun the command periodically for an updated estimate.

showbf

The Moab showbf utility shows the resources currently available for backfill. This can help you create a job that can be backfilled immediately. As such, it is primarily useful for short jobs.
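For example, to ask what is free for a one-hour backfill window, a duration can be passed to showbf (option names may vary between Moab versions; see man showbf):

> showbf -d 1:00:00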

Scheduling Policy


The ACF uses TORQUE as the resource manager and Moab as the scheduler to schedule jobs. Currently, the ACF staff are reviewing job throughput and adjusting scheduling policies in order to adapt and better serve all users.

The scheduler gives preference to jobs with large core counts and to projects that have invested in ACF resources. Users of institutional "opportunistic" projects get the lowest priority, but within that group all are treated equally. Moab is configured to do “first fit” backfill, which allows smaller, shorter jobs to use otherwise idle resources.

Users can alter certain attributes of queued jobs until they start running. The order in which jobs are run depends on the following factors:

  • number of cores requested - jobs that request more cores get a higher priority.
  • queue wait time - a job's priority increases along with its queue wait time (not counting blocked jobs as they are not considered "queued.")
  • account balance - jobs that submit using an "investor" project will have a higher priority. Users who use a project with a negative balance will have significantly lowered priority. If your project has "Opportunistic Project" in the description then you are using one of these lower priority projects.
  • number of jobs - a maximum of five jobs per user, at a time, will be eligible to run. The rest will be blocked.

Currently, single-core jobs submitted by the same user will be scheduled on the same node. Unlike Newton, the ACF only allows a node to be shared by jobs belonging to the same user.

In certain special cases, the priority of a job may be manually increased upon request. To request a priority change, contact NICS User Support with the job ID and the reason for the request.

More detailed information can be found in the Queues section.

Queues


Queues are used by the batch scheduler to aid in the organization of jobs. There is currently only one queue, 'batch'; jobs are instead categorized by the Quality of Service (QoS) attribute.

Job priority is based on several factors, including the QoS, the number of nodes, and the wall clock time requested. Jobs with larger node counts receive higher priority. Jobs with smaller node counts run effectively as backfill: while the scheduler is collecting nodes for larger jobs, jobs with short wall clock limits and small node counts may use those nodes without delaying the start time of the larger job.


Jobs are given a specific QoS based on investment type.

QoS       Priority  Jobs Queued/Running  Min Size  Max Size  Max Wall Clock Limit
priority  High      64/16                1 node    None      6 days
campus    Medium    32/8                 1 node    64 nodes  3 days
backfill  Backfill  100/None             1 node    16 nodes  24:00:00
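To run under a specific QoS from the table above, use the -q option described in the Common PBS Options section (not needed for the default QoS), for example:

 #PBS -q backfill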

Job Accounting


Projects are charged based on usage of compute resources. This section gives details on how each job’s usage is calculated. PBS allocates cores to batch jobs in units of the number of cores available per node. A node cannot be allocated to multiple jobs, so a job is charged for the entire node whether or not it uses all its cores.
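As an illustration, a job that requests 3 Beacon nodes (16 cores each) and runs for two hours occupies 3 × 16 × 2 = 96 core-hours, and the project is charged for all of them, even if the job uses only one core per node (assuming usage is accounted in core-hours).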

Features and Partitions


There are currently four active partitions within the ACF - Beacon, Rho, Monster, and KNL. Here is a quick overview of the nodes in each partition:

Node Set  CPU                     Nodes  Cores/node  GB Mem/Node  Total Cores  Interconnect    Feature/Partition name
Beacon    Intel® Xeon® E5-2670    44     16          256          704          FDR InfiniBand  beacon
Rho       Intel® Xeon® E5-2670    48     16          32           768          QDR InfiniBand  rho
Monster   Intel® Xeon® E5-2687W   1      24          1024         24           Ethernet        monster
KNL       Intel® Xeon® Phi® 7210  4      64          197          256          EDR             knl

To explicitly request nodes in any of these partitions, use the PBS -l partition or -l feature option. For example, to request nodes exclusively in the Beacon partition, include either

 #PBS -l feature=beacon

or

 #PBS -l partition=beacon

in your PBS script or interactive job submission. If you do not explicitly request a partition or feature, your job will be assigned nodes in either the Rho or Beacon partition. At this time, you cannot run jobs across multiple partitions or features.