The National Institute for Computational Sciences

Running Jobs

General Information

When you log into Darter, you will be directed to one of the login nodes. The login nodes should only be used for basic tasks such as file editing, code compilation, data backup, and job submission.

The login nodes should not be used to run production jobs. Production work should be performed on the system's compute resources. Serial jobs (pre- and post-processing, etc.) may be run on the compute nodes as long as they are statically linked. For running one or more single-processor jobs, please refer to the Job Execution section for more information. Access to compute resources is managed by the Portable Batch System (PBS). Job scheduling is handled by Moab, which interacts with PBS and the XC30 system software.

This page provides information on getting started with the batch facilities of PBS and Moab, as well as on basic job execution. If you want to chain your submissions to complete a full simulation without having to resubmit manually, you can read about job chaining here (please read it carefully).

Back to Contents

Batch Scripts

Batch scripts can be used to run a set of commands on a system's compute partition. They allow users to run non-interactive batch jobs: submit a group of commands, let them run through the queue, and then view the results. However, it is sometimes useful to run a job interactively, primarily for debugging purposes. Please refer to the Interactive Batch Jobs section for more information on how to run batch jobs interactively.

All non-interactive jobs must be submitted on Darter using job scripts via the qsub command. The batch script is a shell script containing PBS flags and commands to be interpreted by a shell. The batch script is submitted to the batch manager, PBS, where it is parsed. Based on the parsed data, PBS places the script in the queue as a job. Once the job makes its way through the queue, the script will be executed on the head node of the allocated resources.

All job scripts start with an interpreter line, followed by a series of #PBS declarations that describe the job's requirements to the scheduler. The rest is a shell script that sets up and runs the executable.

Batch scripts are divided into the following three sections:

  1. Shell interpreter (one line)
    • The first line of a script can be used to specify the script's interpreter.
    • This line is optional.
    • If not used, the submitter's default shell will be used.
    • The line uses the syntax #!/path/to/shell, where the path to the shell may be
      • /usr/bin/csh
      • /usr/bin/ksh
      • /bin/bash
      • /bin/sh
  2. PBS submission options
    • The PBS submission options are preceded by #PBS, making them appear as comments to a shell.
    • PBS will look for #PBS options in a batch script from the script's first line through the first non-comment line. A comment line begins with #.
    • #PBS options entered after the first non-comment line will not be read by PBS.
  3. Shell commands
    • The shell commands follow the last #PBS option and represent the executable content of the batch job.
    • If any #PBS lines follow executable statements, they will be treated as comments only. The exception to this rule is shell specification on the first line of the script.
    • The execution section of a script will be interpreted by a shell and can contain multiple lines of executables, shell commands, and comments.
    • During normal execution, the batch script will end and exit the queue after the last line of the script.

The following example shows a typical job script that includes the minimal requirements to submit a parallel job that executes ./a.out on 96 cores, charged to the fictitious account UT-NTNL0121 with a wall clock limit of one hour and 35 minutes:

#PBS -S /bin/bash
#PBS -A UT-NTNL0121
#PBS -l size=96,walltime=01:35:00

aprun -n 96 ./a.out

Jobs should be submitted from within a directory in the Lustre file system. It is best to always execute cd $PBS_O_WORKDIR as the first command. Please refer to the PBS Environment Variables section for further details.
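Putting these points together, a minimal script that starts in the submission directory might look like the following sketch (UT-NTNL0121 is the same fictitious account used elsewhere on this page; adjust the size and walltime to your job):

```shell
#PBS -S /bin/bash
#PBS -A UT-NTNL0121              # fictitious account; use your own project ID
#PBS -l size=96,walltime=01:35:00

# Move to the (Lustre) directory the job was submitted from.
cd $PBS_O_WORKDIR

aprun -n 96 ./a.out
```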

On Darter you must request size=<cores> as a multiple of 16, since there are 16 physical cores per node and it is not possible to allocate less than a full node even if you plan to use fewer cores. If you want to run on only 8 cores (-n 8), for example, you still need to request 16 cores (#PBS -l size=16). Otherwise you will receive the following error:

Notice: Your job was NOT submitted 

  Core requests on Darter must be a multiple of 16.  You have requested 
  an invalid number of cores ( 8 ). Please resubmit the 
  job requesting an appropriate number of cores. 
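This rounding rule can be sketched in a few lines of shell; round_up_16 is a hypothetical helper for illustration, not a NICS utility:

```shell
# Round a desired core count up to the next multiple of 16, the smallest
# valid value for #PBS -l size= on Darter (16 physical cores per node).
round_up_16() {
    echo $(( (($1 + 15) / 16) * 16 ))
}

round_up_16 8    # a 16-core (one-node) request covers an 8-core run
round_up_16 25   # 25 cores must be requested as 32 (two nodes)
```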

The online PBS documentation describes the options that can be used for more complex job scripts.

Unless otherwise specified, your default shell interpreter will be used to execute the shell commands in job scripts; in some cases PBS may try to guess which interpreter to use. If the job script should use a different interpreter, specify the correct interpreter using:

 #PBS -S /bin/XXXX

The following example shows a typical job script that saves a file to HPSS. Note that you must log in using your OTP token before submitting an HPSS job. The job is charged to the fictitious account UT-NTNL0121 with a wall clock limit of 5 hours and 20 minutes. You should not specify -l size in HPSS jobs.

#PBS -S /bin/bash
#PBS -A UT-NTNL0121
#PBS -l walltime=05:20:00
#PBS -q hpss

hsi put file01.tar


Altering Batch Jobs

This section shows how to remove or alter batch jobs.

Remove Batch Job from the Queue

Jobs in the queue in any state can be stopped and removed from the queue using the command qdel.

For example, to remove a job with a PBS ID of 1234, use the following command:

> qdel 1234

More details on the qdel utility can be found on the qdel man page.

Hold Queued Job

Jobs in the queue in a non-running state may be placed on hold using the qhold command. Jobs placed on hold will not be removed from the queue, but they will not be eligible for execution.

For example, to move a currently queued job with a PBS ID of 1234 to a hold state, use the following command:

> qhold 1234

More details on the qhold utility can be found on the qhold man page.

Release Held Job

Once on hold the job will not be eligible to run until it is released to return to a queued state. The qrls command can be used to remove a job from the held state.

For example, to release job 1234 from a held state, use the following command:

> qrls 1234

More details on the qrls utility can be found on the qrls man page.

Modify Job Details

Only non-running (queued or held) jobs can be modified, and only with the qalter PBS command. For example, this command can be used to:

Modify the job's name,

$ qalter -N <newname> <jobid>

Modify the number of requested cores,

$ qalter -l size=<NumCores> <jobid>

Modify the job's wall time,

$ qalter -l walltime=<hh:mm:ss> <jobid>

Set the job's dependencies,

$ qalter -W depend=type:argument <jobid>

Remove a job's dependency (omit :argument):

$ qalter -W depend=type <jobid>


  • Use qstat -f <jobid> to gather all the information about a job, including job dependencies.
  • Use qstat -a <jobid> to verify the changes afterward.
  • Users cannot specify a new walltime that exceeds the maximum walltime of the queue in which the job resides.
  • If you need to modify a running job, please contact us. Certain alterations can only be performed by NICS operators.


Interactive Batch Jobs

Interactive batch jobs give users interactive access to compute resources. A common use for interactive batch jobs is debugging. This section demonstrates how to run interactive jobs through the batch system and provides common usage tips.

Users are not allowed to run interactive jobs on compute resources directly from the login nodes. To run a batch-interactive PBS job, use the -I option with qsub. After the interactive job starts, run computationally intensive applications from the Lustre scratch space, placing the executable after the aprun command. The aprun command sends the application to the compute nodes to run.

Interactive Batch Example

For interactive batch jobs, PBS options are passed through qsub on the command line. Refer to the following example:

ssh -X darter    # note the capital X; lowercase -x turns off X11 forwarding
xclock           # if a clock pops up, X11 forwarding is working
qsub -I -A UT-NTNL0121 -X -l size=16,walltime=1:00:00



-I Start an interactive session
-A Charge to the “UT-NTNL0121” project
-X Enables X11 forwarding which is necessary for interactive GUIs. Note that you must have X11 forwarding enabled when you log in to Darter
-l size=16,walltime=1:00:00 Request 16 physical compute cores for one hour

After running this command, you will have to wait until enough compute nodes are available, just as with any other batch job. However, once the job starts, the standard input and standard output of this terminal will be linked directly to the head node of your allocated resource. The executable should be placed on the same line after the aprun command, just as it is in a batch script.

> cd /lustre/medusa/$USER
> aprun -n 16 ./a.out

From here, commands may be executed directly instead of through a batch script. Issuing the exit command will end the interactive job.

Using Interactive Batch Jobs to Debug

A common use of interactive batch jobs is debugging (see the Debugging page). The tips below may be useful while interactively debugging code through PBS. To help a job start quickly rather than sit in the queue, it is important to choose the job size appropriately. You can use the showbf command (short for "show backfill") to see immediately available resources that would allow your job to be backfilled (and thus started) by the scheduler. For example, the snapshot below shows that there are 9 nodes available for up to 1:49:03, so a job requesting 2 compute nodes would run immediately.

% showbf -p darter

Partition     Tasks  Nodes      Duration   StartOffset       StartDate
---------     -----  -----  ------------  ------------  --------------
darter          192      9       1:49:03      00:00:00  15:35:20_07/18
darter           64      1      INFINITY      00:00:00  15:35:20_07/18

The following command would then take advantage of this window for an interactive session:

qsub -I -A UT-NTNL0121 -X -l size=32,walltime=1:00:00

See showbf -help for additional options. For more information, see the online user guide for the Moab Workload Manager.


Common PBS Options

This section gives a quick overview of common PBS options.

Necessary PBS options




#PBS -A <account>  Causes the job time to be charged to <account>. Account strings are typically composed of three letters followed by three digits, optionally followed by a subproject identifier (UT-NTNL0121 is a fictitious example used throughout this page). The showusage utility can be used to list your valid assigned project ID(s). This is the only option required by all jobs.
#PBS -l size=<cores>  Requests <cores> physical cores. Requests must cover entire nodes (multiples of 16).
#PBS -l walltime=<time>  Maximum wall-clock time. <time> is in the format HH:MM:SS. Default is 1 hour.

Other PBS Options




#PBS -o <name>  Writes standard output to <name> instead of <job script>.o$PBS_JOBID. ($PBS_JOBID is an environment variable created by PBS that contains the PBS job identifier.)
#PBS -e <name>  Writes standard error to <name> instead of <job script>.e$PBS_JOBID.
#PBS -j {oe,eo}  Combines standard output and standard error into the standard error file (eo) or the standard output file (oe).
#PBS -m a  Sends email to the submitter when the job aborts.
#PBS -m b  Sends email to the submitter when the job begins.
#PBS -m e  Sends email to the submitter when the job ends.
#PBS -M <address>  Specifies the email address to use for -m options.
#PBS -N <name>  Sets the job name to <name> instead of the name of the job script.
#PBS -S <shell>  Sets the shell that interprets the job script.
#PBS -q <queue>  Directs the job to the specified queue. This option is not required to run in the general production queue.

Note: Please do not use the PBS -V option. It can propagate a large number of environment variable settings from the submitting shell into the job, which may cause problems for the batch environment. Instead of using PBS -V, please pass only the necessary environment variables using -v <comma_separated_list_of_needed_envars>. You can also include module load statements in the job script.



Further details and other PBS options may be found using the man qsub command.


PBS Environment Variables

This section gives a quick overview of useful environment variables set within PBS jobs.

    • PBS sets the environment variable PBS_O_WORKDIR to the directory from which the batch job was submitted.
    • By default, a job starts in your home directory. Often you will want to run cd $PBS_O_WORKDIR to move back to the submission directory. The current working directory when you start aprun should be on Lustre space.

Include the following command in your script if you want it to start in the submission directory:

cd $PBS_O_WORKDIR

    • PBS sets the environment variable PBS_JOBID to the job's ID.
    • A common use for PBS_JOBID is to append the job's ID to the standard output and error file(s).

Include the following command in your script to append the job's ID to the standard output and error file(s)

#PBS -o scriptname.o$PBS_JOBID

    • PBS sets the environment variable PBS_NNODES to the number of cores requested (not, despite its name, the number of nodes). Given that Darter has 16 physical cores per node, the number of nodes is given by $PBS_NNODES/16.
    • For example, a standard MPI program is generally started with aprun -n $PBS_NNODES ./a.out. See the Job Execution section for more details.
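As a sketch, the node count can be derived from the PBS_NNODES core count inside a job script; nodes_from_cores is a hypothetical helper name for illustration, not a PBS variable or command:

```shell
# Derive the node count from a core count, assuming Darter's
# 16 physical cores per node.
nodes_from_cores() {
    echo $(( $1 / 16 ))
}

PBS_NNODES=96    # set by PBS in a real job; hard-coded here for illustration
echo "size=$PBS_NNODES -> $(nodes_from_cores $PBS_NNODES) nodes"
```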


Monitoring Job Status

This section lists some ways to monitor jobs in the batch queue. PBS and Moab provide multiple tools to view the queues, the system, and job status. Below are the most common and useful of these tools.


Use qstat -a to check the status of submitted jobs.

> qstat -a

Job ID            Username  Queue  Jobname  SessID  NDS  TSK  Memory  Time      S  Time
----------------  --------  -----  -------  ------  ---  ---  ------  --------  -  --------
102903.ocoee.nic  lucio     batch  STDIN      9317   --   16      --  01:00:00  C  00:06:17
102904.ocoee.nic  lucio     batch  STDIN      9590   --   16      --  01:00:00  R        --

The qstat output shows the following:

Job ID The first column gives the PBS-assigned job ID.
Username The second column gives the submitting user's login name.
Queue The third column gives the queue into which the job has been submitted.
Jobname The fourth column gives the PBS job name, as specified by the PBS -N option in the batch script. If the -N option is not used, PBS uses the name of the batch script.
SessID The fifth column gives the associated session ID.
NDS The sixth column gives the PBS node count. This field is not accurate on Darter and will show one.
Tasks The seventh column gives the number of cores requested by the job's -l size option.
Req’d Memory The eighth column gives the job's requested memory.
Req’d Time The ninth column gives the job's requested wall time.
S The tenth column gives the job's current status. See the status listings below.
Elap Time The eleventh column gives the job's time spent in a running state. If the job is not currently running and has never run, the field is blank.

The job's current status is reported by the qstat command. The possible values are listed in the table below.

Status values:

E  Exiting after having run
H  Held
Q  Queued
R  Running
S  Suspended
T  Being moved to a new location
W  Waiting for its execution time
C  Recently completed (within the last 5 minutes)


The Moab showq utility gives a different view of jobs in the queue. The utility will show jobs in the following states:

Active These jobs are currently running.
Eligible These jobs are currently queued awaiting resources. A user is allowed five jobs in the eligible state.
Blocked These jobs are currently queued but are not eligible to run. Common reasons for jobs in this state are jobs on hold and the owner currently having five jobs in the eligible state.


The Moab checkjob utility can be used to view details of a job in the queue. For example, if job 736 is currently in a blocked state, the following can be used to view the reason:

> checkjob 736

The return may contain a line similar to the following:

BLOCK MSG: job 736 violates idle HARD MAXIJOB limit of 5 for user <your_username>  partition ALL (Req: 1  InUse: 5) 

This line indicates the job is in the blocked state because the owning user has reached the limit of five jobs currently in the eligible state.


The Moab showstart utility gives an estimate of when the job will start.

> showstart 100315
job 100315 requires 16384 procs for 00:40:00

Estimated Rsv based start in 15:26:41 on Fri Sep 26 23:41:12
Estimated Rsv based completion in 16:06:41 on Sat Sep 27 00:21:12

The start time may change dramatically as new jobs with higher priority are submitted, so you need to periodically rerun the command.


The Moab showbf utility gives the current backfill. This can help you create a job which can be backfilled immediately. As such, it is primarily useful for short jobs.


The utility xtnodestat can be used to see which jobs are currently running and which cabinets, nodes, and processors they are running on.


Scheduling Policy

Darter uses TORQUE and Moab to schedule jobs. NICS is constantly reviewing the scheduling policies in order to adapt and better serve users.

The scheduler gives preference to large core count jobs. Moab is configured to do “first fit” backfill. Backfilling allows smaller, shorter jobs to use otherwise idle resources.

Users can alter certain attributes of queued jobs until they start running. The order in which jobs are run depends on the following factors:

  • number of cores requested - jobs that request more cores get a higher priority.
  • queue wait time - a job's priority increases with the time it spends waiting in the queue.
  • account balance - jobs charged to an account with a negative balance have significantly lowered priority.
  • number of jobs - at most five jobs per user at a time are eligible to run; the rest are blocked.

In certain special cases, the priority of a job may be manually increased upon request. To request a priority change, contact NICS User Support with the job ID and the reason for the request.

More detailed information can be found in the Queues section.



Queues

Queues are used by the batch scheduler to aid in the organization of jobs. This section lists the available queues on Darter. An individual user may have up to 5 jobs eligible to start at any one time (regardless of how many jobs may already be running), while a project may have a total of 10 jobs eligible to run across all the users charging against that project. Jobs in excess of these limits will not be considered for execution. Additionally, users are limited to 25 simultaneously running jobs and projects are limited to 40 simultaneously running jobs.

For example, if you submit 12 jobs, 5 would be eligible, and 7 would be blocked (with an "Idle" state). If three of the jobs run, some blocked jobs will be released so that there are still 5 eligible jobs, and 4 blocked jobs. This continues until all jobs are run. This is done to make it easier to schedule the jobs (there are fewer jobs to consider), and to prevent a single user from dominating the system with many small jobs.
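The bookkeeping in this example can be sketched as a pair of shell helpers; eligible_of and blocked_of are hypothetical names for illustration, not Moab commands:

```shell
# Per-user rule: at most 5 queued jobs are eligible at a time;
# the remainder are blocked.
eligible_of() {
    if [ "$1" -lt 5 ]; then echo "$1"; else echo 5; fi
}
blocked_of() {
    echo $(( $1 - $(eligible_of "$1") ))
}

echo "12 queued jobs: $(eligible_of 12) eligible, $(blocked_of 12) blocked"
echo " 9 queued jobs: $(eligible_of 9) eligible, $(blocked_of 9) blocked"
```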

Job priority on Darter is based on the number of cores and wall clock time requested. Jobs with large core counts intentionally get the highest priority. Jobs with smaller core counts do run effectively on Darter as backfill. While the scheduler is collecting nodes for larger jobs, those with short wall clock limits and small core counts may use those nodes temporarily without delaying the start time of the larger job.

Capability jobs on Darter

Users are encouraged to submit capability jobs on Darter at any time. However, capability jobs are only executed at specific times at the discretion of NICS, generally after preventative maintenance periods or on demand if preventative maintenance is not performed. Users who plan to run capability jobs, or who have questions about them, are encouraged to contact NICS User Support or their NICS point of contact.

Darter Queues

Jobs on Darter are sorted into queues based on size and walltime.

Darter Queue  Min Size  Max Size  Max Wall Clock Limit
hpss          n/a       n/a       24:00:00
batch         16        11,968    24:00:00
capability    6,016     11,584    24:00:00

* Requests for jobs on Darter must be multiples of 16. For example, the smallest job on Darter would request 16 (physical) cores.


Job Execution

Once the access to compute resources has been allocated through the batch system, users have the ability to execute jobs on the allocated resources. This section gives examples of job execution and provides common tips.

The PBS script itself is executed on the aprun node (or on the login node for interactive jobs). Any calls made directly to programs (e.g., ./a.out) will be executed on that service node. This may be useful for record keeping, staging data, etc. Any memory- or computationally-intensive program should be run using aprun; otherwise it bogs down the service node and may cause system problems. You may also run non-MPI programs on a compute node using aprun; see the Single-Processor (Serial) Jobs and Running Multiple Single-Processor Programs sections below.

To launch parallel jobs on one or more compute nodes, use the aprun command. Keep Darter's system specifications in mind when running a job with aprun. A Darter XC30 node consists of two sockets, each with 8 physical cores (16 logical cores with Hyper-Threading), so there are 16 physical cores per node (32 logical with HT). The PBS size option requests physical compute cores, not logical cores. This is not necessarily the number of cores that will be used, but rather the number of physical cores made available to your job (idle cores are still inaccessible to other users). The easiest way to determine this number is to calculate the number of nodes the job will occupy (with or without Hyper-Threading) and multiply by 16 physical cores per node.
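The calculation described above can be sketched as follows; size_for is a hypothetical helper that computes the #PBS -l size value from the desired rank count and the ranks per node (16 without Hyper-Threading, up to 32 with aprun -j 2):

```shell
size_for() {  # usage: size_for <mpi_ranks> <ranks_per_node>
    ranks=$1
    per_node=$2
    # Round up to whole nodes, then convert to physical cores (16/node).
    nodes=$(( (ranks + per_node - 1) / per_node ))
    echo $(( nodes * 16 ))
}

size_for 96 16   # 6 nodes without Hyper-Threading: size=96
size_for 64 32   # 2 nodes with -j 2 (32 ranks/node): size=32
```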

The following options are commonly used with aprun:

Commonly used options for aprun

-n Total number of MPI processes (default: 1)
-N Number of MPI processes per node (1-16)
-S Number of MPI processes per socket (1-8)
-d Specifies number of cores per MPI process (for use with OpenMP, 1-16)
-j 2 Turns on Hyper-Threading on the processor (off by default) and allocates two processes per physical core

The best way to understand the effects of these options is to try them yourself. Please see our tutorial on the subject.

MPI examples

aprun -n $PBS_NNODES ./a.out

This uses all physical cores, with one MPI process on each core. The environment variable PBS_NNODES is the number of cores requested at the top of the PBS script. In most cases, it is unnecessary to do anything beyond this.

aprun -n 25 ./a.out

If for some reason you want to use a number of cores that is not a multiple of 16, that is valid. Round up to the next multiple of 16 for the resource request; the extra cores will remain idle. This example would require #PBS -l size=32.

aprun -n 12 -N 6 ./a.out

This makes the XC30 emulate the 6-cores-per-socket layout of the XT5: there will be six MPI processes per node, all on one socket. This example would require requesting 32 physical cores on the XC30.

aprun -n 8 -S 2 ./a.out

On the XC30, this is similar to the previous example, running four MPI processes per node, but now with two on each socket. This ensures that both sockets are used and that the processes are evenly distributed across the sockets' L3 cache and memory (a process can access memory on the other socket, but not as quickly as its own).

aprun -n 32 -j 2 ./a.out

On the XC30, 32 MPI processes will run on a single node, one rank per logical core, by exploiting Hyper-Threading.


Darter supports threaded programming within a node. The aprun -d flag specifies the number of cores per MPI process, so with OpenMP, aprun -d $OMP_NUM_THREADS gives each thread its own core. When using every core, this requires at least n*d cores to be requested; the following examples assume that three nodes have been requested (#PBS -l size=48).

aprun -n24 -N8 -S4 -d2 ./a.out

Here, each MPI process has two OpenMP threads, filling three whole nodes. For some codes, two OpenMP threads per MPI process may be optimal. If the reason for using OpenMP is instead to increase the available memory per process, you may want to use 8 or even 16 threads per MPI process, though there is some performance penalty for using OpenMP across sockets in the XC30's current configuration (the sockets communicate over the QuickPath Interconnect, QPI).

aprun -n6 -N2 -S1 -d7 ./a.out

The -d flag specifies the depth, i.e., the number of cores assigned to each MPI process (when the MPI process spawns an OpenMP thread, there is a dedicated core to put it on). The -S option places the second process on each node on the second socket, rather than filling the first socket first.
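The n*d rule for hybrid jobs can be sketched the same way; hybrid_size is a hypothetical helper that computes the minimum size request for a given rank count and depth, rounded up to whole 16-core nodes:

```shell
hybrid_size() {  # usage: hybrid_size <mpi_ranks> <depth>
    cores=$(( $1 * $2 ))                  # every rank gets <depth> dedicated cores
    echo $(( ((cores + 15) / 16) * 16 ))  # round up to whole nodes
}

hybrid_size 24 2   # aprun -n24 -d2: 48 cores, i.e. #PBS -l size=48
hybrid_size 6 7    # aprun -n6 -d7: 42 cores, rounded up to size=48
```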

Single-Processor (Serial) Jobs

Serial programs that are memory- or computationally-intensive should never be run on the service nodes (i.e., anywhere outside of aprun). Service nodes have limited resources shared by all users, and exhausting them can cause system problems. To run serial programs on the compute nodes, the program must be compiled with the compiler wrappers (cc, CC, or ftn). Then request one node (16 cores) with PBS (#PBS -l size=16) and use the following line to run the serial executable on a compute node:

aprun -n 1 ./a.out 

Running Multiple Single-Processor Programs

If you need to run many instances of a serial code (as in a typical parameter sweep study for instance), we highly recommend using Eden. Eden is a simple script-based master-worker framework for running multiple serial jobs within a single PBS job. Detailed instructions for using Eden are found here.

Job Accounting

Projects are charged based on usage of compute resources. This section gives details on how each job’s usage is calculated. PBS allocates cores to batch jobs in units of the number of cores available per node. A node cannot be allocated to multiple jobs, so a job is charged for the entire node whether or not it uses all its cores. The PBS -l size option specifies the number of cores to allocate to a job. For example on Darter a multiple of 16 must be requested.

Getting Accounting Information

This section illustrates the usage of two commonly used utilities for obtaining accounting information.


The showusage utility can be used to view your project allocation and overall usage through the last job accounting posting (usually the previous night).


More detailed accounting information can be obtained using the glsjob command:

glsjob -u <username>
Prints current accounting information for a particular user.
glsjob -J <jobid>.xt5
Can be used to find information for a particular job.
glsjob -p <project>
Prints current accounting information for all jobs charged to a particular project account.
glsjob --man
Displays documentation for glsjob

Note: You can grep the output for the particular information you need.

On Darter the service unit charge for each job is:

32 x walltime x number of nodes

where walltime is the number of wall clock hours used by the job, and number of nodes is the PBS size value divided by 16.
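For illustration, this charge formula can be written out in shell; su_charge is a hypothetical helper, and walltime is taken as whole hours for simplicity:

```shell
su_charge() {  # usage: su_charge <size_in_cores> <walltime_hours>
    nodes=$(( $1 / 16 ))         # PBS size divided by 16 cores per node
    echo $(( 32 * $2 * nodes ))
}

su_charge 96 2   # a size=96 (6-node) job running 2 hours is charged 384 SUs
su_charge 16 1   # the smallest possible job for one hour is charged 32 SUs
```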

Job Refund Policy

NICS will provide refunds for user jobs that are adversely impacted by system issues beyond the control of the user. Refund requests must be made within two calendar weeks of a job's completion date by submitting a ticket to NICS User Support. Please provide: username, machine name, job ID, and the reason for the refund request.

Examples of refund requests that will not be approved include: jobs run on projects that have a negative balance, jobs that started and completed after the project’s end date, and jobs that failed because they reached the user-specified wallclock limit.

NICS strongly encourages the use of application checkpoint/restart files. Users should only request refunds for the time since the last successful checkpoint. The refund limit for eligible jobs is six hours. Exceptions to the maximum refund will only be considered for cases where appropriate checkpointing cannot effectively mitigate the loss due to the nature of the underlying machine problem.
