The National Institute for Computational Sciences

Nautilus Quick Start Guide

Nautilus was decommissioned on May 1, 2015. For more information, see Nautilus Decommission FAQs.

Overview

Nautilus is a robust multicore shared-memory system primarily used for serial and parallel visualization and analysis of data from simulations, sensors, or experiments. It supports both distributed processing across a large number of processors and the execution of legacy serial analysis algorithms on very large data sets, by many users simultaneously. It has a peak performance of 8.2 teraflops.

Nautilus Architecture

  • SGI Altix UV 1000
  • 1024 cores
  • 128 eight-core Intel Nehalem EX processors
  • 4 UV10 "Harpoon" nodes (32 cores, 128 GB of memory, and 2 GPUs each)
  • 4 TB of global shared memory
  • 4 NVIDIA Tesla (Fermi) GPUs
  • 1.3 PB Lustre Medusa Filesystem

Access and Login

In order to use ssh to access Nautilus, you must use a one-time password (OTP) token. For more information on obtaining a token, see the Access and Login page.

To connect to Nautilus, type the following at a shell prompt:

ssh username@login.nautilus.nics.utk.edu

or access via GSI by first logging in to the XSEDE "Single Sign-On Hub":

ssh username@login.xsede.org

and then typing the following command:

gsissh gsissh.nautilus.nics.xsede.org

Nautilus has three login nodes: Arronax, Conseil, and Nedland (GSI).

Currently, gridftp, gsissh, ftp, sftp, scp, bbcp, and hsi are available on Nautilus. See the Data Transfer page for more information.
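For example, scp can be used to copy a file from your local machine into your Lustre scratch space (the file name here is illustrative):

scp input.dat username@login.nautilus.nics.utk.edu:/lustre/medusa/username/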


How to Compile Your Code

The easiest way to compile your code on Nautilus is to use the default Intel compilers. The following examples show how:

Serial code

icc serialcode.c -o serialprogram

MPI and C program

icc myprogram.c -o myprogram -lmpi

MPI and C++ program

icpc myprogram.cxx -o myprogram -lmpi++ -lmpi

MPI and Fortran program

ifort hello_mpi.f90 -o hello_mpi -lmpi

For more information about other available compilers and options, please see the Compiling on Nautilus page.


Running Jobs

To submit a job to PBS, use the qsub command. The most common way to use qsub is to simply pass it the name of a 'batch script' that contains all of the information about running your job:

> qsub myscript.pbs

The following is an example of how to write a batch script.

#PBS -S /bin/bash
#PBS -A UT-NTNL0121
#PBS -j oe
#PBS -l ncpus=96,mem=128GB,walltime=01:35:00
cd /lustre/medusa/$USER
mpiexec -np 96 ./a.out

OpenMP

OpenMP programs must also be run as batch/interactive jobs. The number of threads spawned by an OpenMP program is controlled by the environment variable OMP_NUM_THREADS. For programs that use pthreads, use the OMP_NUM_THREADS environment variable in the same way to control the number of threads. See the Batch Scripts page for more information on batch jobs.
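As a minimal sketch, assuming an executable named omp_program and the account ID from the earlier example, an OpenMP batch script might look like this:

#PBS -S /bin/bash
#PBS -A UT-NTNL0121
#PBS -j oe
#PBS -l ncpus=16,mem=64GB,walltime=01:00:00
cd /lustre/medusa/$USER
export OMP_NUM_THREADS=16   # spawn one thread per requested core
./omp_program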

You can also use qsub to start an interactive session on Nautilus by including the -I option:

> qsub -I -A XXXYYY -l ncpus=16,walltime=1:00:00,mem=64GB

By default, jobs with 32 cores or fewer will be placed on a Harpoon node if one is available.

To guarantee job placement on Nautilus use:

#PBS -l feature=nautilus

To guarantee placement on a Harpoon node (for jobs with 32 cores or fewer), use:

#PBS -l feature=uv10

Note: For job submission, the only required option is -l ncpus=. If you specify only ncpus, your default account will be charged, walltime defaults to 1 hour, and memory defaults to 4000 MB per requested CPU. See the Job Accounting page for more information about how jobs are charged.
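For example, the following submission relies entirely on those defaults (the script name is illustrative):

> qsub -l ncpus=16 myscript.pbs

This job is charged to your default account, may run for up to 1 hour, and is allocated 16 x 4000 MB of memory.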

Eden

In addition to running traditional parallel MPI and/or OpenMP applications, Nautilus can also run many concurrent instances of serial applications using Eden.

For even more advanced options, see the Nautilus Running Jobs page.


Filesystems

Nautilus has access to two file systems:

  • $HOME points to /nics/[a-d]/home/$USER, the Network File System home space (NFS, quota enforced)
  • The Lustre Medusa File System is located at /lustre/medusa/$USER (Lustre scratch space; no quota, 30-day purge policy)

Network File System (NFS)

The Network File System (NFS) server contains users' home directories, project directories, and software directories. Home and project directories can be used to store frequently used items such as source code, binaries, and scripts. Both are accessible from all interactive NICS resources. See the File systems page for more information.

Lustre Medusa File System

This space is intended for production work, not long-term storage. It is temporary and subject to frequent purging to ensure the stability of the system for all users.
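Because of the purge policy, archive anything you need to keep. One sketch, using the hsi client mentioned under Access and Login (file and directory names are illustrative):

cd /lustre/medusa/$USER
tar cf results.tar results/   # bundle job output into a single archive
hsi put results.tar           # copy the archive to the HPSS archive system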

Optimizing your I/O performance will not only lessen the load on Lustre but also save you compute time. Please consider reading the Lustre Striping Guide, which we believe will help you make the best use of the parallel Lustre file system and improve your application's I/O performance.
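For instance, striping can be inspected and adjusted per directory with the standard lfs utility (the directory name and the stripe count of 4 are only examples; see the guide for how to choose a value):

lfs getstripe /lustre/medusa/$USER/rundir        # show the current stripe settings
lfs setstripe -c 4 /lustre/medusa/$USER/rundir   # stripe new files in rundir across 4 OSTs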


Modules

module avail

Lists all available installed packages/software.

module list

Lists all currently loaded modules in your environment.

module avail {package}

Lists all currently installed versions of the specified package and marks the "default" version. If you want a different version, use the module load {package/version#} command.

module load {package/version#}

Loads package.

module unload {package}

Removes package from environment.

module swap {A} {B}

Swaps loaded {A} package with {B} package.

module show {package}

Shows the paths the module modifies, in case you would like to see where the package's libraries and executables are located.
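As a worked example, assuming the compiler environments are provided as PE modules (e.g. PE-intel and PE-pgi, as the module avail PE command in the next section suggests), switching from the default Intel environment to PGI might look like this:

module avail PE               # list the installed compiler environments
module swap PE-intel PE-pgi   # replace the Intel environment with PGI
module list                   # confirm the change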


Compiling

module avail PE

Lists all installed compilers.

To compile OpenMP code with PGI, use the -mp flag; with GNU, use -fopenmp. There are no parallel compiler wrappers (such as mpicc) on Nautilus, so to compile MPI codes, link with the -lmpi flag. For details on the flags accepted by each compiler, see the man page for each executable.
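For example (source and program names are illustrative):

pgcc -mp omp_code.c -o omp_program          # OpenMP with PGI
gcc -fopenmp omp_code.c -o omp_program      # OpenMP with GNU
gfortran hello_mpi.f90 -o hello_mpi -lmpi   # MPI with GNU Fortran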


Language    GNU        Intel   PGI
C           gcc        icc     pgcc
C++         g++        icpc    pgCC
Fortran     gfortran   ifort   pgf90
For assistance, contact help@nics.utk.edu.