The National Institute for Computational Sciences

Beacon Quick Start Guide

System Overview

Beacon is an energy-efficient cluster that utilizes Intel® Xeon Phi™ coprocessors. It is funded by the NSF through the Beacon project to port and optimize scientific codes for coprocessors based on Intel's Many Integrated Core (MIC) architecture.

System Configuration

The Beacon system offers access to the following:

  • 48 compute nodes and 6 I/O nodes
  • FDR InfiniBand interconnect providing 56 Gb/s of bi-directional bandwidth
  • Each compute node is equipped with:
      • 2 8-core Intel® Xeon® E5-2670 processors
      • 256 GB of memory
      • 4 Intel® Xeon Phi™ 5110P coprocessors with 8 GB of memory each
      • 960 GB of SSD storage
  • Each I/O node provides access to an additional 4.8 TB of SSD storage

Overall, Beacon provides 768 conventional cores and 11,520 accelerator cores delivering over 210 TFLOP/s of combined computational performance, 12 TB of system memory, 1.5 TB of coprocessor memory, and over 73 TB of SSD storage in aggregate.

    Access and Login

    To access Beacon via ssh, you must use a one-time password (OTP) token. For more information on obtaining a token, see the Access and Login page.

    To connect to Beacon, type the following at a shell prompt:
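    The command itself is missing from this copy; a typical session looks like the sketch below, where the login hostname and username are assumptions that should be verified against the Access and Login page.

```shell
# Assumed login hostname and placeholder username; verify on the
# Access and Login page. Enter your OTP token passcode when prompted.
ssh username@beacon.nics.utk.edu
```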

    or access via GSI by doing the following:
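    The GSI command is also missing here; a hedged sketch, assuming a valid grid proxy has already been initialized (e.g. via myproxy-logon) and using the same assumed hostname:

```shell
# Hypothetical GSI login: requires an initialized grid proxy credential.
# Hostname is an assumption; check the Access and Login page.
gsissh beacon.nics.utk.edu
```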

    Currently, GridFTP, gsissh, ftp, sftp, and scp are available on Beacon. See the Data Transfer page for more information.


    File Systems

    Beacon has access to three file systems: NFS, Lustre, and local SSD.

    • $HOME points to /nics/[a-d]/home/$USER, the Network File System (NFS home space, quota enforced)
    • The Lustre Medusa File System is located at /lustre/medusa/$USER (Lustre scratch space; no quota, 30-day purge policy)
    • A root directory on the local SSD scratch space contains folders named mic0, mic1, etc., and is mounted by the compute nodes.

    Network File System (NFS)

    The Network File System (NFS) server contains users' home directories, project directories, and software directories. Home and project directories can be used to store frequently used items such as source code, binaries, and scripts; both are accessible from all interactive NICS resources.

    Lustre Medusa File System

    This space is intended for production work, not long-term storage; it is temporary and subject to frequent purging to ensure the stability of the system for all users. Note that compute nodes do not have access to the home and project directories on NFS; they only have access to the Lustre file systems.

    Optimizing your I/O performance will not only lessen the load on Lustre but also save you compute time. Please consider reading the Lustre Striping Guide, which will help you make the best use of the parallel Lustre file system and improve your application's I/O performance.
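    As a brief illustration of striping, a directory's default stripe settings can be set and inspected with the standard Lustre lfs tool; the directory name and stripe count below are placeholders, not recommendations:

```shell
# Stripe new files in this directory across 4 OSTs (illustrative value;
# choose a count suited to your file sizes and I/O pattern).
lfs setstripe -c 4 /lustre/medusa/$USER/my_run_dir

# Inspect the resulting striping settings.
lfs getstripe /lustre/medusa/$USER/my_run_dir
```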

    Local SSD

    [Figure: visual representation of the Local SSD file system]

    A root directory on the local SSD scratch space contains folders named mic0, mic1, etc., and is mounted by the compute nodes. The coprocessors on each compute node mount their respective mic# folder. These unique directories have an absolute path determined by the job ID assigned by the scheduler and can be accessed through the TMPDIR environment variable.
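    Inside a job script, the usual pattern is to stage data through $TMPDIR; a minimal sketch, in which the file names are placeholders:

```shell
# Sketch of staging through node-local SSD scratch; $TMPDIR is set by
# the scheduler once the job starts. File names are hypothetical.
cp "$HOME/input.dat" "$TMPDIR/"   # stage input onto the fast local SSD
cd "$TMPDIR"
# ... run your application against input.dat here ...
cp results.dat "$HOME/"           # copy results back before the job ends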

    See the File systems page for more information.


    Compiling

    Most applications will need to be compiled with the proper Intel compiler and/or MPI wrapper.

    For information on using the CAPS compiler, see the Beacon Compiling page.

    Language    Intel Compiler / MPI Wrapper
    C           icc / mpiicc
    C++         icpc / mpiicpc
    Fortran     ifort / mpiifort
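    For example, an MPI C code could be built with the wrapper above; this is a sketch in which the source and output file names are placeholders (the -mmic flag is the Intel compiler option for native Xeon Phi builds):

```shell
# Host (Xeon) build with the Intel MPI C wrapper.
mpiicc -O2 -o hello_host hello.c

# Native coprocessor build; -mmic targets the MIC architecture.
mpiicc -O2 -mmic -o hello_mic hello.c
```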

    Custom Scripts

    Any secure communication with a MIC requires unique ssh keys that are automatically generated once the scheduler assigns compute nodes. Custom scripts have been created to use these ssh keys, preventing prompts that would ask users for passwords.

    Traditional Command    Custom Beacon Script
    ssh                    micssh
    scp                    micscp
    mpirun/mpiexec         micmpiexec
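    Inside a job, these scripts replace the standard tools; a hedged sketch in which the coprocessor name, paths, and process count are assumptions:

```shell
# Copy a natively built binary onto the first coprocessor and launch
# it there; "mic0", the paths, and -n 8 are illustrative values.
micscp hello_mic mic0:/tmp/
micmpiexec -n 8 -host mic0 /tmp/hello_mic
```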

    For more information about running jobs, see Running Jobs.


    Modules

    module avail

    Lists all available installed packages/software.

    module list

    Lists all currently loaded modules in your environment.

    module avail {package}

    Lists all installed versions of the specified package and marks the default version. To use a different version, load it explicitly with the module load {package/version#} command.
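    A typical session might look like the following; the package and version names are illustrative, not a list of what is actually installed on Beacon:

```shell
module avail intel          # list installed versions of the intel package
module load intel/13.1.0    # load a specific version instead of the default
module list                 # confirm it now appears in your environment
```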

    module load {package/version#}

    Loads package.

    module unload {package}

    Removes package from environment.

    module swap {A} {B}

    Swaps loaded {A} package with {B} package.
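    For example, to exchange a loaded module for another version in one step (names and version numbers are placeholders):

```shell
# Replace the currently loaded intel module with a different version.
module swap intel intel/14.0.0
```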

    module show {package}

    Shows the paths the module adds to your environment, which is useful for locating the package's libraries and executables.