The National Institute for Computational Sciences

File Systems


The table below describes the ACF file systems.

File System Purpose        Path to User's Directory          Quota, Purge Policy
Home Directory             /nics/[a,b,c,d]/home/{username}   10 GB quota, not purged
Lustre Scratch Directory   /lustre/haven/user/{username}     No quota, purged
Lustre Project Directory   /lustre/haven/proj/{project}      By request, not purged
Newton Gamma Directory     /lustre/haven/gamma/{directory}   Newton allocation amount, not purged
Lustre Medusa Directory    /lustre/medusa/proj/{project}     Retired Jan 17, 2018
ACF file systems are generally very reliable; however, data may still be lost or corrupted. Users are responsible for backing up critical data unless arrangements are made in advance. Backups of critical data can be provided by request for a fee.

Backups are performed on the Home Directory file system. Lustre Haven project directories are backed up only by request, for a fee. Lustre Haven scratch directories are NOT backed up and are purged periodically. Home directories and project directories are the only spaces not subject to purge.

Network File System (NFS)

NFS space is available and is used for home directories and project space mounted across many ACF resources. There is approximately 15 terabytes (TB) of space available in this file system.

NFS Home Directories

User home directories are provided by NFS and are not purged, with a default quota of 10 gigabytes (GB) of storage space. This is the location to store user files, up to the quota limit. The environment variable $HOME points to your home directory path. To request an increase in the home directory quota limit, submit a request. Project space on NFS is discussed below.

Home directories are regularly backed up.
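To see how close you are to the 10 GB limit, you can check your home directory's usage from any login node. A minimal sketch (the exact quota-reporting command can vary by system; du reports usage anywhere):

```shell
# Show where the NFS home directory lives and how much space it uses.
echo "Home directory: $HOME"
# du totals usage under the directory; compare against the 10 GB quota.
du -sh "$HOME"
```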

NFS Project Space

For sharing data among a research group, project directories on NFS can be provided. To request an NFS project directory see the Project Directory Request page. NFS project directories are located at /nics/a/proj/{directory}.

NFS Project space directories are regularly backed up.

Lustre Haven

The ACF global file system was purchased in the summer of 2017 by JICS using JICS funds. It resides on a Data Direct Networks (DDN) 14K storage subsystem and is called Lustre Haven, or simply Haven. Haven provides approximately 1.7 petabytes (PB) of usable storage and is mounted at /lustre/haven on all ACF login nodes, data transfer nodes (DTNs), and compute nodes. Lustre is a high-performance parallel file system that can achieve up to approximately 24 GB/s of aggregate throughput. Lustre Haven provides global high-performance scratch space for data sets related to running jobs, as well as global project space, for the ACF resources. These are described in more detail below.

Scratch Directories on Lustre Haven

The Haven file system provides global high-performance scratch space for data sets related to running jobs on the ACF resources and for transferring data in and out of the DTNs. Every user has a scratch directory, created at account creation time, located at /lustre/haven/user/{username}. The environment variable $SCRATCHDIR points to each user's scratch directory location. Scratch space on Haven may be purged weekly, but has no storage space or quota limit associated with it.

Lustre Haven Scratch directories are NOT backed up.
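A minimal sketch of a typical scratch workflow, assuming $SCRATCHDIR is set by the login environment (the /tmp fallback and the result-directory name are only illustrative, so the sketch runs anywhere):

```shell
# Run the job in scratch, then copy results back to unpurged home space.
# $SCRATCHDIR is set at login; /tmp is a stand-in when it is not.
WORKDIR="${SCRATCHDIR:-/tmp}/myjob.$$"
mkdir -p "$WORKDIR"
cd "$WORKDIR"
# ... the job writes its output here ...
echo "result data" > output.dat
# Keep only what matters: scratch is purged, home is backed up.
mkdir -p "$HOME/myjob-results"
cp output.dat "$HOME/myjob-results/"
```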

Important Points for Users Using Lustre Haven Scratch

  • The Lustre Haven Scratch file system is scratch space, intended for work related to job setup, running jobs, and job cleanup and post-processing on ACF resources, not for long-term data storage. Files in scratch directories are not backed up, and data that has not been used for 30 days is subject to being purged. It is the user's responsibility to back up all important data to another storage resource.

    Special Note: In accordance with the Acceptable Use Policies, modifying file access times (using touch or any other method) for the purpose of circumventing purge policies may result in the loss of access to the scratch file systems. Under special circumstances, users may request a purge exemption by submitting a request in a timely manner that includes detailed justification. Please include the file system (ACF-Open/ACF-SIP), the PI of the project, the user requesting the exemption, the ACF account, the time requested (e.g., two weeks), and a detailed justification.

    The Lustre find command can be used to determine files that are eligible to purge:

    > lfs find /lustre/haven/user/$USER -mtime +30 -type f
  • This will recursively list all regular files in your Lustre scratch area that are eligible to be purged.

  • Striping is an important concept with Lustre. Striping is the ability to break files into chunks and spread them across multiple storage targets (called OSTs). The striping defaults set up for NICS resources are usually sufficient, but may need to be altered in certain use cases, such as when dealing with very large files. Please see our Lustre Striping Guide for details.

  • Beware of using normal Linux commands for inspecting and managing your files and directories in Lustre scratch space. Using ls -l can cause undue load and may hang, because it requires access to every OST holding your files. Make sure that your ls is not aliased to ls -l.

  • Use lfs quota to see your total usage on the Lustre system. You must specify your username and the Lustre path with this command, for example:

    > lfs quota -u <username> /lustre/haven

For more detailed information regarding Lustre usage, see the following pages:

Lustre Haven Project Directories

For sharing data among a research group, project directories on Lustre can be provided. To request a Lustre Haven project directory see the Project Directory Request page. Lustre Haven project directories are located at /lustre/haven/proj/{project-name}.
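A common setup inside a shared project directory is a group-writable subdirectory with the setgid bit set, so that new files inherit the project group. A sketch, using a temporary directory as a stand-in for /lustre/haven/proj/{project-name}:

```shell
# Stand-in for a project directory such as /lustre/haven/proj/{project-name}.
proj=$(mktemp -d)
mkdir "$proj/shared"
# Mode 2770: owner and group have full access, setgid bit is set,
# so files created inside inherit the directory's group.
chmod 2770 "$proj/shared"
touch "$proj/shared/data.txt"
ls -ld "$proj/shared"
```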

Lustre project directories are NOT normally backed up and can be backed up by request for a fee.

Lustre Medusa Scratch and Project Directories

The Lustre Medusa file system was retired on January 17, 2018. All files and directories under /lustre/medusa are no longer recoverable. Lustre Medusa directories were NOT backed up.

Newton Storage Allocation Transition to ACF

i.e. Newton Home directories, /data, /lustre/scratch, and /gamma directories

To streamline the transition from Newton to the ACF, the ACF will honor all approved Newton project storage allocations that exist in /data, /lustre/projects, and /gamma for one year, until September 20, 2018. A directory will be created in the Lustre Haven file system at /lustre/haven/gamma/{directory} to correspond to each Newton directory. Users and/or workgroups are allowed only one /lustre/haven/gamma directory, and they can create subdirectories under it as needed.

The NICS User Portal has been updated to list Newton project space allocations and the corresponding directory location on the ACF. Users must have an ACF account and complete the "associate your NetID with your NICS account" process in the NICS User Portal in order to have a /lustre/haven/gamma directory created. Every Monday, an attempt is made to create /lustre/haven/gamma directories for users transitioning from Newton. Submit a ticket if you think you should have a Newton project directory transitioned to the ACF but do not see it listed in the NICS User Portal. Transferring data from Newton to the ACF is the responsibility of the user.

The Newton /gamma (/gpfs/gpfs0) file system will be decommissioned on Aug 1, 2018. Any files left on Newton /gamma were tarred up and moved to Newton /lustre/tarfiles. If you had a directory on the Newton GPFS file system in, say, directory /gamma/victor, then a compressed tar file with your /gamma files will be available on Newton in /lustre/tarfiles/gamma_victor.tar.gz. The MD5 checksum of the uncompressed tar file is available at /lustre/tarfiles/gamma_victor.tar.md5sum. Some files, due to their size, were not compressed and are available in /lustre/tarfiles/gamma_{directory-name}.tar. All these files are owned by the owner and group corresponding to the /gamma directory.
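The verification steps can be sketched end to end. The demo below builds its own small tarball and checksum file, since the real files live under /lustre/tarfiles on Newton (names there follow the gamma_{directory-name} pattern); note that the checksum covers the uncompressed tar, so decompress before checking:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
# Build a stand-in for a recovered /gamma directory and its tarball.
mkdir gamma_demo && echo "sample" > gamma_demo/file.txt
tar cf gamma_demo.tar gamma_demo          # checksum covers the *uncompressed* tar
md5sum gamma_demo.tar > gamma_demo.tar.md5sum
gzip gamma_demo.tar                       # what you would actually download
# --- verification, as with the real tarball ---
gunzip -k gamma_demo.tar.gz               # -k keeps the .gz copy
md5sum -c gamma_demo.tar.md5sum           # prints "gamma_demo.tar: OK"
```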

The Newton home directories will be decommissioned on Sept 17, 2018. Any files left on the Newton home directory file system will no longer be available after this date.

The Newton Lustre file system will be decommissioned on Oct 15, 2018. Any files left on the Newton Lustre file system will no longer be available after this date.

All remaining Newton resources (login nodes, VMs, and the /data file system) are planned to be decommissioned on Dec 3, 2018. All Newton cluster resources will be turned off on this date.

Important Notes: Newton home directories are not included in the transfer of storage allocation space to the ACF. JICS is providing 500 TB of project space in Lustre Haven for UTK researchers, which is greater than the space provided by the Newton /gpfs (/gamma) file system.

Lustre Haven directories are NOT backed up by default; backups can be provided by request for a fee.

NICS will be developing additional storage policies and will notify users about storage policy changes several months prior to the expiration of Newton project directories transitioned to the ACF.