
File Systems

Summary

The table below describes the ACF file systems. Backups are performed only on the Home Directory file system and, by request for a fee, on the Lustre Project Directory file system.
File System                 Path to User's Directory           Quota
Home Directory              /nics/[a,b,c,d]/home/{username}    10 GB
Lustre Scratch Directory    /lustre/haven/user/{username}      No quota
Lustre Project Directory    /lustre/haven/proj/{project}       By request
Lustre Medusa Directory     /lustre/medusa/proj/{project}      To be retired
Newton Gamma Directory      /lustre/haven/gamma/{directory}    Newton amount
ACF file systems are generally very reliable; however, data may still be lost or corrupted. Users are responsible for backing up critical data unless arrangements are made in advance. Backups of critical data can be provided by request for a fee. See the backup specifics for each file system listed below.

Home Directories

A user's home directory is not purged and has a default quota of 10 gigabytes. This is the location to store user files up to the quota limit. The environment variable $HOME points to your home directory path. To request an increase in the home directory quota, submit a request to help@nics.utk.edu.
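
To see where your home directory is and how much of the 10 GB quota you are using, standard tools are enough; a minimal sketch (du walks the entire directory tree, so it may take a moment on large directories):

    # print your home directory path
    > echo $HOME
    # report total usage to compare against the quota
    > du -sh $HOME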

Home directories are regularly backed up.

Lustre Scratch Directories on Lustre Haven

The Lustre Scratch file system resides on the newer DDN 14K storage subsystem, called Haven, and is available on all ACF login, data transfer, and compute nodes. Lustre is a high-performance parallel file system that can achieve approximately 24 GB/s transfer performance. The environment variable $SCRATCHDIR points to your scratch directory location. This Lustre scratch directory is purged weekly, but no quota limit is set on it.
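
For example, to move into your scratch area before launching a job (a minimal sketch; the run1 directory name is illustrative):

    # change to your scratch directory and create a working subdirectory
    > cd $SCRATCHDIR
    > mkdir run1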

Lustre Scratch directories are NOT backed up.

Important Points for Users Using Lustre Scratch

  • The Lustre Scratch file system is scratch space, intended for production work and not for long-term storage. Files in scratch directories are not backed up, and data that has not been used for 30 days is subject to being purged. It is the user's responsibility to back up all important data to another storage resource (see the archiving sketch after this list).

    The lfs find command can be used to identify files that are eligible to be purged:

    > lfs find /lustre/haven/user/$USER -mtime +30 -type f
    
    This will recursively list all regular files in your Lustre scratch area that are eligible to be purged.

  • Striping, the ability to break files into chunks and spread them across multiple storage targets (called OSTs), is an important concept with Lustre. The striping defaults set up for NICS resources are usually sufficient but may need to be altered in certain use cases, such as when dealing with very large files; see the striping sketch after this list and our Lustre Striping Guide for details.

  • Beware of using normal Linux commands to inspect and manage your files and directories in Lustre scratch space. Running ls -l can cause undue load and may hang because it requires access to every OST holding your files. Make sure that your ls is not aliased to ls -l.

  • Use lfs quota to see your total usage on the Lustre system. You must specify your username and the Lustre path with this command, for example:

    > lfs quota -u <username> /lustre/haven
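
As a hedged sketch of backing up purge-eligible files before they are removed (it assumes GNU tar on the login nodes; the archive name is illustrative, and the archive must fit within your home directory quota):

    # pipe the list of purge-eligible files into a compressed archive in $HOME
    # (tar strips the leading / from member names and warns about it)
    > lfs find /lustre/haven/user/$USER -mtime +30 -type f | tar -czf $HOME/scratch-archive.tar.gz -T -

And a minimal striping sketch; the stripe count of 8 and the bigfiles directory name are illustrative, not recommended values. New files created in a directory inherit its striping settings:

    # spread new files in this directory across 8 OSTs
    > lfs setstripe -c 8 $SCRATCHDIR/bigfiles
    # confirm the layout that files in the directory will receive
    > lfs getstripe $SCRATCHDIR/bigfiles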
    


Lustre and NFS Project Directories on Lustre Haven

For sharing data among a research group, project directories can be provided on NFS or Lustre. To request an NFS or Lustre project directory, see the Project Directory Request page. NFS project directories are located at /nics/a/proj/{directory}; all Lustre project directories are located at /lustre/haven/proj/{project-name}.
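
As a hedged sketch of sharing files within a Lustre project directory (it assumes your project has a matching Unix group; the group name myproject is illustrative):

    # hand the files to the project group and grant group read access
    # (capital X adds execute only on directories and already-executable files)
    > chgrp -R myproject /lustre/haven/proj/{project-name}
    > chmod -R g+rX /lustre/haven/proj/{project-name}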

NFS project directories are regularly backed up. Lustre project directories are NOT backed up by default; backups can be arranged by request for a fee.

Lustre Scratch and Project Directories on Lustre Medusa

The Lustre Medusa file system runs on older DDN equipment and is being retired. All project directories under /lustre/medusa will be moved to /lustre/haven/proj. Lustre Medusa scratch data must be moved by users. Approximately 30 days before retirement, the Lustre Medusa file system will be set to read-only, and users will no longer be able to write files to it.
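
A hedged sketch of moving scratch data from Medusa to Haven with rsync; the Medusa scratch path below is an assumption (only the Medusa project path is documented above), so substitute your actual location:

    # copy Medusa scratch contents into your Haven scratch directory
    # /lustre/medusa/user/$USER is an assumed path -- verify before running
    > rsync -av /lustre/medusa/user/$USER/ $SCRATCHDIR/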

Lustre Medusa directories are NOT backed up.

Newton /data, /lustre/scratch, and /gamma Directories

In order to streamline the transition from Newton to the ACF, the ACF will honor all approved Newton project storage allocations that exist in /data, /lustre/projects, and /gamma for one year, until September 20, 2018. A directory will be created in the Lustre Haven file system at /lustre/haven/gamma/{directory} to correspond to each Newton directory. Users and/or workgroups are allowed only one /lustre/haven/gamma directory; they can create subdirectories under it as needed.

The NICS User Portal has been updated to list Newton project space allocations and the corresponding directory locations on the ACF. Users must have an ACF account and complete the "associate your NetID with your NICS account" process in the NICS User Portal before a /lustre/haven/gamma directory can be created. Every Monday, an attempt is made to create /lustre/haven/gamma directories for users transitioning from Newton. Submit a ticket if you think a Newton project directory of yours should be transitioned to the ACF but it is not listed in the NICS User Portal.

Transferring data from Newton to the ACF is the responsibility of the user; a sketch follows below. Note: Newton home directories are not included in the transfer of storage allocation space to the ACF.
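
As a hedged illustration of such a transfer, run from Newton (the ACF hostname below is a placeholder, not a confirmed value, and the my_dataset directory is illustrative):

    # push a Newton data directory into the corresponding gamma directory on the ACF
    # acf-login.nics.utk.edu is a placeholder hostname -- use the actual ACF data transfer host
    > rsync -av my_dataset/ {username}@acf-login.nics.utk.edu:/lustre/haven/gamma/{directory}/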

These /lustre/haven/gamma directories are NOT backed up by default; backups can be arranged by request for a fee.

NICS will be developing additional storage policies and will notify users about storage policy changes several months before the Newton project directories transitioned to the ACF expire.