The National Institute for Computational Sciences

System Overview

  • Cray XC30 (Cascade) supercomputer
  • Cray Linux Environment CLE 5.2 UP01 (based on SLES 11.3)
  • 11,584 physical compute cores (23,168 logical cores with Hyper-Threading enabled)
  • 22.6 TB of compute memory
  • Peak performance of 240.9 TF
  • 1.3 PB Lustre parallel file system for scratch
  • Cray Aries interconnect with a Dragonfly network topology
  • 4 cabinets
  • 724 compute nodes
  • 3 login nodes with 10GigE uplinks
  • Long-term archival storage available through HPSS (see the hsi/htar sketch below)
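
For long-term storage, files are generally moved to HPSS with the hsi and htar
utilities. A minimal sketch, assuming hsi and htar are available on your path
(the file and directory names are illustrative):

    hsi put results.tar           # copy a local file into your HPSS home area
    hsi get results.tar           # retrieve it again later
    htar -cvf project.tar src/    # bundle a directory into a tar archive in HPSS

For many small files, htar is usually preferable to individual hsi transfers,
since HPSS handles a few large archives far better than thousands of small files.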

Compute Node Details

  • Two 2.6 GHz 64-bit Intel 8-core Xeon E5-2600 (Sandy Bridge) series processors
  • 16 physical cores (32 logical cores if using Hyper-Threading; see the aprun sketch below)
  • 32 GB of memory
  • Connection via the Cray Aries router with a bandwidth of 8 GB/sec
  • Diskless nodes
  • Parallel Lustre scratch file system
  • Shared library access available via Cray's DVS (Data Virtualization Service)
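
Whether the logical cores are used is controlled at job launch. A minimal
sketch using the Cray job launcher aprun (the binary name ./a.out is
illustrative): -n sets the total number of processing elements (PEs), -N the
PEs per node, and -j how many logical CPUs to use per physical core.

    aprun -n 16 -N 16 -j 1 ./a.out    # one PE per physical core on one node
    aprun -n 32 -N 32 -j 2 ./a.out    # use both hyperthreads on each of the 16 cores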

Login Node Details

  • Four 2.6 GHz 64-bit Intel 8-core Xeon E5-2600 (Sandy Bridge) series processors
  • 256 GB of memory
  • 10GigE connection to the Internet
  • Total of 3 login nodes
    • Two for OTP-only access
    • One for GSI and GridFTP access (see the transfer sketch below)
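
GridFTP transfers through the GSI login node are typically driven with
globus-url-copy. A minimal sketch, assuming a valid grid proxy; the hostname
and paths are illustrative, not the system's actual endpoints:

    globus-url-copy -vb -p 4 \
        file:///lustre/scratch/$USER/data.tar \
        gsiftp://gridftp.example.org/home/$USER/data.tar

Here -vb prints transfer performance as the copy runs and -p 4 opens four
parallel TCP streams, which usually improves throughput on wide-area links.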

Lustre File System Details

  • 1.3 PB of usable space
  • Lustre parallel file system
  • Maximum aggregate bandwidth of 30.0 GB/sec (10 GB/sec per SSU)
  • FDR Infiniband connection
  • Default striping values (adjustable with lfs; see the sketch after this list):
    • Stripe count of 2
    • Stripe size of 1 MB
  • Maximum stripe count of 90
  • 900 SATA disk drives of 2 TB each (300 per SSU)
  • 8 LNET nodes
  • 3 Scalable Storage Units (SSU)
  • 4 OSSs (Object Storage Servers) per SSU
  • 7-8 OSTs (Object Storage Targets) per OSS
  • 90 OSTs total
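
The striping defaults can be overridden per file or per directory with the lfs
utility. A minimal sketch; the /lustre/scratch/$USER paths are illustrative,
not the system's actual scratch mount point:

    lfs getstripe /lustre/scratch/$USER/data.out        # show current striping
    lfs setstripe -c 8 -S 4m /lustre/scratch/$USER/out  # new files in this dir: 8 OSTs, 4 MB stripes
    lfs setstripe -c -1 /lustre/scratch/$USER/huge.dat  # create an empty file striped across all OSTs

Striping is fixed when a file is created, so setstripe on a directory affects
files written there afterwards. Large shared files generally benefit from a
higher stripe count, while small files do best at the default.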

Useful Links