RMACC Summit is a supercomputing cluster that is available for use at no cost to researchers at the University of Colorado Boulder, Colorado State University, and members of the Rocky Mountain Advanced Computing Consortium (RMACC).  CU Boulder, in collaboration with CSU, designed and deployed Summit with funding from the National Science Foundation.

Summit uses Intel CPUs, an Intel Omni-Path interconnect, and a DDN GRIDScaler parallel file system with 1.5 petabytes of scratch storage. Compute tasks on Summit are deployed through a “batch” job management system called Slurm, with some resources reserved for interactive use. Users have access to Summit resources via general-access and project-specific resource allocations.
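As a sketch of what batch submission through Slurm looks like, the following minimal job script could be submitted with `sbatch`. The resource values and job name are illustrative, not Summit-specific defaults; the `#SBATCH` lines are comments to the shell but directives to Slurm:

```shell
#!/bin/bash
#SBATCH --job-name=test-job        # name shown in the queue
#SBATCH --nodes=1                  # number of nodes requested
#SBATCH --ntasks=1                 # number of tasks (processes)
#SBATCH --time=00:10:00            # walltime limit (HH:MM:SS)
#SBATCH --output=test-job.%j.out   # output file (%j expands to the job ID)

# Commands below run on the allocated compute node
echo "Running on $(hostname)"
```

Submit the script with `sbatch job.sh`, monitor it with `squeue --user=$USER`, and list available partitions with `sinfo`.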

Summit was deployed with four types of nodes: general compute (shas), GPU (sgpu), high-memory (smem), and Intel Xeon Phi (sknl). An additional general-compute node type with newer processors (ssky) was added after deployment through a condo contribution. All of the nodes have access to a high-performance interconnect with:

  • 2:1 oversubscribed Intel Omni-Path (8 core switches, 16 core links per edge switch)

  • 100 Gb/s/port throughput

  • 32 nodes/switch operating at 1:1 connectivity

All nodes also share scratch storage:

  • DDN SFA14k GRIDScaler storage appliance

  • IBM GPFS file system

  • 4.742 TB of metadata storage

  • 1.5 PB of scratch storage

General Compute (shas)

  • Nodes: 452

  • CPU: Intel Xeon E5-2680 v3 @ 2.50 GHz (2 CPUs/node, 24 cores/node)

  • Memory: 2133 MT/s, dual-rank, x4 data-width RDIMM (4.84 GB/core)

  • Local storage: 200 GB SSD (1/node)

  • Interconnect: Omni-Path HFI (1/node)

General Compute (ssky)

  • Nodes: 20

  • CPU: Intel Xeon Gold 6126 @ 2.60 GHz (2 CPUs/node, 24 cores/node)

  • Memory: 2666 MT/s, dual-rank, x4 data-width RDIMM (192 GiB/node)

  • Local storage: 200 GB SSD (1/node)

  • Interconnect: Integrated Omni-Path

GPU (sgpu)

  • Nodes: 11

  • CPU: Intel Xeon E5-2680 v3 @ 2.50 GHz (2 CPUs/node, 24 cores/node)

  • GPU: NVIDIA Tesla K80 (2 accelerators/node)

  • Memory: 2133 MT/s, dual-rank, x4 data-width RDIMM (128 GiB/node)

  • Local storage: 200 GB SSD (1/node)

  • Interconnect: Omni-Path HFI (1/node)

High-Memory (smem)

  • Nodes: 5

  • CPU: Intel Xeon E7-4830 v3 @ 2.10 GHz (4 CPUs/node, 48 cores/node)

  • Memory: 2133 MT/s, dual-rank, x4 data-width RDIMM (2 TiB/node)

  • Local storage: 1 TB 7,200 RPM 6 Gb/s Near-Line SAS 2.5" hard drives (12/node) in RAID 6

  • Interconnect: Omni-Path HFI (1/node)

Phi (sknl)

  • Nodes: 20

  • CPU: Intel Xeon Phi "Knights Landing" processor (1/node)

  • Memory: 112 GiB/node (6 × 16 GiB DIMMs plus local Phi memory)

  • Local storage: 200 GB SSD

  • Interconnect: Omni-Path HFI (1/node)
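When submitting work, each node type is typically targeted through a Slurm partition. The commands below are a sketch that assumes the partition names match the node-type labels above (shas, sgpu, smem, sknl, ssky) and that GPUs are requested as a Slurm generic resource; check `sinfo` and the cluster documentation for the actual names:

```shell
# General compute (Haswell) nodes
sbatch --partition=shas job.sh

# GPU nodes: also request the K80 accelerators as a generic resource
sbatch --partition=sgpu --gres=gpu:2 job.sh

# High-memory nodes
sbatch --partition=smem job.sh
```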

Summit access is provided to RMACC members via the XSEDE system. More information on becoming an RMACC member can be found on the RMACC website.

Allocations of CPU time are available to everyone on Summit. An allocation allows Research Computing to track how the system is being used and to configure relative priority between projects. All allocations are free of charge. Shares of Summit are allocated separately for CU, CSU, and RMACC. Users in each of these three shares are automatically given a general allocation within the larger share. The general allocation is intended for testing and benchmarking jobs.
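In practice, an allocation is selected per job through Slurm's account mechanism. The account name below (`ucb-general`) is an illustrative assumption; the actual general-allocation account name for your institution may differ:

```shell
# In a job script:
#SBATCH --account=ucb-general    # assumed general-allocation account name

# Or equivalently on the command line:
sbatch --account=ucb-general job.sh
```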

Once a user has determined how much time they will need on the system, they are encouraged to submit a project allocation request. The required proposal documents the research activity and the resources required. Research Computing does not evaluate the merit of research through the allocation process, but it does verify that the user has appropriately calculated the resources they need and is working within Research Computing's acceptable use policies.

Research Computing offers in-person training sessions at CU Boulder and many online training streams for all Summit users. One-on-one consultations to help establish, optimize, and troubleshoot workflows are available upon request. Research Computing also holds drop-in office hours, in conjunction with the Center for Research Data and Digital Scholarship (CRDDS), on Tuesdays from 12-2 pm in Norlin E206.

Summit is a valuable resource for researchers who need high-performance computing. If you have any other questions about Summit or other Research Computing resources, please contact us at rc-help@colorado.edu.