Blanca high-performance computing (HPC) nodes are the newest addition to the suite of nodes available to Blanca contributors. The HPC nodes provide low-latency InfiniBand interconnects for Message Passing Interface (MPI) communication, which lets discrete nodes exchange data during a distributed computation and allows contributors to run multi-node parallel jobs. Currently, the National Solar Observatory, the Aerospace Mechanics Research Center (AMReC), and Research Computing have contributed HPC nodes to Blanca.
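
The sketch below is a minimal, illustrative MPI program (not part of the Blanca documentation) showing the kind of multi-node job these nodes are intended for: each MPI rank reports which node it is running on. It assumes an MPI compiler wrapper such as mpicc and a launcher such as mpirun or srun are available.

```c
/* Minimal MPI sketch: each rank prints its rank, the total number of
 * ranks, and the hostname of the node it runs on. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks   */
    MPI_Get_processor_name(name, &name_len); /* node hosting this rank  */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

When launched across several HPC nodes, the output lists multiple hostnames, confirming that the ranks are communicating over the InfiniBand fabric described above.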

Example Configuration: HPC Node

Specifications:
  • 2 Intel Xeon Gold 6130 (2.1 GHz, 16-core, "Skylake")
  • 192 GiB RAM (2666 MT/s)
  • 480 GB local SSD
  • 10-gigabit Ethernet
  • Mellanox EDR InfiniBand (100 Gb/s)

Cost: $7,712.89/node

Notes: batch processing, high-throughput computation, high-performance parallel/distributed computation

Research Computing is excited to welcome the new contributors and to bring the HPC nodes into service. If you would like more information on contributing to and using Blanca, please contact us at rc-help@colorado.edu.