Blanca is a “condo” compute cluster consisting of compute and GPU nodes owned and contributed by multiple research groups. When contributors purchase nodes, they retain priority access to those nodes and gain access to idle cycles on nodes owned by other contributors. Any Blanca user can submit jobs to the entire cluster, but contributors can submit jobs explicitly to their own nodes, preempting jobs from other accounts. Much like a condominium complex, Blanca gives contributors added benefits such as high-speed networking and data transfer, scratch storage, and free system administration.

As of August 2019, Blanca is built with contributions from seventeen research groups in fields such as Behavioral Genetics, Cognitive Science, Applied Mathematics, and Geological Sciences. For CU Boulder’s diverse research environment, Blanca offers a centralized service that meets many compute requirements while preserving a capital equipment funding model. Blanca takes the burden of system administration off contributors, freeing them to concentrate their time and effort on their research, and it makes idle cycles available to the community without interrupting contributors’ priority access. Blanca nodes also have access to the RC Core Software stack, which supports a wide range of research needs, and contributors may install additional software as they see fit.

Blanca offers three main node types: a standard high-throughput compute node, a GPU node, and a new HPC node. The high-throughput compute node is well suited to batch processing and other high-throughput computations. The GPU node is better equipped for tasks such as molecular dynamics, image processing, and deep learning. The HPC node, described below, adds a low-latency interconnect for multi-node parallel computing. Table 1 shows example configurations for each node type, but all nodes may be customized to fit the contributor’s needs.

High-throughput Compute Node
  Specifications:
    • 2× Intel Xeon Gold 6130 (2.1 GHz, 16-core, "Skylake")
    • 192 GiB RAM (2666 MT/s)
    • 10-Gigabit Ethernet
  Cost¹: $8,754.78/node
  Notes: Batch processing and other high-throughput computations

GPU Node
  Specifications:
    • 2× Intel Xeon Gold 6130 (2.1 GHz, 16-core, "Skylake")
    • 1× NVIDIA Tesla T4 GPU coprocessor (16 GB memory)
    • 192 GiB RAM (2666 MT/s)
    • 10-Gigabit Ethernet
  Cost¹: $13,604.11/node
  Notes: Molecular dynamics, image processing, deep learning; alternate and additional GPUs supported

HPC Node
  Specifications:
    • 2× Intel Xeon Gold 6130 (2.1 GHz, 16-core, "Skylake")
    • 192 GiB RAM (2666 MT/s)
    • 10-Gigabit Ethernet
    • Mellanox EDR InfiniBand (100 Gb/s)
  Cost¹: $7,712.89/node
  Notes: Parallel computing, MPI

Table 1: Example specifications for the three node types as of 6/21/2019. ¹All prices are subject to change; additional configurations are available.

Recently, Blanca has added new high-performance computing (HPC) nodes, piloted with our key contributor, the National Solar Observatory. The Message Passing Interface (MPI) is a standard for parallel computing that allows processes on separate nodes to communicate during a distributed computation. In the past, MPI applications were discouraged on Blanca because the cluster lacked an HPC interconnect, but Blanca HPC nodes now support low-latency MPI over EDR InfiniBand, allowing contributors to run multi-node distributed parallel computing jobs.
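
As a minimal illustration of what such an MPI job looks like, the sketch below is the classic “hello world” program in C, in which every process reports its rank and the node it is running on. It is a generic example rather than anything Blanca-specific; the compiler wrapper (mpicc) and launch command depend on the MPI implementation installed on the nodes.

```c
/* mpi_hello.c: each MPI process reports its rank and host node.
 * Build:  mpicc mpi_hello.c -o mpi_hello
 * Run:    mpirun -np 64 ./mpi_hello   (or via the cluster's job scheduler)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);               /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);   /* node this rank is running on */

    printf("Rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}
```

When a program like this is launched across several Blanca HPC nodes, the inter-process communication is carried over the EDR InfiniBand interconnect rather than Ethernet, which is what makes tightly coupled multi-node jobs practical on the new nodes.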

If you are interested in contributing resources to Blanca, please email rc-help@colorado.edu. Additional information on how to use Blanca can be found in the RC Documentation.