Hardware

Hardware Specifications of ARCC2 cluster:

Node Type | Nodes | Processors per node | Cores per node | Memory per node | InfiniBand | Accelerator card | Clock speed
RM | 95 | 2x AMD EPYC 7452 CPUs | 64 | 256 GB | 100 Gb/s | N/A | 2.35-3.35 GHz
GPU | 10 | 2x AMD EPYC 7452 CPUs | 64 | 1024 GB | 100 Gb/s | 2x NVIDIA Tesla A100-40GB GPUs | 2.35-3.35 GHz
Large Mem | 1 | 2x AMD EPYC 7452 CPUs | 64 | 2048 GB | 100 Gb/s | N/A | 2.35-3.35 GHz

UC’s newest High-Performance Computing, AI and High-Performance Data Analytics (HPDA) system

UC’s newest high-performance computing and data analytics system, available Fall 2022, is funded in part by investments from the Office of Research, IT@UC, colleges and departments, and a significant grant from the National Science Foundation’s Major Research Instrumentation (MRI) program (award #2018617). UC partnered with Hewlett Packard Enterprise (HPE) to architect a purpose-built compute resource for demanding High-Performance Computing (HPC) and Artificial Intelligence (AI) applications.

ARCC2 provides transformative capability for rapidly evolving, computation-intensive and data-intensive research, supporting both traditional and non-traditional research communities and applications. The converged, scalable HPC, machine learning, and data tools create opportunities for collaboration and converged research, prioritizing researcher productivity through an easy-to-use web-based interface.

Innovation 

  • AMD EPYC 7452 CPUs: 32 cores each (64 per node), 2.35–3.35 GHz  
  • AI scaling to 20 Tesla A100-40GB GPUs  
  • Mellanox HDR-100 InfiniBand supports in-network MPI-Direct, RDMA, GPUDirect, SR-IOV, and data encryption  
  • Cray ClusterStor E1000 Storage System  
  • Open OnDemand – web-based interface 

Regular Memory 
Regular Memory (RM) CPU nodes provide extremely powerful general-purpose computing, machine learning and data analytics, AI inferencing, and pre- and post-processing. Each of the 95 RM nodes has: 

  • Two AMD EPYC 7452 CPUs, each with: 
      • 32 cores (64 per node) 
      • 2.35–3.35 GHz clock speed 
      • 8 memory channels 
  • 256 GB of system RAM 
  • 960 GB SATA SSD 
  • Mellanox ConnectX-6 HDR InfiniBand 100 Gb/s adapter 
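As an illustration of how a job might target one of these nodes, the sketch below assumes the cluster is scheduled with Slurm and that the RM nodes sit in a partition named `rm` — both are assumptions, not stated on this page; check ARCC2's own job-submission documentation for the actual scheduler and partition names.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for one RM node.
# Partition name "rm" and the use of Slurm itself are assumptions.
#SBATCH --partition=rm         # assumed RM partition name
#SBATCH --nodes=1              # one node: 2x EPYC 7452, 256 GB RAM
#SBATCH --ntasks-per-node=64   # one MPI rank per physical core
#SBATCH --time=01:00:00

srun ./my_mpi_app              # my_mpi_app is a placeholder executable
```

The `--ntasks-per-node=64` value matches the 64 physical cores per node listed above; hybrid MPI/OpenMP jobs would instead combine fewer ranks with `--cpus-per-task`.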

Large Memory 
The Large Memory (LM) node provides 2 TB of shared memory for genome sequence assembly, graph analytics, statistics, and other applications requiring a large amount of memory for which distributed-memory implementations are not available. 

ARCC2’s single LM node consists of: 

  • Two AMD EPYC 7452 CPUs, each with: 
      • 32 cores (64 per node) 
      • 2.35–3.35 GHz clock speed 
      • 8 memory channels 
  • 2048 GB of system RAM 
  • Mellanox ConnectX-6 HDR InfiniBand 100 Gb/s adapter 
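A shared-memory job on this node would request most of its 2 TB of RAM explicitly. The sketch below again assumes Slurm and a hypothetical partition name (`himem`) — neither is stated on this page.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for the large-memory node.
# Partition name "himem" is an assumption; verify against ARCC2 docs.
#SBATCH --partition=himem
#SBATCH --nodes=1
#SBATCH --mem=2000G            # request close to the node's 2 TB of RAM
#SBATCH --time=04:00:00

# Placeholder command: a single-node, shared-memory workload such as
# a genome assembler that cannot run in distributed-memory mode.
./my_assembler --threads 64
```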

GPU 
10 GPU nodes provide exceptional performance and scalability for deep learning and accelerated computing. Each GPU node contains: 

  • Two NVIDIA Tesla A100 40GB GPUs 
  • Two AMD EPYC 7452 CPUs, each with: 
      • 32 cores (64 per node) 
      • 2.35–3.35 GHz clock speed 
      • 8 memory channels 
  • 1024 GB of system RAM 
  • 960 GB SATA SSD 
  • Mellanox ConnectX-6 HDR InfiniBand 100 Gb/s adapter 
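GPU jobs typically request accelerators as a separate, countable resource. The sketch below assumes Slurm with GPUs configured as a generic resource (`--gres`) and a partition named `gpu` — assumptions not confirmed by this page.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a GPU node.
# Partition name "gpu" and the gres configuration are assumptions.
#SBATCH --partition=gpu
#SBATCH --gres=gpu:2           # both A100-40GB GPUs on the node
#SBATCH --cpus-per-task=16     # CPU cores for data loading, etc.
#SBATCH --time=02:00:00

nvidia-smi                     # show the GPUs allocated to the job
srun python train.py           # train.py is a placeholder script
```

Requesting `gpu:1` instead would leave the node's second A100 available to another job, if the cluster permits node sharing.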

Quick Summary of ARC Storage Services

For more information on storage please visit https://arc.uc.edu/hpc/arc-storage-services 

Please contact Jane Combs or arc_info@uc.edu with questions.