The HPC facilities at CSUC consist of three different machines integrated into a single queue system. The batch manager automatically assigns each job to computing nodes on any of the three machines depending on the requested resources and their availability. Jobs submitted from the login node are automatically placed on the most suitable nodes.
The queue system is managed by the SLURM Workload Manager. You can find a starter's guide to SLURM here.
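For reference, below is a minimal sketch of a job script for a generic SLURM setup; the job name, resource values and executable (my_program) are placeholders, and no partition is specified, since the batch manager assigns nodes based on the requested resources. Site-specific options such as partition or QOS names are not taken from this page.

    #!/bin/bash
    #SBATCH --job-name=example        # placeholder job name
    #SBATCH --ntasks=48               # number of MPI tasks (placeholder)
    #SBATCH --cpus-per-task=1         # one core per task
    #SBATCH --mem-per-cpu=4G          # memory per core (placeholder)
    #SBATCH --time=01:00:00           # wall-clock limit (placeholder)
    #SBATCH --output=job_%j.out       # %j expands to the SLURM job ID

    # Launch the placeholder executable on the allocated resources
    srun ./my_program

The script would be submitted with sbatch job.sh and monitored with squeue -u $USER; because all three machines share one queue, the cores and memory requested in the script are what determine which nodes the job is placed on.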
Canigó:
Status: UP
Model:
Bull Sequana X800
Specifications:
Shared memory
48 cores x 8 Intel® Xeon® Platinum 8168 CPU at 2.7 GHz
9 TB RAM
40 TB storage
Connected via InfiniBand (100 Gbps) to the BeeGFS cluster (shared /scratch storage).
Pirineus II:
Status: ALL UP
Model:
Bull Sequana X550
Specifications:
Heterogeneous cluster
44 standard nodes with:
24 cores x 2 Intel® Xeon® Platinum 8168 CPU at 2.7 GHz
192 GB RAM (4 GB/core)
4 TB of disk storage
6 high-memory nodes with:
24 cores x 2 Intel® Xeon® Platinum 8168 CPU at 2.7 GHz
384 GB RAM (8 GB/core)
4 TB of disk storage
4 GPGPU nodes (see the example GPU job script after this list) with:
24 cores x 2 Intel® Xeon® Platinum 8168 CPU at 2.7 GHz
192 GB RAM
4 TB of disk storage
2 Nvidia P100 GPUs:
3584 CUDA cores
12 GB RAM
Peak performance: 4.7 Tflop/s
4 Intel Knights Landing nodes with:
68 cores x 1 Intel® Xeon Phi™ 7250 CPU at 1.4 GHz
384 GB RAM
4 TB of disk storage
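As an illustration of how the requested resources steer a job to a particular node type, the sketch below targets one of the Pirineus II GPGPU nodes by requesting a GPU through SLURM's generic resource (GRES) mechanism. The GRES label gpu:1 is the common convention, but the exact GRES names, and whether a dedicated partition or QOS is also required, are site-specific assumptions not taken from this page; the executable and resource values are placeholders.

    #!/bin/bash
    #SBATCH --job-name=gpu-example    # placeholder job name
    #SBATCH --ntasks=1                # single task driving the GPU
    #SBATCH --cpus-per-task=24        # half of a GPGPU node's 48 cores (illustrative)
    #SBATCH --gres=gpu:1              # one GPU (GRES label assumed)
    #SBATCH --mem=90G                 # placeholder, below the node's 192 GB
    #SBATCH --time=02:00:00           # wall-clock limit (placeholder)
    #SBATCH --output=gpu_job_%j.out   # %j expands to the SLURM job ID

    # Launch the placeholder GPU-enabled executable
    srun ./my_gpu_program

A request of this shape can only be satisfied by the GPU-equipped nodes, so the scheduler routes it there automatically.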
Collserola:
Status: ALL UP
Model:
Hybrid Bull
Specifications:
10 nodes with:
24 cores x 2 Intel® Xeon® E5-2697
60 cores Intel® Xeon Phi™ 5120P
512/256 GB of main memory (nodes connected via InfiniBand)
20.78 TB disk space
Peak performance: 15.29 Tflop/s