The HPC facilities at CSUC consist of three different machines integrated into a single queue system. The batch manager automatically assigns each job to computing nodes on any of the three machines, depending on the requested resources and their availability. Jobs submitted from the login node are therefore routed to the most suitable nodes.
The queue system is managed by the SLURM Workload Manager. You can find a starter's guide to SLURM here.
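A job is described in a batch script and submitted with `sbatch`; the `#SBATCH` directives below are standard SLURM options, but the program name is a placeholder and any CSUC-specific defaults (partitions, modules) are not shown. A minimal sketch:

```shell
#!/bin/bash
#SBATCH --job-name=example       # job name shown in the queue
#SBATCH --ntasks=1               # number of tasks (processes)
#SBATCH --cpus-per-task=4        # cores per task
#SBATCH --mem=8G                 # total memory for the job
#SBATCH --time=01:00:00          # wall-clock limit (hh:mm:ss)
#SBATCH --output=%x-%j.out       # stdout file (%x = job name, %j = job id)

# The scheduler picks suitable nodes based on the resources requested above.
srun ./my_program                # ./my_program is a placeholder executable
```

Submit it with `sbatch job.sh` from the login node, and check its status with `squeue -u $USER`.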
Canigó: AVAILABLE

Model: Bull Sequana X800

Specifications:
- Shared memory
- 48 cores x 8 Intel® Xeon® Platinum 8168 CPUs at 2.7 GHz
- 9 TB RAM
- 40 TB storage
- Connected via InfiniBand (100 Gbps) to the BeeGFS cluster (shared storage /scratch)
Pirineus II: AVAILABLE
Collserola: AVAILABLE
| Name | Model | Specifications |
|---|---|---|
| Collserola | Hybrid Bull | 10 nodes x 2 Xeon E5-2697 (24 cores/node) + Intel Xeon Phi 5120P (60 cores/node)<br>512/256 GB main memory per node (InfiniBand connected)<br>20.78 TB disk space<br>Peak performance: 15.29 Tflop/s |
| Canigó | Bull Sequana X800 | Shared memory<br>48 cores x 8 Intel® Xeon® Platinum 8168 CPUs at 2.7 GHz<br>9 TB RAM<br>40 TB storage<br>Connected via InfiniBand (100 Gbps) to the BeeGFS cluster (shared storage /scratch) |
| Pirineus 2 | Bull Sequana X550 | Heterogeneous cluster<br>44 standard nodes: 24 cores x 2 Intel® Xeon® Platinum 8168 CPUs at 2.7 GHz, 192 GB RAM (4 GB/core), 4 TB disk storage<br>6 high-memory nodes: 24 cores x 2 Intel® Xeon® Platinum 8168 CPUs at 2.7 GHz, 384 GB RAM (8 GB/core), 4 TB disk storage<br>4 GPGPU nodes: 24 cores x 2 Intel® Xeon® Platinum 8168 CPUs at 2.7 GHz, 192 GB RAM, 4 TB disk storage, 2 Nvidia P100 GPUs (3584 CUDA cores and 12 GB RAM each, peak performance 4.7 Tflop/s)<br>4 Intel Knights Landing nodes: 68 cores x 1 Intel® Xeon Phi™ 7250 CPU at 1.4 GHz, 384 GB RAM, 4 TB disk storage |
| Biosca | | BeeGFS storage cluster<br>4 nodes with 18 SATA disks of 4 TB each<br>High-speed / low-latency InfiniBand EDR network<br>Intel RS3DC080 RAID controller |
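Because all machines share one queue, the resources a job requests determine where it runs. As a hedged illustration, the standard SLURM `--gres` directive can be used to target the Pirineus 2 GPGPU nodes; the exact GRES name configured at CSUC ("gpu" below is an assumption) and any required partition flag may differ:

```shell
# Fragment of a batch script targeting a GPU node.
# "gpu" is the conventional SLURM GRES name; the site's actual
# configuration may use a different name or require a partition.
#SBATCH --gres=gpu:1             # request one GPU per node
#SBATCH --cpus-per-task=24       # matches the 24-core CPUs on GPGPU nodes
```

Jobs without such a request are free to land on any standard node that satisfies their CPU and memory demands.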