The HPC facilities at CSUC comprise three different machines integrated into a single queue system. The batch manager automatically assigns each job to computing nodes on any of the three machines, depending on the requested resources and their availability. Jobs submitted from the login node are automatically placed on the most suitable nodes.
The queue system runs the SLURM Workload Manager. You can find a starter's guide to SLURM here.
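As a rough illustration of how jobs are submitted to the queue system, a minimal SLURM batch script might look like the following sketch. The module name, resource values, and program name are assumptions for illustration only; consult the site documentation for the correct values.

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # name shown in the queue
#SBATCH --ntasks=48              # number of tasks (e.g. one full standard node)
#SBATCH --time=02:00:00          # wall-clock time limit
#SBATCH --output=my_job_%j.out   # stdout file; %j expands to the job ID

# Load the software environment (module names are site-specific)
module load openmpi

# Launch the program under SLURM's process manager
srun ./my_program
```

The script is submitted with `sbatch job.sh`; `squeue -u $USER` shows its state while it waits for the batch manager to place it on a suitable node.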
Name | Model | Specifications |
---|---|---|
Collserola | Hybrid Bull | 10 nodes × 2 Xeon E5-2697 (24 cores/node) + Intel Xeon Phi 5120P (60 cores/node)<br>512/256 GB main memory/node<br>InfiniBand interconnect<br>20.78 TB disk space<br>Peak performance: 15.29 Tflop/s |
Canigó | Bull Sequana X800 | Shared memory<br>384 cores at 2.7 GHz<br>9 TB RAM<br>40 TB storage<br>Connected via InfiniBand (100 Gbps) to a BeeGFS cluster (shared /scratch storage) |
Pirineus 2 | Bull Sequana X550 | Heterogeneous cluster:<br>• 46 standard nodes: 48 cores (2 CPUs × 24 cores), Intel Xeon Platinum 8168 at 2.7 GHz, 192 GB RAM, 4 TB storage<br>• 6 FAT nodes: 48 cores (2 CPUs × 24 cores), Intel Xeon Platinum 8168 at 2.7 GHz, 384 GB RAM, 4 TB storage<br>• 4 nodes with 2 P100 GPUs each: 48 cores (2 CPUs × 24 cores), Intel Xeon Platinum 8168 at 2.7 GHz, 192 GB RAM, 4 TB storage; each GPU: 3584 CUDA cores, 12 GB RAM, peak performance 4.7 Tflop/s<br>• 4 KNL nodes: 68 cores, Intel Xeon Phi CPU at 1.6 GHz, 384 GB RAM |
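To target specific hardware, such as the P100 GPU nodes of Pirineus 2, a job script requests those resources through SLURM's generic-resource directive. The sketch below uses standard SLURM options; the module name and resource shares are assumptions, not the site's actual configuration.

```shell
#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12       # a share of the node's 48 cores
#SBATCH --gres=gpu:1             # one of the node's two P100 GPUs
#SBATCH --mem=48G                # a share of the node's 192 GB RAM
#SBATCH --time=04:00:00

# Load the CUDA environment (module name is site-specific)
module load cuda

srun ./my_cuda_program
```

Because all three machines sit behind the same queue, directives like `--gres` and the memory request are what steer a job toward the GPU nodes rather than the standard or FAT nodes.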