The HPC facilities at CSUC comprise three different machines integrated into a single queue system. The batch manager automatically assigns each job submitted from the login node to computing nodes on any of the three machines, depending on the requested resources and their availability.
The queue system is managed by the SLURM Workload Manager. You can find a starter's guide to SLURM here.
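As a quick illustration, a minimal SLURM batch script might look like the sketch below. The job name, resource values, and the program being run are placeholders, not CSUC-specific settings; consult the local documentation for the exact options and defaults that apply on these machines.

```bash
#!/bin/bash
#SBATCH --job-name=example_job       # descriptive job name (placeholder)
#SBATCH --ntasks=1                   # number of tasks (MPI ranks)
#SBATCH --cpus-per-task=4            # CPU cores per task
#SBATCH --mem=8G                     # memory per node
#SBATCH --time=01:00:00              # walltime limit (hh:mm:ss)
#SBATCH --output=example_job_%j.log  # output file, %j expands to the job ID

# Run the application (placeholder command)
srun ./my_program
```

The script is submitted with `sbatch script.sh`; the batch manager then places the job on whichever machine can satisfy the requested resources, and `squeue -u $USER` shows its state while it waits or runs.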
| Name | Status | Model | Specifications |
|---|---|---|---|
| Collserola | AVAILABLE (all 10 nodes) | Hybrid Bull | 10 nodes, each with 2 × Intel Xeon E5-2697 (24 cores/node) + 1 × Intel Xeon Phi 5120P (60 cores/node); 512/256 GB main memory per node; InfiniBand interconnect; 20.78 TB disk space; peak performance: 15.29 Tflop/s |
| Canigó | AVAILABLE | Bull Sequana X800 | Shared-memory machine; 384 cores at 2.7 GHz; 9 TB RAM; 40 TB storage; connected via InfiniBand (100 Gbps) to the BeeGFS cluster (shared /scratch storage) |
| Pirineus 2 | AVAILABLE | Bull Sequana X550 | Heterogeneous cluster: 46 standard nodes (48 cores = 2 CPUs × 24 cores at 2.7 GHz, 192 GB RAM, 4 TB storage); 6 fat nodes (48 cores at 2.7 GHz, 384 GB RAM, 4 TB storage); 4 nodes with 2 GPUs each (48 cores at 2.7 GHz, 192 GB RAM, 4 TB storage; each GPU: 3584 CUDA cores, 12 GB RAM); peak performance: 4.7 Tflop/s |
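For jobs that need one of the resource classes listed above (for example the GPU nodes of Pirineus 2), the allocation is requested explicitly in the batch script. The snippet below is only a sketch under assumed names: `--gres=gpu:1` uses SLURM's standard generic-resource syntax, but the actual GRES and partition names configured at CSUC may differ, and the program name is a placeholder.

```bash
#!/bin/bash
#SBATCH --job-name=gpu_example   # placeholder job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=12       # a quarter of a 48-core GPU node
#SBATCH --mem=48G                # fits within the 192 GB RAM of a GPU node
#SBATCH --gres=gpu:1             # request one GPU (GRES name is an assumption)
#SBATCH --time=02:00:00

# Run the GPU application (placeholder command)
srun ./my_gpu_program
```

Memory- or core-heavy jobs can be steered toward the fat nodes or Canigó in the same way, simply by raising `--mem` or `--cpus-per-task` beyond what the standard nodes offer.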