SLURM Submit script example
For more information use the Job Script Generator.
orca_example.slm
#!/bin/bash
#SBATCH -J orca_example
#SBATCH -e orca_example.err
#SBATCH -o orca_example.out
#SBATCH -p std
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=3998

module load apps/orca/4.1.0

# Run in the scratch directory and copy the input file(s) there
cd $SCRATCH
cp -r $SLURM_SUBMIT_DIR/*.inp $SCRATCH

# ORCA must be started with its full path for parallel runs
$(which orca) orca_example.inp > orca_example.out

# Copy the ORCA output back to the submit directory
cp ./orca_example.out $SLURM_SUBMIT_DIR
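Assuming the script is saved as orca_example.slm in the same directory as orca_example.inp, the job can be submitted and monitored with the standard SLURM commands:

sbatch orca_example.slm
squeue -u $USER        # check the job status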
Sbatch options:
- -J: Specify a name for the job allocation. The default is the name of the batch script.
- -e: Specify the file for the job's standard error output.
- -o: Specify the file for the job's standard output.
- -p: Specify the name of the partition (queue) where the job will be submitted. The default is std.
- --nodes: Number of nodes requested for allocation.
- --ntasks: Number of MPI processes requested for allocation.
For parallel calculations, the number of processes must also be set explicitly in the orca_example.inp file, matching --ntasks (see the example input file after this list), using:
%pal nprocs 4 end
- --mem-per-cpu: Memory (in MB) allocated per core.
ORCA uses 3000 MB per core by default. This value can be changed in orca_example.inp with the directive:
%maxcore 3000
It is recommended to set %maxcore to about 75% of the memory requested from SLURM per core; here, 0.75 × 3998 MB ≈ 3000 MB.
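As an illustration, here is a minimal sketch of an orca_example.inp consistent with the submit script above; the method, basis set, and geometry are placeholders, not part of the original example:

! B3LYP def2-SVP Opt

%pal
   nprocs 4          # must match --ntasks in the submit script
end

%maxcore 3000        # ~75% of --mem-per-cpu=3998

* xyz 0 1
O   0.0000   0.0000   0.0000
H   0.0000   0.7572   0.5865
H   0.0000  -0.7572   0.5865
*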