```bash
#!/bin/bash
#SBATCH -J orca_example
#SBATCH -e orca_example.err
#SBATCH -o orca_example.out
#SBATCH -p std
#SBATCH --ntasks=4
#SBATCH -t 02-00:00

module load apps/orca/4.1.0

## Modify the input and output files!
INPUT_FILE=orca_example.inp
OUTPUT_FILE=orca_example.out

## You don't need to modify anything else
cp -r ${SLURM_SUBMIT_DIR}/${INPUT_FILE} ${SCRATCH}
cd ${SCRATCH}
$(which orca) ${INPUT_FILE} > ${OUTPUT_FILE}
cp ./${OUTPUT_FILE} ${SLURM_SUBMIT_DIR}
```
The options used in the example are detailed below. For more information and a comprehensive list of available options, see the sbatch manual page.
For parallel calculations, the number of processes must be explicitly indicated in the orca_example.inp file using:
```
%pal nprocs 4 end
```
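For context, a minimal sketch of a complete input file with the `%pal` block might look like the following (the method, basis set, and geometry are illustrative placeholders, not part of this example job):

```
! B3LYP def2-SVP Opt
%pal
  nprocs 4
end
* xyz 0 1
  O   0.000   0.000   0.000
  H   0.000   0.000   0.960
  H   0.930   0.000  -0.240
*
```

Note that `nprocs` should match the `--ntasks` value requested from SLURM, so the job does not oversubscribe or waste the allocated cores.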
-n, --ntasks: Number of tasks.
ORCA uses 3000 MB per core by default. This value can be changed in orca_example.inp using the directive:
```
%maxcore 3000 end
```
It is recommended to set %maxcore to roughly 75% of the memory requested from SLURM, which in this case corresponds to 3990 MB per core.
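The 75% rule above can be applied with a line of shell arithmetic. This is a sketch, assuming the per-core memory granted by SLURM is 5320 MB (a value inferred here only because 75% of it gives the 3990 MB figure quoted above; check your partition's actual limit):

```shell
# Assumed per-core memory requested from SLURM, in MB (hypothetical value)
MEM_PER_CPU_MB=5320

# %maxcore should be ~75% of it, leaving headroom for ORCA's overhead
MAXCORE_MB=$(( MEM_PER_CPU_MB * 75 / 100 ))

echo "$MAXCORE_MB"   # 3990
```

The resulting number is what goes into the `%maxcore ... end` directive in the input file.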