Useful commands
- conda env list: List available environments.
- conda activate env_name: Activate the environment env_name.
- conda list: List installed packages in the active conda environment.
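Putting the commands above together, a typical interactive session might look like this (the environment name `tensorflow` is illustrative):

```shell
# List the environments available on the system
conda env list

# Activate one of them (name is an example)
conda activate tensorflow

# Show the packages installed in the active environment
conda list

# Return to the previous environment when done
conda deactivate
```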
Environments
By default, some basic environments are installed in the system, grouped by field of knowledge:
- Machine learning / Deep learning:
- TensorFlow
- Pytorch
- Keras (Neural networks)
- Sklearn
- Quantum chemistry
- PySCF
- Modelling & optimization
- Pyomo
- Statistics & computing
- Dask
- Pandas
- Theano
With Conda, you can create, export, remove, and update your own custom environments, each with different versions of Python and/or packages installed. These custom environments are installed in ~/.conda/envs/<env_name> by default. Switching between environments is called activating the environment.
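As a sketch of that workflow, the following creates, exports, and removes a custom environment (the environment name, Python version, and packages are illustrative):

```shell
# Create a new environment with a specific Python version
conda create --name myproject python=3.10

# Activate it and install the packages you need
conda activate myproject
conda install numpy pandas

# Export the environment to a YAML file for reproducibility
conda env export > myproject.yml

# Remove the environment when it is no longer needed
conda deactivate
conda env remove --name myproject
```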
SLURM Submit script example
This script example has been generated using the Job Script Generator.
conda_example.slm
#!/bin/bash
#SBATCH -J conda_example
#SBATCH -e conda_example.err
#SBATCH -o conda_example.out
#SBATCH -p std
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=2000MB

module load apps/conda/3

INPUT_DIR=${SLURM_SUBMIT_DIR}
OUTPUT_DIR=${SLURM_SUBMIT_DIR}

cd $SCRATCH
cp -r $INPUT_DIR/* $SCRATCH

conda activate environment_name
python example.py

cp ./* $OUTPUT_DIR
Sbatch options:
- -J: Specify a name for the job allocation. The default is the name of the batch script.
- -e: Specify a name for the error output file.
- -o: Specify a name for the output file.
- -p: Specify the name of the partition (queue) where the job will be submitted. The default is std.
- --nodes: Number of nodes requested for allocation.
- --ntasks: Number of processes requested for allocation.
- --mem, --mem-per-cpu: Memory allocated per node or per core, respectively. If neither is specified, SLURM assigns:
- 3998MB per requested core in std and gpu nodes.
- 24180MB per requested core in mem nodes.
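Assuming the script above is saved as conda_example.slm in the submit directory, it can be submitted and monitored as follows (squeue is the standard SLURM command for listing jobs):

```shell
# Submit the job script to the scheduler
sbatch conda_example.slm

# Check the status of your queued and running jobs
squeue -u $USER

# Any #SBATCH option can also be overridden on the command line,
# e.g. to request more memory for a single run:
sbatch --mem=4000MB conda_example.slm
```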