Useful commands
- conda env list: List available environments.
- conda activate env_name: Activate the environment env_name.
- conda list: List installed packages in the active conda environment.
Environments
Python 3.x is the current major version, so it should be your default choice.
Switching or moving between environments is called activating the environment.
By default, some basic environments are installed on the system:
- Machine learning / Deep learning:
  - TensorFlow
  - PyTorch
  - Keras (Neural networks)
  - Scikit-learn
- Modelling optimization:
  - Pyomo
- Statistics & computing:
  - Dask
  - Pandas
  - Theano
With Conda, you can create, export, remove, and update your own custom environments with different versions of Python and/or packages installed in them. These custom environments are installed in ~/.conda/envs/<env_name> by default.
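A custom environment can also be defined in a specification file and created with conda env create -f environment.yml. The sketch below is a hypothetical environment.yml; the environment name, Python version, and package list are purely illustrative, not system defaults.

```yaml
# Hypothetical environment specification (illustrative names and versions)
name: my_custom_env
channels:
  - defaults
dependencies:
  - python=3.10
  - pandas
  - pyomo
```

Defining environments this way makes them easy to export, share, and recreate on other systems.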
SLURM Submit script example
This script example has been generated using the Job Script Generator.
```bash
#!/bin/bash
#SBATCH -J conda_example
#SBATCH -e conda_example.err
#SBATCH -o conda_example.out
#SBATCH -p std
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=2000MB

module load apps/conda/3

INPUT_DIR=${SLURM_SUBMIT_DIR}
OUTPUT_DIR=${SLURM_SUBMIT_DIR}

cd $SCRATCH
cp -r $INPUT_DIR/* $SCRATCH
conda activate environment_name
python example.py
cp ./* $OUTPUT_DIR
```
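The submit script stages input files to $SCRATCH, activates a conda environment, and runs example.py. As a minimal sketch of what example.py might contain (only the file name comes from the script above; the body here is purely illustrative):

```python
# example.py - minimal placeholder for the job's Python payload (illustrative)
import os
import platform


def main() -> str:
    # Report which interpreter and conda environment the job actually ran in;
    # CONDA_DEFAULT_ENV is set by `conda activate` in the submit script
    env_name = os.environ.get("CONDA_DEFAULT_ENV", "unknown")
    return f"Python {platform.python_version()} in conda env '{env_name}'"


if __name__ == "__main__":
    print(main())
```

Printing to standard output is enough here: SLURM captures it in the file given by the -o option (conda_example.out).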
Sbatch options:
- -J: Specify a name for the job allocation. The default is the name of the batch script.
- -e: Specify a name for the error output file.
- -o: Specify a name for the output file.
- -p: Specify the name of the partition (queue) where the job will be submitted. The default is std.
- --nodes: Number of nodes requested for allocation.
- --ntasks: Number of processes requested for allocation.
- --mem, --mem-per-cpu: Memory allocated per node or per core, respectively. If neither is specified, SLURM assigns:
- 3998MB per requested core in std and gpu nodes.
- 24180MB per requested core in mem nodes.
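To make the memory defaults above concrete, here is a small sketch; the per-core values and partition names are taken from the list above, while the helper function itself is illustrative:

```python
# Default memory per requested core when --mem/--mem-per-cpu are omitted
# (values from the partition defaults listed above)
DEFAULT_MEM_PER_CORE_MB = {"std": 3998, "gpu": 3998, "mem": 24180}


def default_job_memory_mb(partition: str, ntasks: int) -> int:
    """Total memory SLURM assigns to a job that does not request memory."""
    return DEFAULT_MEM_PER_CORE_MB[partition] * ntasks


print(default_job_memory_mb("std", 4))  # 4 cores on std -> 15992
print(default_job_memory_mb("mem", 2))  # 2 cores on mem -> 48360
```

So a 4-task job on the std partition with no --mem option receives 4 x 3998MB = 15992MB in total.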