

Gromacs at a glance

Purpose: molecular dynamics
Latest version: 2016-3
Availability: all machines
Licence: open source
Documentation: Reference Manual


Description

Gromacs is a general-purpose molecular dynamics program, notable for its very fast computation of non-bonded interactions.

It can run standard molecular dynamics simulations with a variety of integrators, as well as Langevin dynamics, energy minimisation, test-particle insertion and more. It offers several methods for computing electrostatics and van der Waals interactions, a range of thermostats and barostats, and 2- and 3-dimensional periodic boundary conditions.

A more detailed description can be found on this page.
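
As a quick illustration of the typical workflow, the sketch below preprocesses a run and then launches it. The file names md.mdp, conf.gro and topol.top are only placeholders for your own run parameters, starting coordinates and topology; the integrator, thermostat, barostat, etc. are all selected in the .mdp file:

# Combine run parameters, coordinates and topology into a portable run input (.tpr) file
gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# Run the simulation described by the .tpr file
gmx mdrun -deffnm topol

On our machines the MPI builds are launched as gmx_mpi through mpirun, as shown in the LSF scripts below.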


Licence

Gromacs is released under the GNU LGPL licence.


Available Versions

The currently available versions of Gromacs in our facilities are:

  • 4.5.5 (prades, pirineus)
  • 4.6.5 (all machines)
  • 5.1.1 (pirineus, collserola)
  • 5.1.2 (pirineus, collserola)
  • 5.1.2-omp (pirineus, collserola)
  • 5.1.3 (pirineus, collserola)
  • 2016 (pirineus, collserola)
  • 2016-3 (collserola)

All versions are compiled with MPI, except 5.1.2-omp, which supports hybrid MPI-OpenMP parallelisation.
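
With the hybrid build, each MPI rank runs several OpenMP threads. A minimal sketch of such a launch is shown below; the rank and thread counts are only illustrative and should multiply out to the number of cores you requested:

export OMP_NUM_THREADS=4                            # OpenMP threads per MPI rank
mpirun -np 6 gmx_mpi mdrun -ntomp 4 -s topol.tpr    # 6 ranks x 4 threads = 24 cores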

To load a version, call it in a script via 

module load gromacs/2016
 
source $GROMACS_HOME/bin/GMXRC
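
To check which versions are installed on the machine you are logged into, and that the loaded module has put the Gromacs binaries on your PATH, something along these lines should work (the binary is gmx_mpi for the 5.x and 2016 builds, and typically mdrun_mpi for the 4.x ones):

module avail gromacs    # list the Gromacs modules installed on this machine
module list             # confirm which module is currently loaded
which gmx_mpi           # verify that the Gromacs binary is on the PATH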

More detailed information on versions can be found here.

Benchmark

Figure: Gromacs 5.1.2 benchmark at collserola, showing actual speed-up (green) and ideal speed-up (blue) vs. number of cores for the NPT simulation of a protein with the Amber force field, SPC water, PME electrostatics and no PME-dedicated node.

Note that, in runs with a large number of cores, performance can improve greatly by finding the right balance between PME and particle-particle (PP) ranks for your specific problem. The plot above shows the default behaviour, without custom optimisations.


LSF Example Scripts

Collserola

#!/bin/bash
##
# IMPORTANT NOTICE: replace the email address
# and the working directories with your own info
##
#BSUB -J gromacs
#BSUB -o gromacs.log # send standard output here
#BSUB -e gromacs.err # send error output here
#
# Pick a queue
#BSUB -q short
#
# Send an email notice once the job is finished
#BSUB -N -u youremail@wherever
#
# Indicate the number of cores (keep it consistent with the -np value passed to mpirun below)
#BSUB -n 24
#
# Pick the machine
#BSUB -R collserola
 
date
 
# Set up the environment
. /opt/modules/default/init/bash
module load gromacs/5.1.2
source /prod/GROMACS/5.1.2/bin/GMXRC

# Launch mdrun via MPI
mpirun -np 24 -tm intel gmx_mpi mdrun -s <input .tpr file>

Pirineus

#!/bin/bash
##
# IMPORTANT NOTICE: replace the email address
# and the working directories with your own info
##
#BSUB -J gromacs
#BSUB -o gromacs.log # send standard output here
#BSUB -e gromacs.err # send error output here
#
# Pick a queue
#BSUB -q short
#
# Send an email notice once the job is finished
#BSUB -N -u youremail@wherever
#
# Indicate the number of cores (keep it consistent with the -np value passed to mpirun below)
#BSUB -n 24
#
# Pick the machine
#BSUB -R pirineus
 
date
 
# Set up the environment
. /opt/modules/default/init/bash
module load gromacs/5.1.2
source /prod/GROMACS/5.1.2/bin/GMXRC
export KMP_AFFINITY=disabled # disable Intel's own thread affinity so that omplace handles thread pinning

# Launch mdrun via MPI
mpirun -np 24 omplace gmx_mpi mdrun -s <input .tpr file>

For optimal performance, before a long production run you are advised to experiment with how ranks and threads are allocated to PME and domain-decomposition (particle-particle) tasks, using mdrun options such as -npme. Please refer to the Gromacs documentation for more details.
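
As a sketch of what such an experiment can look like (the rank counts are only illustrative), you can either fix the number of PME-dedicated ranks by hand or let gmx tune_pme benchmark several PME/PP splits for you:

# Reserve 6 of the 24 ranks for PME, leaving 18 for particle-particle work
mpirun -np 24 gmx_mpi mdrun -npme 6 -s <input .tpr file>

# Or let Gromacs search for a good split automatically
gmx tune_pme -np 24 -s <input .tpr file>

gmx tune_pme drives a series of short mdrun benchmarks; see its help page for how to point it at the MPI mdrun binary.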

Place the appropriate LSF script in a file (for instance gromacs.lsf) and submit the job with the command

bsub < gromacs.lsf
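
Once the job is submitted, it can be followed with the standard LSF commands (bsub prints the job ID on submission):

bjobs             # list your pending and running jobs
bpeek <jobid>     # peek at the standard output of a running job
bkill <jobid>     # cancel a job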

Tutorial

You can follow this tutorial on simulating a protein with Gromacs to get hands-on experience with the program.

