GROMACS

GROMACS is a versatile package for performing molecular dynamics, that is, simulating the Newtonian equations of motion for systems with hundreds to millions of particles.


GROMACS is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions. Because GROMACS is extremely fast at calculating the nonbonded interactions that usually dominate simulations, many groups are also using it for research on non-biological systems, for example, polymers.

Versions installed in Pawsey systems

To check the currently installed versions, use the module avail command (installed versions may differ from those shown here):

Terminal 1. Checking for installed versions
$ module avail gromacs
------------------------- /software/setonix/2024.05/modules/zen3/gcc/12.2.0/applications --------------------------
   gromacs-amd-gfx90a/2023    gromacs/2022.5-mixed    gromacs/2023-mixed (D)
   gromacs/2022.5-double      gromacs/2023-double

Modules with the -amd-gfx90a suffix support GPU offloading and are meant to be used within the gpu partition. The -mixed suffix denotes mixed-precision installations and -double denotes double-precision installations.

GROMACS is compiled with the GNU programming environment.

All GROMACS installations on Setonix have been patched with Plumed.
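The binary name reflects the build: the double-precision modules provide gmx_mpi_d (as used in Listing 1), while the mixed-precision and GPU modules provide gmx_mpi (as used in Listing 2). As a quick sanity check after loading a module, you can print the version information; this is a sketch to be run on a Setonix login node:

```shell
# Load the double-precision build and confirm the binary is on the PATH.
module load gromacs/2023-double
gmx_mpi_d --version

# The mixed-precision and GPU builds install the binary as gmx_mpi instead.
```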

Example: Running GROMACS on CPU

This is an example of a GROMACS job queueing script.

Listing 1. Example of a job queueing script using GROMACS Test Case A
#!/bin/bash --login
#SBATCH --nodes=1
#SBATCH --ntasks=128
#SBATCH --exclusive
#SBATCH --time=00:05:00
#SBATCH --account=[your-project]

module load gromacs/2023-double

export OMP_NUM_THREADS=1

srun -N 1 -n 128 gmx_mpi_d mdrun -s ion_channel.tpr -maxh 0.50 -resethway -noconfout -nsteps 10000 -g logfile

For more information on how to run jobs on the CPU partitions see: Example Slurm Batch Scripts for Setonix on CPU Compute Nodes.
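To scale the same job across more than one node, increase --nodes and --ntasks proportionally and keep the srun options in step. A hedged sketch (the node and task counts are illustrative; Setonix CPU compute nodes have 128 cores each):

```shell
#!/bin/bash --login
#SBATCH --nodes=2
#SBATCH --ntasks=256          # 128 MPI tasks per node, illustrative
#SBATCH --exclusive
#SBATCH --time=00:05:00
#SBATCH --account=[your-project]

module load gromacs/2023-double

# One OpenMP thread per MPI task, as in Listing 1.
export OMP_NUM_THREADS=1

srun -N 2 -n 256 gmx_mpi_d mdrun -s ion_channel.tpr -maxh 0.50 -resethway -noconfout -nsteps 10000 -g logfile
```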

Running GROMACS on GPUs

GROMACS supports offloading some of its operations to GPUs. GPU acceleration is officially supported through the SYCL standard. Additionally, AMD staff maintain their own GROMACS GPU implementation using HIP; while significantly more performant, the AMD implementation is not officially endorsed by the GROMACS developers. The version currently installed on Setonix, gromacs-amd-gfx90a/2023, is the AMD port. We will eventually switch to the SYCL implementation, or provide both.

GPU offloading can be enabled with the following options:

  • -npme 1 -pme gpu: compute long-range (PME) interactions on the GPU, using a single dedicated task. 
  • -nb gpu: compute nonbonded interactions on the GPU.
  • -bonded gpu: compute bonded interactions on the GPU.
  • -update gpu: perform constraints and coordinate updates on the GPU. This option is not always available; GROMACS prints an error message when the operation cannot be performed on the GPU.
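Combined, a fully offloaded run could look like the following sketch (topol.tpr is a placeholder for your own input file; drop -update gpu or -pme gpu if GROMACS reports that they are not supported for your system):

```shell
# Offload nonbonded, bonded, PME and update work to the GPU,
# with a single dedicated PME task.
# topol.tpr is a placeholder input file name.
srun gmx_mpi mdrun -nb gpu -bonded gpu -pme gpu -npme 1 -update gpu -s topol.tpr
```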

More information can be found in Running mdrun with GPUs (external site).

As an example, we used the benchMEM.tpr benchmark case, which can be found on the following page: A free GROMACS benchmark set (external site). Here is a very simple submission script.

Listing 2. A sample batch script to submit a GROMACS job to GPU.
#!/bin/bash

#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --time=02:00:00
#SBATCH --partition=gpu
#SBATCH --account=[your-project]-gpu

module load gromacs-amd-gfx90a/2023

srun gmx_mpi mdrun -nb gpu -bonded gpu -ntomp 8 -s benchMEM.tpr

For more information on how to run jobs on the GPU partitions see Example Slurm Batch Scripts for Setonix on GPU Compute Nodes.

External links