NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM and X-PLOR.

NAMD is distributed free of charge with source code. You can build NAMD yourself or download binaries for a wide variety of platforms. Tutorials on the software website show you how to use NAMD and VMD for biomolecular modelling.

More information: NAMD homepage (external site)


Before you begin

Your Pawsey user account must be a member of the namd group in LDAP before you can access the module.
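One quick way to check this from a Setonix login node (assuming LDAP groups are visible to the standard groups command) is:

$ groups | tr ' ' '\n' | grep -w namd

If namd does not appear in the output, follow the licensing steps below to request access.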

NAMD is licensed software. Read the NAMD licensing agreement (external site), and then let us know that you agree to the licence. To give your written confirmation either send an email to help@pawsey.org.au or open a ticket using the User Support Portal.

Important

NAMD (at least version 2.10) is compiled to support SMP (Shared-Memory and Network-Based Parallelism). To achieve optimal performance you must be careful with the options provided to both srun and NAMD itself. Running NAMD with MPI parallelism alone can result in very poor performance, and you may need to experiment with hybrid thread/MPI placement options.

For more information, refer to Shared-Memory and Network-Based Parallelism (external site).
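As an illustration only (the task counts and input name below are placeholders, not a tested recipe), the contrast is between launching one MPI rank per core with no NAMD thread options, and the SMP-style launches shown in the examples below, where a few ranks per node drive many worker threads:

# Pure MPI-style launch: one rank per core, no SMP worker threads (can be slow)
srun -n 1024 namd2 +ofi_runtime_tcp config_input

# SMP-style launch: a few ranks per node, most cores given to +ppn worker threads
srun -n 16 -N 8 -c 64 namd2 +ofi_runtime_tcp +ppn 63 config_input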

How to run NAMD on Setonix

To run NAMD on Setonix, both the GNU Programming Environment and namd modules must be loaded.

$ module load namd/2.14

The NAMD executable is namd2, and multinode NAMD jobs on Setonix require the +ofi_runtime_tcp option to run successfully.
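For example, to load the module and confirm that the namd2 executable is on your PATH (the module version shown is an example; check what is currently installed with module avail):

$ module avail namd
$ module load namd/2.14
$ which namd2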

Example: Slurm batch scripts

A problem with a modest number of atoms (say 50,000) can be run in the following way.

Use srun to specify 16 MPI tasks distributed across 8 nodes (2 MPI tasks per node, 1 per socket). Each MPI task is bound to one socket (--cpu_bind=sockets).


Listing 1. Example specifying 16 MPI tasks distributed between 8 nodes
#!/bin/bash --login
#SBATCH --nodes=8
#SBATCH --exclusive
#SBATCH --time=00:30:00
#SBATCH --account=[your-account]
#SBATCH --cpus-per-task=64

module load namd/2.14
 
srun --export=ALL -n 16 -N 8 --threads-per-core=1 --cpu_bind=sockets -c 64 namd2 +ofi_runtime_tcp +ppn 63 +pemap 1-63,65-127 +commap 0,64 config_input

Note that the following arguments to NAMD itself are essential for optimal performance: +ppn 63 +pemap 1-63,65-127 +commap 0,64. They match the arguments given to srun: +commap 0,64 pins the communication thread of each of the 2 MPI tasks to core 0 and core 64 of each node (the first core of each socket), while +ppn 63 and +pemap 1-63,65-127 place 63 worker threads per task on the remaining cores of the corresponding socket.
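If you adapt this layout to a different number of tasks per node, the core maps can be derived mechanically. The following is a minimal sketch (not a Pawsey-provided tool) that assumes 128 cores per node, splits them evenly between the tasks on a node, and reserves the first core of each block for the communication thread:

#!/bin/bash
# Sketch only: derive +ppn/+pemap/+commap for an even split of node cores.
NCORES=128    # cores per Setonix CPU node (assumed here)
NTASKS=2      # MPI tasks per node, as in Listing 1
CHUNK=$(( NCORES / NTASKS ))

pemap=""; commap=""
for (( t = 0; t < NTASKS; t++ )); do
  first=$(( t * CHUNK ))                # communication-thread core for this task
  commap+="${commap:+,}${first}"
  pemap+="${pemap:+,}$(( first + 1 ))-$(( first + CHUNK - 1 ))"  # worker-thread cores
done

echo "+ppn $(( CHUNK - 1 )) +pemap ${pemap} +commap ${commap}"
# Prints: +ppn 63 +pemap 1-63,65-127 +commap 0,64

Setting NTASKS=8 reproduces the maps used in Listing 2 below.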

Another example assigns one task per NUMA domain, with 8 tasks per node and 16 threads per task (one communication thread plus 15 worker threads), for a total of 32 tasks across 4 nodes.

Listing 2. Example specifying 32 tasks with 8 tasks per node
#!/bin/bash -l
#SBATCH --time=06:00:00
#SBATCH --nodes=4
#SBATCH --exclusive
#SBATCH --account=[your-account]
#SBATCH --cpus-per-task=16

module load namd/2.14
srun --export=ALL -n 32 -N 4 --cpu_bind=rank_ldom -c 16 --threads-per-core=1 namd2 +ofi_runtime_tcp +ppn 15 +pemap 1-15,17-31,33-47,49-63,65-79,81-95,97-111,113-127 +commap 0,16,32,48,64,80,96,112 config_input
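Whichever layout you use, save the batch script to a file and submit it with sbatch (the script name below is only an example):

$ sbatch run_namd.sh
$ squeue -u $USER     # check that the job is queued or running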