
LAMMPS


LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code with a focus on materials modeling.

Work in Progress for Phase-2 Documentation

The content of this section is currently being updated to include material relevant to Phase-2 of Setonix and the use of GPUs.
All existing material related to Phase-1 and the use of CPU compute nodes remains valid and up to date.

LAMMPS has potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

For more information, see the LAMMPS Molecular Dynamics Simulator homepage (external site).


Current available versions of LAMMPS on Setonix

To check the currently installed versions of LAMMPS in the Pawsey software stack, use the module avail command:

Listing 1. Checking available installations
$ module avail lammps
------------------------- /software/setonix/2024.05/modules/zen3/gcc/12.2.0/applications --------------------------
   lammps-amd-gfx90a/20230802    lammps-amd-gfx90a/20230802.3 (D)    lammps/20230802    lammps/20230802.3 (D)
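
To use one of these builds, load the corresponding module before running LAMMPS, for example (here the default CPU version from the listing above):

$ module load lammps/20230802.3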

Example: Running LAMMPS on Setonix

The following job script is an example of running LAMMPS on Setonix. Change the project name [your-project] to your own project code, and adjust the node count, input file and output file as needed.

Listing 2. Example job script for running LAMMPS
#!/bin/bash --login
  
# Replace "[your-project]" with the appropriate project code
# Maximum wall-clock time limit 24 hours (--time=24:00:00)
#SBATCH --job-name=lammps
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --account=[your-project]
#SBATCH --time=24:00:00
#SBATCH --cpus-per-task=1


# Load the lammps module so we can find the "lmp"
# executable
  
module load lammps/<version>
  
# Launch with srun (essential) using 128 MPI tasks ("-n 128")
  
srun --export=all -N 1 -n 128 -c 1 lmp -in lammps.inp -log lammps.log


To run the GPU-accelerated version, load the lammps-amd-gfx90a module (shown in Listing 1) instead.
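
A minimal sketch of a GPU job script is given below. The partition name, GPU request, account suffix and the use of the KOKKOS package command-line switches are assumptions for illustration; check the Setonix GPU documentation and the module help for the settings recommended for your project.

Listing 3. Sketch of a GPU job script (illustrative settings)
#!/bin/bash --login

# Sketch only: partition name, GPU request, account suffix and Kokkos
# settings are assumptions and may need adjusting for your project.
#SBATCH --job-name=lammps-gpu
#SBATCH --nodes=1
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --account=[your-project]-gpu
#SBATCH --time=24:00:00

# Load the GPU-enabled LAMMPS module
module load lammps-amd-gfx90a/<version>

# One MPI task driving one GPU, assuming the build provides the KOKKOS
# package ("-k on g 1 -sf kokkos" enables Kokkos acceleration on 1 GPU)
srun --export=all -N 1 -n 1 lmp -k on g 1 -sf kokkos -in lammps.inp -log lammps.log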

Restarting a LAMMPS job

Important

Checkpointing and restarting are important when running codes on HPC resources because of wall-time constraints, and LAMMPS has capabilities for restarting simulations.

As there are many different ways of setting up a LAMMPS simulation, and various packages may be used as well, it is not possible to provide a general recipe for restart. More details are on the read_restart manual page (external site).

Here we're providing some guidelines for the LAMMPS input file; a sketch putting them together is shown after the list.

  1. Write checkpoints during your simulation. For example, to write a file "checkpoint_file" every 100,000 steps, use restart 100000 <checkpoint_file>.
  2. Change the way the input atomic structure is provided, so that a restart file from the previous job is used. In practice, replace the input directive using read_data (or whatever other method you use to input the structure) with read_restart <checkpoint_file>.
  3. Do NOT initialise velocities when restarting; for instance, remove directives like velocity all create 300 123456.
  4. Remove any preliminary/initialisation step, including but not limited to energy minimisation with minimize.
  5. Some properties are not included in the checkpoint file and then need to be provided in the input file in some other way; check the LAMMPS documentation for the packages and options you're using.
  6. Be aware that some options and packages may not allow for an exactly reproducible restart; for instance, this is the case for fix shake. Refer to the LAMMPS documentation for the packages and options you're using.
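
Putting these guidelines together, a minimal sketch of the restart-related lines in a LAMMPS input file is shown below. The file names, checkpoint frequency and step count are placeholders for illustration only.

Listing 4. Sketch of restart-related input directives
# --- original run: write a checkpoint every 100,000 steps ---
restart       100000 checkpoint_file

# --- restarted run: read a checkpoint instead of the original structure ---
# replace the original read_data (or equivalent) directive with:
read_restart  checkpoint_file.<timestep>   # use the checkpoint written by the previous run

# do NOT re-initialise velocities (no "velocity all create ..." line)
# do NOT repeat preliminary steps such as "minimize"

run           500000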

Ultimately, it is up to you as the user to assess whether you're happy or not with how the restart behaves.

Postponing trajectory analysis operations in a LAMMPS job

LAMMPS is optimised to run molecular dynamics simulations at scale on an HPC cluster such as Setonix. In this context, computing averages and similar quantities should be treated as post-processing operations: if performed during the main simulation, through input directives such as fix ave/time and similar, they can slow it down significantly. Aim to minimise the number of computations that are not strictly required during the simulation, and perform them afterwards as post-processing, either on a dedicated cluster such as Zeus or on a workstation.

Postponing computations until after the simulation is straightforward for anything computed from the atomic positions (and possibly velocities), as these are normally written to the trajectory files. If you need other quantities (forces, stress, etc.), you can have them printed to additional output files and, again, perform any further analysis as post-processing.
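
For example, per-atom forces can be written alongside positions in a separate file with a custom dump; the dump name, file name and output frequency below are placeholders:

# write atom id, type, positions and forces every 1,000 steps
dump forces_out all custom 1000 forces.lammpstrj id type x y z fx fy fz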

There are quite a few publicly available post-processing tools, such as MDAnalysis and MDTraj. The VMD visualisation tool also provides some post-processing capabilities.

Related pages

External links
