Simulating an Interchange with LAMMPS
In this example, we'll quickly construct an Interchange and then run a simulation in LAMMPS.

We need an Interchange to get started, so let's put that together quickly. For more explanation on this process, take a look at the packed_box and protein_ligand examples.
import mdtraj as md
import nglview
from lammps import lammps
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.typing.engines.smirnoff import ForceField
from openff.toolkit.utils import get_data_file_path
from openff.units import unit
from openmm.app import PDBFile
from openff.interchange import Interchange
from openff.interchange.components.mdconfig import MDConfig
# Read a structure from the Toolkit's test suite into a Topology
pdbfile = PDBFile(
    get_data_file_path("systems/packmol_boxes/propane_methane_butanol_0.2_0.3_0.5.pdb")
)
molecules = [Molecule.from_smiles(smi) for smi in ["CCC", "C", "CCCCO"]]
off_topology = Topology.from_openmm(pdbfile.topology, unique_molecules=molecules)
# Construct the Interchange with the OpenFF "Sage" force field
interchange = Interchange.from_smirnoff(
    force_field=ForceField("openff-2.0.0.offxml"),
    topology=off_topology,
)
interchange.positions = pdbfile.positions
Tada! A beautiful solvent system:
interchange.visualize("nglview")
Run a simulation
First, we export a .lmp file that can be read by LAMMPS' read_data command:
interchange.to_lammps("interchange.lmp")
Now we need to write an input file for LAMMPS. Parts of these input files depend on force field parameters, so we should use a sample input file written for our interchange as a starting point. We can generate such a sample file from MDConfig:
mdconfig = MDConfig.from_interchange(interchange)
mdconfig.write_lammps_input(input_file="auto_generated.in")
with open("auto_generated.in") as f:
    print(f.read())
units real
atom_style full
dimension 3
boundary p p p
bond_style hybrid harmonic
angle_style hybrid harmonic
dihedral_style hybrid fourier
improper_style cvff
special_bonds lj 0.0 0.0 0.5 coul 0.0 0.0 0.8333333333
pair_style lj/cut/coul/long 9.0 9.0
pair_modify mix arithmetic tail yes
read_data out.lmp
thermo_style custom ebond eangle edihed eimp epair evdwl ecoul elong etail pe
kspace_style pppm 1e-4
run 0
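The `special_bonds` line above encodes the Amber-style exclusions Sage uses: nonbonded interactions between atoms separated by one or two bonds (1-2 and 1-3 pairs) are excluded entirely, while 1-4 pairs are scaled by 0.5 for Lennard-Jones and 1/1.2 ≈ 0.8333 for Coulomb. A minimal plain-Python sketch of how those factors apply (purely illustrative, not LAMMPS internals):

```python
# Amber-style scaling factors from the special_bonds line:
# index 0 -> 1-2 pairs, 1 -> 1-3 pairs, 2 -> 1-4 pairs
LJ_SCALES = (0.0, 0.0, 0.5)
COUL_SCALES = (0.0, 0.0, 1.0 / 1.2)  # 0.8333333333...


def scaled_pair_energy(e_lj, e_coul, bond_separation):
    """Scale a pair's LJ and Coulomb energies by its bonded separation.

    bond_separation is the number of bonds between the two atoms
    (1 = 1-2 pair, 2 = 1-3, 3 = 1-4); pairs separated by more than
    three bonds interact at full strength.
    """
    if bond_separation > 3:
        return e_lj + e_coul
    idx = bond_separation - 1
    return LJ_SCALES[idx] * e_lj + COUL_SCALES[idx] * e_coul
```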
That sample file will only perform a single-point energy calculation; here's a more complete file that includes the above parameters but will run an actual MD simulation. Note that out.lmp is changed to interchange.lmp to reflect the filename we used earlier.

Note that this is still just an example! You should carefully write and inspect input files for all production simulations.
lammps_in = """ # These commands may not be appropriate for all systems
units real
atom_style full
# PBC in 3 dimensions
dimension 3
boundary p p p
# Bonded interactions in Sage force field
bond_style hybrid harmonic
angle_style hybrid harmonic
dihedral_style hybrid fourier
improper_style cvff
special_bonds lj 0.0 0.0 0.5 coul 0.0 0.0 0.8333333333
# Non-bonded interactions in Sage force field
pair_style lj/cut/coul/long 9.0 9.0
pair_modify mix arithmetic tail yes
# Load the parameterized system
read_data interchange.lmp
# Thermostat and velocity generation
fix 3 all nvt temp 300.0 300.0 500
velocity all create 300.0 29348 mom yes rot yes
# Output control
dump traj all dcd 10 traj.dcd
thermo_style custom ebond eangle edihed eimp epair evdwl ecoul elong etail pe
# PME electrostatics in Sage force field
kspace_style pppm 1e-6
# Run for 1000 steps at 2 fs δt
timestep 2
run 1000
"""
with open("lammps.in", "w") as f:
    f.write(lammps_in)
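The same edits could also be applied programmatically, starting from the auto-generated file rather than writing the script by hand. A minimal sketch with a hypothetical helper, assuming the generated file points at out.lmp and ends with `run 0` as shown above:

```python
def build_md_input(script: str) -> str:
    """Adapt an MDConfig-generated single-point input into a short MD run.

    Assumes the generated script reads out.lmp and ends with `run 0`,
    as in the sample above; both assumptions should be checked against
    the file you actually generated.
    """
    # Point read_data at the file we exported earlier
    script = script.replace("read_data out.lmp", "read_data interchange.lmp")
    # Swap the single-point evaluation for a short NVT run
    md_commands = "\n".join(
        [
            "fix 3 all nvt temp 300.0 300.0 500",
            "velocity all create 300.0 29348 mom yes rot yes",
            "dump traj all dcd 10 traj.dcd",
            "timestep 2",
            "run 1000",
        ]
    )
    return script.replace("run 0", md_commands)
```

With the files above, `build_md_input(open("auto_generated.in").read())` would produce a script close to the hand-written one, minus the comments.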
We'll use LAMMPS' Python interface to run the simulation. lmp.file("lammps.in") is equivalent to lammps -in lammps.in on the command line. This will run in serial, which is fine for so few steps and such a small system.
lmp = lammps()
lmp.file("lammps.in")
LAMMPS (2 Aug 2023)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
Reading data file ...
triclinic box = (-1.655 -2.004 -2.677) to (32.992 32.643 31.97) with tilt (0 0 0)
1 by 1 by 1 MPI processor grid
reading atoms ...
3808 atoms
scanning bonds ...
4 = max bonds/atom
scanning angles ...
6 = max angles/atom
scanning dihedrals ...
12 = max dihedrals/atom
reading bonds ...
3468 bonds
reading angles ...
6086 angles
reading dihedrals ...
7174 dihedrals
Finding 1-2 1-3 1-4 neighbors ...
special bond factors lj: 0 0 0.5
special bond factors coul: 0 0 0.8333333333
4 = max # of 1-2 neighbors
6 = max # of 1-3 neighbors
10 = max # of 1-4 neighbors
14 = max # of special neighbors
special bonds CPU = 0.002 seconds
read_data CPU = 0.055 seconds
PPPM initialization ...
using 12-bit tables for long-range coulomb (src/kspace.cpp:342)
G vector (1/distance) = 0.33515343
grid = 32 32 32
stencil order = 5
estimated absolute RMS force accuracy = 0.00028214644
estimated relative force accuracy = 8.4967562e-07
using double precision FFTW3
3d grid and FFT values/proc = 59319 32768
Generated 10 of 10 mixed pair_coeff terms from arithmetic mixing rule
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 11
ghost atom cutoff = 11
binsize = 5.5, bins = 7 7 7
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut/coul/long, perpetual
attributes: half, newton on
pair build: half/bin/newton/tri
stencil: half/bin/3d/tri
bin: standard
Setting up Verlet run ...
Unit style : real
Current step : 0
Time step : 2
Per MPI rank memory allocation (min/avg/max) = 23.98 | 23.98 | 23.98 Mbytes
E_bond E_angle E_dihed E_impro E_pair E_vdwl E_coul E_long E_tail PotEng
11.541868 4291.6175 313.10955 0 -2157.8002 -1715.4373 5816.1679 -6258.5308 -116.79125 2458.4687
1077.6693 5476.3857 516.05351 0 -1855.4265 -1271.7292 5681.8912 -6265.5885 -116.79125 5214.682
Loop time of 29.0881 on 1 procs for 1000 steps with 3808 atoms
Performance: 5.941 ns/day, 4.040 hours/ns, 34.378 timesteps/s, 130.912 katom-step/s
84.5% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 18.953 | 18.953 | 18.953 | 0.0 | 65.16
Bond | 0.95191 | 0.95191 | 0.95191 | 0.0 | 3.27
Kspace | 4.9923 | 4.9923 | 4.9923 | 0.0 | 17.16
Neigh | 3.9154 | 3.9154 | 3.9154 | 0.0 | 13.46
Comm | 0.13078 | 0.13078 | 0.13078 | 0.0 | 0.45
Output | 0.020409 | 0.020409 | 0.020409 | 0.0 | 0.07
Modify | 0.081397 | 0.081397 | 0.081397 | 0.0 | 0.28
Other | | 0.04326 | | | 0.15
Nlocal: 3808 ave 3808 max 3808 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 12620 ave 12620 max 12620 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 971409 ave 971409 max 971409 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 971409
Ave neighs/atom = 255.0969
Ave special neighs/atom = 8.3392857
Neighbor list builds = 72
Dangerous builds = 0
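For reference, the command-line equivalent of the Python call above can be assembled as below. This is a hedged sketch with a hypothetical helper: the executable name varies between installs (e.g. lmp, lmp_serial, lmp_mpi), so adjust it to match yours.

```python
import shlex


def lammps_command(input_file: str, executable: str = "lammps", n_procs: int = 1) -> str:
    """Build a shell command equivalent to lammps().file(input_file).

    The default executable name is an assumption; LAMMPS binaries are
    commonly named lmp, lmp_serial, or lmp_mpi depending on the build.
    """
    argv = [executable, "-in", input_file]
    if n_procs > 1:
        # Parallel runs are typically launched through mpirun/mpiexec
        argv = ["mpirun", "-np", str(n_procs)] + argv
    return shlex.join(argv)
```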
And now we visualize! We can construct an MDTraj Topology from our Interchange by using OpenMM as a lingua franca. LAMMPS produces coordinates that are in the central unit cell, so for a simple system like this we just need to make molecules whole to visualize:
traj = md.load(
    "traj.dcd",
    top=md.Topology.from_openmm(interchange.topology.to_openmm()),
)
traj.make_molecules_whole()
nglview.show_mdtraj(traj)
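Making molecules whole amounts to shifting each atom by whole box lengths so it sits within half a box of its bonded neighbor (the minimum-image convention). The core idea in one dimension, as a plain-Python sketch with a hypothetical helper (not MDTraj's actual implementation, which walks the bond graph in three dimensions):

```python
def make_whole_1d(coords, box_length):
    """Unwrap a chain of bonded atoms along one periodic axis.

    Each atom is shifted by multiples of box_length so that it lies
    within half a box length of the previous atom in the chain.
    """
    whole = [coords[0]]
    for x in coords[1:]:
        dx = x - whole[-1]
        # Apply the minimum-image convention to the displacement
        dx -= box_length * round(dx / box_length)
        whole.append(whole[-1] + dx)
    return whole
```

For example, two bonded atoms wrapped to opposite faces of a 10 Å box (at 9.5 and 0.2) are unwrapped to 9.5 and 10.2, restoring the short bond.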