Abstract
Since the classical molecular dynamics simulator LAMMPS was released as an open-source code in 2004, it has become a widely used tool for particle-based modeling of materials at length scales ranging from atomic to mesoscale to continuum. Reasons for its popularity are that it provides a wide variety of particle interaction models for different materials, that it runs on any platform from a single CPU core to the largest supercomputers with accelerators, and that it gives users control over simulation details, either via the input script or by adding code for new interatomic potentials, constraints, diagnostics, or other features needed for their models. As a result, hundreds of people have contributed new capabilities to LAMMPS, and it has grown from fifty thousand lines of code in 2004 to a million lines today. In this paper, several of the fundamental algorithms used in LAMMPS are described, along with the design strategies that have made it flexible for both users and developers. We also highlight some capabilities recently added to the code that were enabled by this flexibility, including dynamic load balancing, on-the-fly visualization, magnetic spin dynamics models, and quantum-accuracy machine learning interatomic potentials.

Program Summary
Program Title: Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)
CPC Library link to program files: https://doi.org/10.17632/cxbxs9btsv.1
Developer's repository link: https://github.com/lammps/lammps
Licensing provisions: GPLv2
Programming language: C++, Python, C, Fortran
Supplementary material: https://www.lammps.org
Nature of problem: Many science applications in physics, chemistry, materials science, and related fields require parallel, scalable, and efficient generation of long, stable classical particle dynamics trajectories. Within this common problem definition there lies a great diversity of use cases, distinguished by different particle interaction models and external constraints, as well as timescales and lengthscales ranging from atomic to mesoscale to macroscopic.
Solution method: The LAMMPS code uses parallel spatial decomposition, distributed neighbor lists, and parallel FFTs for long-range Coulombic interactions [1]. The time integration algorithm is based on the Størmer-Verlet symplectic integrator [2], which provides better stability than higher-order non-symplectic methods. In addition, LAMMPS supports a wide range of interatomic potentials, constraints, diagnostics, software interfaces, and pre- and post-processing features.
Additional comments including restrictions and unusual features: This paper serves as the definitive reference for the LAMMPS code.
References:
[1] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, J. Comp. Phys. 117 (1995) 1–19.
[2] L. Verlet, Computer experiments on classical fluids: I. Thermodynamical properties of Lennard–Jones molecules, Phys. Rev. 159 (1967) 98–103.
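The Størmer-Verlet integrator mentioned in the solution method is commonly implemented in its velocity-Verlet form: a half-step velocity update, a full position update, a force recomputation, and a second half-step velocity update. The following is a minimal illustrative sketch in Python, not LAMMPS source code; the harmonic-oscillator force used to exercise it is an assumed toy example:

```python
import numpy as np

def velocity_verlet_step(x, v, f, mass, dt, force_fn):
    """Advance one velocity-Verlet (Stormer-Verlet) time step.

    x, v, f : arrays of positions, velocities, and current forces
    force_fn: callable returning forces for given positions
    """
    v_half = v + 0.5 * dt * f / mass      # half-step velocity update
    x_new = x + dt * v_half               # full position update
    f_new = force_fn(x_new)               # recompute forces at new positions
    v_new = v_half + 0.5 * dt * f_new / mass  # second half-step velocity update
    return x_new, v_new, f_new

# Toy example: 1D harmonic oscillator, F = -x, mass = 1, so total
# energy E = v^2/2 + x^2/2 should stay near its initial value 0.5.
force = lambda x: -x
x, v = np.array([1.0]), np.array([0.0])
f = force(x)
dt = 0.01
for _ in range(int(2 * np.pi / dt)):      # integrate roughly one period
    x, v, f = velocity_verlet_step(x, v, f, 1.0, dt, force)
energy = 0.5 * float(v[0])**2 + 0.5 * float(x[0])**2
```

Because the scheme is symplectic, the energy drifts only by a bounded O(dt²) amount over the trajectory, which is the stability property the Program Summary contrasts with higher-order non-symplectic methods.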
| Original language | English (US) |
| --- | --- |
| Article number | 108171 |
| Journal | Computer Physics Communications |
| Volume | 271 |
| DOIs | |
| State | Published - Feb 2022 |
Funding
This work was supported in part by the Office of Fusion Energy Sciences program “Scientific Machine Learning and Artificial Intelligence.” Much of the recent work on LAMMPS described in this paper was supported by the EXAALT and CoPA projects within the Exascale Computing Project (No. 17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science. GPU benchmarking described in Section 5.1 was performed using the Lassen machine at Lawrence Livermore National Laboratory. The load-balancing results in Table 2 used resources of the Argonne Leadership Computing Facility, a U.S. Department of Energy Office of Science User Facility operated under contract DE-AC02-06CH11357. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. We thank the many people who have contributed code and expertise to LAMMPS to help make it a broad and powerful tool. We have mentioned some of them in the text or footnotes or cited their papers, but unfortunately many other contributions could not be explicitly recognized due to space limitations. See the Authors page at https://www.lammps.org/authors.html on the LAMMPS website for a list of significant contributors.
Keywords
- LAMMPS
- Materials modeling
- Molecular dynamics
- Parallel algorithms
ASJC Scopus subject areas
- Hardware and Architecture
- General Physics and Astronomy
Datasets
- LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Thompson, A. P., Aktulga, H. M., Berger, R., Bolintineanu, D. S., Brown, W. M., Crozier, P. S., in 't Veld, P. J., Kohlmeyer, A., Moore, S. G., Nguyen, T. D., Shan, R., Stevens, M. J., Tranchida, J., Trott, C. & Plimpton, S. J. (Contributors), Mendeley Data, Nov 15 2021. DOI: 10.17632/cxbxs9btsv.1, https://data.mendeley.com/datasets/cxbxs9btsv