Benchmarking structural evolution methods for training of machine learned interatomic potentials

Michael J. Waters, James M. Rondinelli*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

When creating training data for machine-learned interatomic potentials (MLIPs), it is common to create initial structures and evolve them using molecular dynamics (MD) to sample a larger configuration space. We benchmark two alternative modalities for evolving structures, contour exploration (CE) and dimer-method (DM) searches, against MD for their ability to produce diverse and robust density functional theory training data sets for MLIPs. We also discuss in detail the generation of initial structures, either from known structures or from random ones, to help formalize structure-sourcing processes in the future. The polymorph-rich zirconium-oxygen composition space is used as a rigorous benchmark system for comparing the performance of MLIPs trained on structures generated with these structural evolution methods. Using Behler-Parrinello neural networks as our MLIP models, we find that CE and DM searches are generally superior to MD in terms of spatial descriptor diversity and statistical accuracy.
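To illustrate the idea behind contour exploration, the following is a minimal toy sketch, not the authors' implementation: on an assumed 2D quadratic potential standing in for a DFT energy surface, each step moves perpendicular to the gradient (along an energy isocontour) and then applies a first-order correction back toward the target energy, so configurations are sampled at roughly constant energy rather than by thermal dynamics.

```python
import math

# Toy 2D potential and gradient standing in for a DFT energy surface
# (an assumption for illustration only): E(x, y) = x^2 + 2*y^2.
def energy(p):
    x, y = p
    return x * x + 2 * y * y

def gradient(p):
    x, y = p
    return (2 * x, 4 * y)

def contour_step(p, step=0.05):
    """One toy contour-exploration step: displace along the unit
    tangent to the isocontour (perpendicular to the gradient), then
    apply a first-order Newton correction along the gradient to
    return to the starting energy."""
    e0 = energy(p)
    gx, gy = gradient(p)
    gnorm = math.hypot(gx, gy)
    tx, ty = -gy / gnorm, gx / gnorm          # contour tangent
    q = (p[0] + step * tx, p[1] + step * ty)  # tangent displacement
    gqx, gqy = gradient(q)
    gqn2 = gqx * gqx + gqy * gqy
    de = energy(q) - e0                       # energy error to remove
    return (q[0] - de * gqx / gqn2, q[1] - de * gqy / gqn2)

# Walk along the E = 1 isocontour, collecting sampled configurations.
p = (1.0, 0.0)
samples = [p]
for _ in range(200):
    p = contour_step(p)
    samples.append(p)
print(energy(p))  # stays close to the target energy of 1.0
```

In a real workflow each sampled structure would then be labeled with a DFT calculation before entering the MLIP training set; the per-atom analogue of this scheme is what the CE method applies to full atomic configurations.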

Original language: English (US)
Article number: 385901
Journal: Journal of Physics: Condensed Matter
Volume: 34
Issue number: 38
DOIs
State: Published - Sep 21 2022

Keywords

  • density functional theory
  • interatomic potentials
  • machine learning
  • zirconia

ASJC Scopus subject areas

  • Condensed Matter Physics
  • General Materials Science
