Understanding and Mitigating Distribution Shifts For Machine Learning Force Fields
Abstract
Machine Learning Force Fields (MLFFs) are a promising alternative to expensive ab initio quantum mechanical molecular simulations. Given the diversity of chemical spaces that are of interest and the cost of generating new data, it is important to understand how MLFFs generalize beyond their training distributions. In order to characterize and better understand distribution shifts in MLFFs, we conduct diagnostic experiments on chemical datasets, revealing common shifts that pose significant challenges, even for large foundation models trained on extensive data. Based on these observations, we hypothesize that current supervised training methods inadequately regularize MLFFs, resulting in overfitting and learning poor representations of out-of-distribution systems. We then propose two new methods as initial steps for mitigating distribution shifts for MLFFs. Our methods focus on test-time refinement strategies that incur minimal computational cost and do not use expensive ab initio reference labels. The first strategy, based on spectral graph theory, modifies the edges of test graphs to align with graph structures seen during training. Our second strategy improves representations for out-of-distribution systems at test-time by taking gradient steps using an auxiliary objective, such as a cheap physical prior. Our test-time refinement strategies significantly reduce errors on out-of-distribution systems, suggesting that MLFFs are capable of and can move towards modeling diverse chemical spaces, but are not being effectively trained to do so. Our experiments establish clear benchmarks for evaluating the generalization capabilities of the next generation of MLFFs. Our code is available at https://tkreiman.github.io/projects/mlff_distribution_shifts/.
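The abstract only sketches the two test-time refinement strategies, so rough illustrations may help. Below is a minimal, hypothetical sketch of the second strategy (gradient steps on an auxiliary objective at test time). It assumes a PyTorch-style MLFF and a cheap physical prior such as a classical force field; the names `model`, `prior_energy`, and the `batch` fields are placeholders, not the authors' actual API.

```python
# Hedged sketch: adapt a pretrained MLFF to an out-of-distribution test batch
# by matching its predictions to a cheap physical prior (no ab initio labels).
import copy
import torch

def test_time_refine(model, batch, prior_energy, steps=10, lr=1e-4):
    """Take a few gradient steps on an auxiliary objective before predicting."""
    model = copy.deepcopy(model)  # keep the original weights untouched
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    # Targets come from the cheap prior, not from expensive reference labels.
    with torch.no_grad():
        target_energy = prior_energy(batch["positions"], batch["atomic_numbers"])

    for _ in range(steps):
        optimizer.zero_grad()
        pred_energy = model(batch["positions"], batch["atomic_numbers"])
        loss = torch.nn.functional.mse_loss(pred_energy, target_energy)
        loss.backward()
        optimizer.step()

    # Final prediction with the refined weights.
    with torch.no_grad():
        return model(batch["positions"], batch["atomic_numbers"])
```

For the first strategy (spectral alignment of test graphs with training graphs), a similarly hedged sketch is shown below: it picks a neighbor cutoff whose graph Laplacian spectrum is closest to spectra seen in training. The cutoff search and histogram-based spectral distance are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: align the test graph's Laplacian spectrum with training graphs
# by adjusting which edges (neighbor cutoff) are used to build the graph.
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_eigenvalues(positions, cutoff):
    """Eigenvalues of the graph Laplacian for a radius-cutoff neighbor graph."""
    dists = cdist(positions, positions)
    adj = ((dists < cutoff) & (dists > 0)).astype(float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))

def spectral_distance(spec_a, spec_b, bins=32):
    """Simple histogram distance between two eigenvalue distributions."""
    lo, hi = 0.0, max(spec_a.max(), spec_b.max())
    ha, _ = np.histogram(spec_a, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(spec_b, bins=bins, range=(lo, hi), density=True)
    return np.abs(ha - hb).sum()

def align_cutoff(positions, train_spectrum, cutoffs=(3.0, 4.0, 5.0, 6.0)):
    """Pick the cutoff whose test-graph spectrum best matches training graphs."""
    best_cutoff, best_dist = None, np.inf
    for c in cutoffs:
        dist = spectral_distance(laplacian_eigenvalues(positions, c), train_spectrum)
        if dist < best_dist:
            best_cutoff, best_dist = c, dist
    return best_cutoff
```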
Community
We have created a new benchmark to assess distribution shifts for Machine Learning Force Fields and provided methods to mitigate those distribution shifts. Check out our project page: https://tkreiman.github.io/projects/mlff_distribution_shifts/
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Towards Fast, Specialized Machine Learning Force Fields: Distilling Foundation Models via Energy Hessians (2025)
- AtomProNet: Data flow to and from machine learning interatomic potentials in materials science (2025)
- PFD: Automatically Generating Machine Learning Force Fields from Universal Models (2025)
- Energy&Force Regression on DFT Trajectories is Not Enough for Universal Machine Learning Interatomic Potentials (2025)
- Teacher-student training improves accuracy and efficiency of machine learning inter-atomic potentials (2025)
- Enhancing Machine Learning Potentials through Transfer Learning across Chemical Elements (2025)
- To Use or Not to Use a Universal Force Field (2025)