
Machine-learned interatomic potentials (MLIPs) have become a cornerstone of modern computational chemistry, enabling simulations that approach quantum accuracy at a fraction of the cost of traditional methods such as density functional theory (DFT). However, a central challenge in designing MLIPs lies in respecting the fundamental symmetries of molecular systems, especially rotational and translational invariance, while maintaining scalability and flexibility.
In our recent work, we introduced TransIP, a framework that reformulates how symmetry is incorporated into molecular models: instead of hard-coding equivariance into the neural network architecture, it learns symmetry directly in the latent space of an atomic Transformer that treats atoms as tokens.
At the core of TransIP is a simple yet powerful idea: instead of enforcing SO(3) equivariance through specialized layers, the model is trained with a contrastive objective that aligns representations of rotated molecular configurations. A learned transformation network maps the latent embedding of a configuration toward the embedding of its rotated counterpart, encouraging the model to discover symmetry-consistent representations implicitly. This design preserves the flexibility and scalability of standard Transformers while still capturing the geometric structure of molecular systems.
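To make the idea concrete, here is a minimal numpy sketch of the alignment objective. All names (`encode`, `alignment_loss`, the linear weight matrices) are illustrative stand-ins, not TransIP's actual implementation: the linear "encoder" would be the atomic Transformer, and the linear "transformation network" would be the learned map acting in latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    # Sample a rotation matrix via Rodrigues' formula:
    # rotate by angle theta about a random unit axis.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

D = 8                                    # latent dimension (illustrative)
W_enc = rng.normal(size=(3, D)) * 0.1    # stand-in for the Transformer encoder
W_map = rng.normal(size=(D, D)) * 0.1    # stand-in for the learned latent map

def encode(positions):
    # Per-atom coordinates -> latent embeddings, shape (n_atoms, D).
    return positions @ W_enc

def alignment_loss(positions, R):
    # Alignment objective: the mapped latent of the original configuration
    # should match the latent of the rotated configuration.
    z = encode(positions)                # latent of original geometry
    z_rot = encode(positions @ R.T)      # latent of rotated geometry
    return np.mean((z @ W_map - z_rot) ** 2)

positions = rng.normal(size=(5, 3))      # toy 5-atom configuration
R = random_rotation(rng)
loss = alignment_loss(positions, R)
```

Minimizing this loss over many configurations and rotations (together with the usual energy/force losses, and in practice with a contrastive term that also pushes apart embeddings of unrelated configurations) is what drives the latent space to represent rotations consistently, without any equivariance constraint baked into the layers.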


