The Experimentally Relevant Future of Molecular Dynamics: Lessons from the Annual Danish Workshop on Advanced Molecular Simulations

I recently had the opportunity to present part of my PhD work on molecular dynamics (MD) studies of engineered T cell receptors at the Annual Danish Workshop on Advanced Molecular Simulations in Aarhus, Denmark. The meeting emphasised membrane biophysics and multi- and mesoscale simulations, with keynotes focused on connecting MD to experimental relevance.

What I mainly took from the keynotes by Weria Pezeshkian, Mohsen Sadeghi, Matteo Degiacomi, Lucie Delemotte, and Ilpo Vattulainen is that the community is shifting from exploratory, proof-of-concept simulations towards quantitative, decision-ready modelling: multiscale workflows that admit their limits, report uncertainties, and actually talk to experiments. There was also a shared way of thinking about multiscale simulations: first get the chemistry and thermodynamics right with atomistic or coarse-grained MD, be honest about kinetics at the mesoscale, and only then claim mechanisms for membranes and proteins in ways that can be checked against data.

Here are the main things I took away:

Choose collective variables that match the slow physics.

For me the clearest example of that mindset was the talk by Lucie Delemotte on enhanced sampling and collective variables. I took away three main points, which collectively stress that methodological discipline matters more than picking trendy method names [1].

The first point was that collective variables really do need to resolve the slow physics you care about. If the biology cares about a gating motion, a tilt of a transmembrane helix, a pocket opening, or a lipid-coupled separation between domains, then those are the coordinates that should appear in your reduced description, not whatever is easiest to compute or plot. The second is being explicit about what you are trying to measure: are you after conformational ensembles, free energy differences, or kinetics like transition pathways? Different questions justify different biases and different analyses, and confusing them is a good way to over-interpret a simulation. The third was a reminder that once you bias a simulation you have created a biased ensemble, not an accelerated version of the original one, and that recovering thermodynamics from it requires proper reweighting.

What struck me most, though, was the emphasis on starting from the mechanism. In an ideal world, the perfect collective variable would be the committor, i.e. the probability that a given configuration reaches the product state B before returning to the reactant state A. We never actually have the committor, but we should at least aim for variables that correlate with it rather than coordinates that are merely convenient. That also sharpens what “convergence” ought to mean: if we claim a mechanism, we should show convergence and uncertainty with the same seriousness experimentalists bring to error bars and replicates.

My own conclusion from this is that if you bias a simulation, you owe the reader a clear statement of what you are actually estimating, whether the chosen variables really resolve the slow motion you care about, and exactly how you reweighted to get back to the unbiased ensemble.
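To make the reweighting point concrete, here is a minimal toy sketch, my own illustration rather than anything from the talk: frames sampled under an added harmonic bias V(s) carry importance weights proportional to exp(+βV(s)) when you estimate unbiased averages, and the effective sample size tells you how much the reweighting cost you.

```python
import numpy as np

beta = 1.0  # 1/kT in reduced units

def bias(s):
    # weak harmonic restraint centred at s = 1.0 (a stand-in for an umbrella window)
    return 0.5 * 2.0 * (s - 1.0) ** 2

rng = np.random.default_rng(0)
# pretend these frames were sampled under the biased potential U(s) + bias(s)
s_frames = rng.normal(loc=1.0, scale=0.3, size=10_000)

# importance weights that undo the bias: w_i proportional to exp(+beta * V(s_i))
w = np.exp(beta * bias(s_frames))
w /= w.sum()

# unbiased estimate of an observable, here <s>
s_unbiased = np.sum(w * s_frames)

# effective sample size: how many "real" samples the reweighting left us
ess = 1.0 / np.sum(w**2)
print(f"<s> = {s_unbiased:.3f}, effective samples = {ess:.0f}")
```

The effective sample size is exactly the kind of honest uncertainty diagnostic the talk argued for: a reweighted average over ten thousand frames that is really supported by a handful of them deserves a wide error bar, not a confident mechanism claim.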

So, once you are clear on what you want to measure and how you define your variables, the next question is how to explore enough of the relevant conformations without brute-force sampling. From Matteo Degiacomi’s work, I took away the idea of using machine learning to speed up the physics without replacing it.

Machine learning as a proposal engine, physics as the judge.

The basic idea from Matteo Degiacomi’s talk, as I understood it, is to train an autoencoder on MD frames and then use the latent space to generate plausible new conformations, which can be fed into docking calculations when hinge motions or other large-scale rearrangements matter and a single crystal structure is too rigid [2]. In this workflow the network proposes candidates, and MD and docking act as filters, scoring and relaxing those candidates rather than letting the ML define reality on its own.

One of the key cautions is that the network is good at interpolating between states it has already seen, but does not extrapolate reliably beyond them. In practice, the usual warnings about generalisation and latent spaces imply that distances in latent space are not the same thing as distances in real space, so even if a conformation looks close in the embedding you still have to validate it with physical models and proper scoring. That is my interpretation of the general idea rather than a direct quote, but it fits the way the method is set up and evaluated. What I like about this is that it feels practical for ensemble docking and for exploring functionally relevant states that are hard to sample with regular MD: the ML makes good guesses, but the physics is still in charge of checking them.
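The propose-then-validate loop can be mimicked in a toy form with a linear “autoencoder” built from PCA; this is my simplification for illustration (the actual work uses neural networks): encode synthetic “frames” into a low-dimensional latent space, interpolate between two known states there, and decode the result as a candidate conformation that would still need physical scoring.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_coords, n_latent = 200, 30, 2

# fake "MD frames": noisy samples around two conformational states A and B
state_a = rng.normal(0.0, 0.1, size=n_coords)
state_b = state_a + 1.0
frames = np.vstack([
    state_a + rng.normal(0, 0.05, size=(n_frames // 2, n_coords)),
    state_b + rng.normal(0, 0.05, size=(n_frames // 2, n_coords)),
])

mean = frames.mean(axis=0)
# principal components via SVD give the linear encoder/decoder basis
_, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
basis = vt[:n_latent]                      # (n_latent, n_coords)

encode = lambda x: (x - mean) @ basis.T    # coordinates -> latent space
decode = lambda z: z @ basis + mean        # latent space -> coordinates

# propose a candidate halfway between the two states in latent space
z_mid = 0.5 * (encode(state_a) + encode(state_b))
candidate = decode(z_mid)

# for this linear toy the proposal also lies between the states in real space
print(np.linalg.norm(candidate - 0.5 * (state_a + state_b)))
```

In the linear case the latent midpoint decodes to roughly the coordinate-space midpoint, but with a nonlinear network that guarantee disappears, which is precisely why latent-space distances cannot be trusted on their own and each decoded candidate has to be relaxed and scored physically.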

That covers how to get a better handle on which conformations to look at. The other half of the story is whether the dynamics that connect those states are realistic, as Mohsen Sadeghi’s talk emphasised.

Accurately modelling membrane kinetics and hydrodynamics.

From the mesoscale and membrane side, part of Mohsen Sadeghi’s talk emphasised being suspicious of membrane simulations that look structurally fine but have no kinetic story. The issue, as I understood it, is that solvent-free coarse-grained membrane models often get the shapes and morphologies right, but because they have stripped out hydrodynamics they end up with completely unrealistic timescales for undulations and for rare events like budding or scission.

From my reading, I believe the proposed solution is to put hydrodynamics back in using an anisotropic Langevin description that respects the fact that membranes move differently in different directions [3][4]. In the plane, diffusion is fast and governed mainly by membrane viscosity, whereas out of plane, motion is slow and strongly damped by the surrounding fluid. If you collapse that into a single diffusion constant and then try to “time-map” your trajectory, you might fix one observable, but the rest of the kinetic hierarchy is still off. What I took from this is that for processes like budding, tubulation, or membrane scission, timing matters as much as final shape, and that if you publish a membrane mechanism without modelling out-of-plane mobility properly, you should expect hard questions about whether your timescales mean anything.
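As a minimal sketch of what that anisotropy means in practice (my illustration, not Sadeghi’s actual model), consider an overdamped Langevin particle whose in-plane and out-of-plane diffusion coefficients differ by two orders of magnitude: the mean-squared displacement along each axis follows its own diffusion constant, so no single rescaling of time can fix all of them at once.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, n_particles = 1e-3, 2000, 500

d_plane = 1.0    # in-plane diffusion coefficient (fast, membrane viscosity)
d_normal = 0.01  # out-of-plane diffusion coefficient (slow, solvent-damped)
d = np.array([d_plane, d_plane, d_normal])

# Euler-Maruyama integration of overdamped Langevin dynamics:
# dx = sqrt(2 D dt) * gaussian noise, with a different D per direction
pos = np.zeros((n_particles, 3))
for _ in range(n_steps):
    pos += np.sqrt(2.0 * d * dt) * rng.standard_normal((n_particles, 3))

msd = (pos ** 2).mean(axis=0)          # mean-squared displacement per axis
norm = msd / (2.0 * d * dt * n_steps)  # should be ~1 per axis if MSD = 2 D t
print(norm)
```

The x/z displacement ratio tracks d_plane/d_normal, so a trajectory “time-mapped” to match in-plane diffusion would still get out-of-plane relaxation, and hence undulation and budding timescales, badly wrong.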

That still leaves the practical question of how to make these kinds of mesoscale membrane models something people can actually run and reuse, instead of a pile of lab-specific code. From Weria Pezeshkian’s work I took away a push to turn mesoscale membrane simulations into reproducible, shared tools.

Open source tools for simulating biomembranes at mesoscopic scales.

Weria Pezeshkian presented tools such as freeDTS for simulating biomembranes at mesoscopic length scales [5]. It represents membranes as triangulated surfaces with proteins placed at vertices. In practice that means you can set up constant-tension patches, vesicles, tethers, tubes, and other geometries that match what people actually probe in experiments. You can also freeze the mesh and study how proteins organise on a fixed shape, which is useful if you have a structure from cryo-ET and want to know how proteins would distribute on that particular geometry instead of on an idealised sphere or tube. The other tool is TS2CG, which takes a triangulated surface from a mesoscale run and converts it into a coarse-grained membrane model, so that you can equilibrate the global shape at the mesoscale and then zoom back in to molecular detail where chemistry matters [6].
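As a toy illustration of the underlying idea (this is not freeDTS code; its actual data structures and file formats differ), a triangulated surface is just an array of vertex coordinates plus an array of triangle indices, from which geometric observables such as the total membrane area follow directly:

```python
import numpy as np

# a unit octahedron as a minimal closed "vesicle" mesh:
# 6 vertices on the coordinate axes, 8 triangular faces
vertices = np.array([
    [ 1, 0, 0], [-1, 0, 0],
    [ 0, 1, 0], [ 0, -1, 0],
    [ 0, 0, 1], [ 0, 0, -1],
], dtype=float)

triangles = np.array([
    [0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],   # upper faces
    [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5],   # lower faces
])

def mesh_area(v, tri):
    # area of each triangle from the cross product of two edge vectors
    e1 = v[tri[:, 1]] - v[tri[:, 0]]
    e2 = v[tri[:, 2]] - v[tri[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

area = mesh_area(vertices, triangles)
print(area)  # 4*sqrt(3) for a unit octahedron
```

A dynamically triangulated surface model evolves exactly this kind of mesh under moves that displace vertices and flip edges, with quantities like area and curvature entering the elastic energy, which is what makes such representations cheap enough for cell-scale membrane geometries.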

The impression I had is that many of the curvature-sensing and protein-sorting ideas that are usually discussed conceptually are now directly testable in the lab, because the codes are public, the setups mirror experimental geometries, and the backmapping step is no longer a one-off trick.

Conclusion

The meeting reinforced that different tools are complementary rather than competing. Atomistic and coarse-grained simulations are where you get the chemistry and the parameters right. Mesoscale surface models are where you understand shape, long-range coupling, and how protein binding or crowding translates into buds, necks, and tethers at cellular length scales. ML can propose conformational ensembles that are hard to see directly, but MD and docking are what filter and validate those proposals. Experiments close the loop and keep everyone honest, as was emphasised by Ilpo Vattulainen in discussions of how to compare simulations to real experiments.

The overall vibe in Aarhus, at least from my seat, was that the field is converging on a standard where simulations are quantitative, multiscale, and open enough to be used for real decisions. Presenting my work in this environment was particularly rewarding, as it aligned perfectly with the ongoing dialogue between computational chemistry and experimental relevance!

References

[1] https://arxiv.org/abs/2202.04164

[2] https://pubmed.ncbi.nlm.nih.gov/31031199/

[3] https://www.nature.com/articles/s41467-020-16424-0

[4] https://pubmed.ncbi.nlm.nih.gov/39025579/

[5] https://www.nature.com/articles/s41467-024-44819-w

[6] https://cgmartini.nl/docs/tutorials/Martini3/TS2CG/



