Category Archives: Code

Estimating uncertainty in MD observables using block averaging

When running molecular dynamics (MD) simulations, we are usually interested in measuring an ensemble average of some metric (e.g., RMSD, RMSF, radius of gyration, …) and using it to draw conclusions about the investigated system. While calculating the average value of a metric is straightforward (we can simply measure the metric in each frame and average it), calculating a statistical uncertainty is a little trickier and often forgotten. The main challenge when trying to calculate an uncertainty of MD observables is that individual frames of the simulation are not sampled independently but are time-correlated (i.e., frame N depends on frame N-1). In this blog post, I will briefly introduce block averaging, a statistical technique to estimate uncertainty in correlated data.
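To give a taste of the idea, here is a minimal sketch (illustrative only, not the code from the full post) of how a block-averaged standard error could be computed for a per-frame observable; the observable array and the choice of block count are hypothetical.

import numpy as np

def block_average_error(observable, n_blocks):
    """Estimate the standard error of the mean from block averages."""
    observable = np.asarray(observable, dtype=float)
    block_size = len(observable) // n_blocks
    # Drop frames that do not fit into a whole block, then average within each block
    trimmed = observable[: n_blocks * block_size]
    block_means = trimmed.reshape(n_blocks, block_size).mean(axis=1)
    # Standard error of the mean across (approximately independent) blocks
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

In practice, the block size is increased until the error estimate plateaus; that plateau value is taken as the uncertainty of the ensemble average.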

Continue reading

Memory Efficient Clustering of Large Protein Trajectory Ensembles

Molecular dynamics simulations have grown increasingly ambitious, with researchers routinely generating trajectories containing hundreds of thousands or even millions of frames. While this wealth of data offers unprecedented insights into protein dynamics, it also presents a formidable computational challenge: how do you extract meaningful conformational clusters from datasets that can easily exceed available system memory?

Traditional approaches to trajectory clustering often stumble when faced with large ensembles. Loading all pairwise distances into memory simultaneously can quickly consume tens or hundreds of gigabytes of RAM, while conventional PCA implementations require the entire dataset to fit in memory before decomposition can begin. For many researchers, this means either downsampling their precious simulation data or investing in expensive high-memory computing resources.

The solution lies in recognizing that we don’t actually need to hold all our data in memory simultaneously. By leveraging incremental algorithms and smart memory management, we can perform sophisticated dimensionality reduction and clustering on arbitrarily large trajectory datasets using modest computational resources. Let’s explore how three key strategies—incremental PCA, mini-batch clustering, and intelligent memory management—can transform your approach to analyzing large protein ensembles.
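As a rough illustration of the incremental strategy (a minimal sketch with made-up inputs, not the post's actual pipeline), the snippet below fits an IncrementalPCA chunk by chunk and then clusters the projections with MiniBatchKMeans; the iter_chunks helper, the placeholder frame array and all dimensions are hypothetical stand-ins for trajectory data streamed from disk.

import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.cluster import MiniBatchKMeans

def iter_chunks(frames, chunk_size):
    """Yield successive (chunk_size, n_features) blocks of a frame-feature array.
    In a real workflow the chunks would be read from disk rather than sliced
    from an in-memory array."""
    for start in range(0, len(frames), chunk_size):
        yield frames[start:start + chunk_size]

rng = np.random.default_rng(0)
frames = rng.normal(size=(20_000, 300))  # placeholder for flattened coordinates

# First pass: learn the principal components chunk by chunk
ipca = IncrementalPCA(n_components=10)
for chunk in iter_chunks(frames, 5000):
    ipca.partial_fit(chunk)

# Second pass: project each chunk and update the clustering incrementally
kmeans = MiniBatchKMeans(n_clusters=5, batch_size=1024, random_state=0)
for chunk in iter_chunks(frames, 5000):
    kmeans.partial_fit(ipca.transform(chunk))

# Final pass: assign every frame to a cluster without holding everything in memory
labels = np.concatenate([kmeans.predict(ipca.transform(chunk))
                         for chunk in iter_chunks(frames, 5000)])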

Continue reading

Demystifying Git and Merge Conflicts 

Git can be an incredibly effective coding tool, but it can also be an incredibly frustrating one. It has a steep learning curve, but you’ll be a lot better off understanding how it works rather than copying and pasting commands from Stack Overflow or ChatGPT. I’ve been there, and things can go very, very wrong.

What is Git?

Git is a version control system that tracks changes to the files within a repository and lets you maintain different versions of that codebase. It should not be confused with GitHub, which is a Git server: a remote location that hosts Git repositories. GitHub provides a user-friendly front end for managing changes and tracking issues. There are plenty of other Git servers, such as Bitbucket and GitLab.

Tips for using Git

Git is massive, and there are plenty of tutorials and guides out there to help you learn it. This is far from a comprehensive guide; instead, these are the commands I use on a regular basis. I'll go over some of the basics, and then some of the more niche ones that I've found particularly useful.

Continue reading

Cheating at Spelling Bee using the command line

The New York Times Spelling Bee is a free online word game in which players must construct as many words as possible from the letters provided in the Bee's grid, always including the middle letter. Bonus points are awarded for using all the letters and creating a pangram.

However, this is the kind of thing that computers are very good at. If you've become frustrated trying to decipher the abstruse ways of the Bee, let the command line help.

tl;dr:

grep -iP "^[adlokec]{6,}$" /usr/share/dict/words |grep a | awk '{ print length, $0 }' |sort -n |cut -f2 -d" "
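In brief: grep -iP does a case-insensitive Perl-regex match; ^[adlokec]{6,}$ keeps dictionary words of six or more letters built only from the Bee's letters; the second grep a enforces the centre letter (taken here to be "a"); and the awk, sort -n and cut stages order the hits by length before stripping the length column again.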
Continue reading

Slurm and Snakemake: a match made in HPC heaven

Snakemake is an incredibly useful workflow management tool that allows you to run pipelines in an automated way. Simply put, it lets you define inputs and outputs for different steps that depend on each other; Snakemake will then run each job only once its required inputs have been generated by previous steps. A previous blog post by Tobias is a good introduction to it – https://www.blopig.com/blog/2021/12/snakemake-better-workflows-with-your-code/

However, pipelines are often computationally intensive, and we would like to run them on our HPC. Snakemake allows us to do this on Slurm using an extension package called snakemake-executor-plugin-slurm.
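To give a flavour of what this looks like, here is a minimal, hypothetical Snakefile sketch (the rule names, file paths and shell commands are placeholders, not the pipeline from the post) with two dependent rules and an illustrative Slurm invocation:

rule all:
    input:
        "results/analysis.csv"

rule simulate:
    output:
        "results/trajectory.xtc"
    resources:
        mem_mb=8000,
        runtime=120
    shell:
        "run_md --out {output}"   # hypothetical command

rule analyse:
    input:
        "results/trajectory.xtc"
    output:
        "results/analysis.csv"
    shell:
        "analyse_traj {input} > {output}"   # hypothetical command

# With the plugin installed (pip install snakemake-executor-plugin-slurm),
# the workflow can be dispatched to Slurm with something like:
#   snakemake --executor slurm --jobs 10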

Continue reading

AI generated linkers™: a tutorial

In molecular biology, cutting and tweaking a protein construct is an essential but often under-appreciated operation. Some proteins have unwanted extra bits. Some proteins require a partner to be in the correct state, and would ideally be expressed as a fusion protein. Some proteins need parts replacing. Some proteins disfavour a desired state. Half a decade ago, toolkits existed that attempted to tackle these problems; now, with the advent of de novo protein generation, new, powerful, precise and far less painful methods are here. Herein I will discuss how to generate de novo inserts and more with RFdiffusion and other tools, in order to quickly launch a project into the right orbit.
Furthermore, even when newer methods come out, these design principles will still apply, so ignore the name of the de novo tool used.

Continue reading

NVIDIA Reimagines CUDA for Python Developers

According to GitHub’s Open Source Survey, Python officially became the world’s most popular programming language in 2024, ultimately surpassing JavaScript. Due to its exceptional popularity, NVIDIA announced Python support for its CUDA toolkit at last year’s GTC conference, marking a major leap in the accessibility of GPU computing. With the latest update (https://nvidia.github.io/cuda-python/latest/), developers can for the first time write Python code that runs directly on NVIDIA GPUs without the need for intermediate C or C++ code.

Historically tied to C and C++, CUDA has found its way into Python code through third-party wrappers and libraries. Now, the arrival of native support means a smoother, more intuitive experience.

This paradigm shift opens the door for millions of Python programmers – including our scientific community – to build powerful AI and scientific tools without having to switch languages or learn legacy syntax.

Continue reading

Debugging code for science: Fantastic Bugs and Where to Find Them.

The simulation results make no sense… My proteins are moving through walls and this dihedral angle is negative; my neural network won’t learn anything; I’ve tried for days to install this software and I still get an error.

Feel familiar? Welcome to scientific programming. Bugs aren’t just annoying roadblocks – they’re mysterious phenomena that make you question your understanding of reality itself. If you’ve ever found yourself debugging scientific code, you know it’s a different beast compared to traditional software engineering. In the commercial software world, a bug might mean a button doesn’t work or data isn’t saved correctly. In scientific computing, a bug might mean your climate model predicts an ice age next Tuesday, or your protein folding algorithm creates molecular structures that couldn’t possibly exist in our universe (cough).

Continue reading

GUI Science

There comes a point in every software-inclined, lab-based grad student’s life where they think: now is the time to write a GUI for my software, to make it fast, easy to use, generalised so that others will use it too, the new paradigm for how to do research, etc. etc.

Of course, such delusions of grandeur are rarely indulged, but when executed they certainly can produce useful outputs. A well-designed (or even just designed) GUI can improve an experimentalist’s life profoundly by simplifying, automating and standardising data acquisition, and by reducing the time to see results, allowing for shorter iteration cycles (known in engineering as the “Design, Build, Test, Learn” cycle; in software it’s called “coding”).

Having written a few GUIs in my time, I thought it might be helpful to share some experience I have, though it is by no means broad.

Continue reading

Combining Multiple Comparisons Similarity plots for statistical tests

Following on from my previous blopig post, Garrett gave the very helpful suggestion of combining Multiple Comparisons Similarity (MCSim) plots to reduce information redundancy. For example, this is an MCSim plot from my previous blog post:

This plot shows effect sizes from a statistical test (specifically Tukey HSD) between mean absolute error (MAE) scores for different molecular featurization methods on a benchmark dataset. Red shows that the method on the y-axis has a greater average MAE score than the method on the x-axis; blue shows the inverse. There is redundancy in this plot, as the same information is displayed in both the upper and lower triangles. Instead, we could plot both the effect size and the p-values from a test on the same MCSim.
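One way to do this (a minimal matplotlib sketch with placeholder data and a made-up combined_mcsim helper, not the code behind the original figure) is to mask the two triangles of a single heatmap and colour them with separate scales:

import numpy as np
import matplotlib.pyplot as plt

def combined_mcsim(effect_sizes, p_values, labels):
    """Effect sizes in the lower triangle, p-values in the upper triangle."""
    n = len(labels)
    lower = np.tril(np.ones((n, n), dtype=bool), k=-1)
    upper = np.triu(np.ones((n, n), dtype=bool), k=1)

    fig, ax = plt.subplots()
    vmax = np.nanmax(np.abs(effect_sizes))
    # Lower triangle: signed effect sizes on a diverging colourmap centred at zero
    im_es = ax.imshow(np.where(lower, effect_sizes, np.nan),
                      cmap="coolwarm", vmin=-vmax, vmax=vmax)
    # Upper triangle: p-values on a sequential colourmap
    im_p = ax.imshow(np.where(upper, p_values, np.nan),
                     cmap="Greys_r", vmin=0, vmax=1)

    ax.set_xticks(range(n))
    ax.set_xticklabels(labels, rotation=45, ha="right")
    ax.set_yticks(range(n))
    ax.set_yticklabels(labels)
    fig.colorbar(im_es, ax=ax, label="effect size (MAE difference)")
    fig.colorbar(im_p, ax=ax, label="p-value")
    return fig, ax

# Placeholder matrices for four hypothetical featurization methods
rng = np.random.default_rng(1)
diff = rng.normal(size=(4, 4))
combined_mcsim(diff - diff.T, rng.uniform(size=(4, 4)), ["m1", "m2", "m3", "m4"])
plt.show()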

Continue reading