As someone who works with T cell antigen receptor (TCR) and peptide-major histocompatibility complex (pMHC) data, I have found several Python packages to be very useful for eliminating tedious steps in the data cleaning and feature engineering stages.
Continue reading
Design your very own drug: An introduction to structure-based small molecule drug design
Are you curious about how scientists design small molecules to treat disease using computational tools, but the words RDKit, docking, and QED mean nothing to you? Look no further: these interactive tutorials teach the fundamentals of computational small molecule drug design, introducing the key tools, concepts, and workflows. From generating compounds to evaluating their drug-likeness and binding potential, by the end you’ll be ready to explore how computational methods can lead to the discovery of your very own (virtual) drug candidates to cure Zika!
Find the materials here: https://github.com/oxpig/dtc-struc-bio-smolecules/tree/main.
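If you want a taste before diving in, here is the flavour of thing the tutorials cover, sketched with RDKit's QED module (the molecule below is just an illustrative example, not one from the course materials):

from rdkit import Chem
from rdkit.Chem import QED

# Parse a SMILES string into an RDKit molecule (caffeine, as an example)
mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")

# QED scores drug-likeness on a scale from 0 (poor) to 1 (drug-like)
print(f"QED: {QED.qed(mol):.2f}")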
Continue reading
A guide to fixing broken AMBER MD trajectory files and visualisations
You’ve just finished a week-long molecular dynamics simulation. You’re excited to see what happened to your protein complex, so you load up the trajectory in VMD and… your protein looks like it’s been through a blender. Pieces are scattered across the screen, water molecules are everywhere, and half your complex seems to have teleported to the other side of the simulation box. This chaos is caused by periodic boundary conditions (PBC).
PBC
PBC is a computational trick that simulates bulk behaviour by treating your simulation box like a repeating tile: when a molecule exits one side, it immediately reappears on the opposite side. This is perfect for the physics, as your protein experiences realistic bulk solvent behaviour, but it wreaks havoc on visualisation.
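A common fix is to re-image the trajectory so that every molecule is mapped back into a single periodic cell around your solute. Here is a minimal sketch with pytraj, AMBER's Python front end to cpptraj (the file names are placeholders, and this is my own illustration rather than the post's exact recipe):

import pytraj as pt

# Load the trajectory together with its topology
traj = pt.load("production.nc", top="complex.prmtop")

# autoimage re-centres the solute and wraps all molecules back into
# one periodic image, undoing the "blender" effect
traj.autoimage()

# Write out the re-imaged trajectory for visualisation in VMD
pt.write_traj("production_imaged.nc", traj, overwrite=True)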

Installing open source PyMOL with pip
TL;DR: pip install pymol-open-source
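Once it's installed, you can script PyMOL without ever opening the GUI. A quick sketch using the pymol2 session interface that ships with open-source PyMOL (the PDB entry is just an example):

import pymol2

# Start a headless PyMOL session; it is cleaned up automatically on exit
with pymol2.PyMOL() as session:
    session.cmd.fetch("1ubq")           # download ubiquitin from the PDB
    session.cmd.show_as("cartoon")      # switch to a cartoon representation
    session.cmd.png("ubiquitin.png", width=800, height=600, ray=1)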
AI generated linkers™: a tutorial
In molecular biology, cutting and tweaking a protein construct is an often under-appreciated but essential operation. Some proteins have unwanted extra bits. Some proteins may require a partner to be in the correct state, which would ideally be expressed as a fusion protein. Some proteins need parts replacing. Some proteins disfavour a desired state. Half a decade ago, toolkits existed to attempt to tackle these problems; now, with the advent of de novo protein generation, new, powerful, precise and far less painful methods are here. Therefore, herein I will discuss how to generate de novo inserts and more with RFdiffusion and other tools, in order to quickly launch a project into the right orbit.
Furthermore, even when newer methods come out, these design principles will still apply, so don't get hung up on the name of the de novo tool used.
Combining Multiple Comparisons Similarity plots for statistical tests
Following on from my previous blopig post, Garrett gave the very helpful suggestion of combining Multiple Comparisons Similarity (MCSim) plots to reduce information redundancy. For example, this is an MCSim plot from my previous blog post:

This plot shows effect sizes from a statistical test (specifically Tukey HSD) between mean absolute error (MAE) scores for different molecular featurization methods on a benchmark dataset. Red shows that the method on the y-axis has a greater average MAE score than the method on the x-axis; blue shows the inverse. There is redundancy in this plot, as the same information is displayed in both the upper and lower triangles. Instead, we could plot both the effect size and the p-values from a test on the same MCSim.
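If you want to generate the numbers behind such a plot yourself, here is a minimal sketch with statsmodels; the featurisation names and MAE scores below are made up purely for illustration:

import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up MAE scores for three featurisation methods over five CV folds
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mae": np.concatenate([
        rng.normal(0.80, 0.05, 5),   # "morgan"
        rng.normal(0.75, 0.05, 5),   # "descriptors"
        rng.normal(0.90, 0.05, 5),   # "gnn"
    ]),
    "method": ["morgan"] * 5 + ["descriptors"] * 5 + ["gnn"] * 5,
})

# Tukey HSD yields pairwise mean differences (the effect sizes) and
# p-values: exactly the two quantities an MCSim plot can display
result = pairwise_tukeyhsd(df["mae"], df["method"])
print(result.summary())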
The Good (and limitations) of using a Local CoPilot with Ollama
Interactive code editors have been around for a while now, and tools like GitHub Copilot have woven their way into most development pipelines, and for good reason. They’re easy to use, exceptionally helpful (at certain tasks), and have undeniably made life as a developer smoother. Recently, I decided to switch away from relying on GitHub Copilot in favour of a local model for a few key reasons. While I don’t use it all the time, it has proven to be a useful option in many situations. In this blog post, I’ll go over why I made the switch, how I set it up, and share a bit about my experience so far.
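To give a flavour of the setup: once Ollama is serving a model locally, anything that can speak HTTP can use it, editors and scripts alike. A minimal sketch, assuming Ollama is running on its default port and a code model such as codellama has been pulled:

import requests

# Ollama exposes a simple HTTP API on localhost by default
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",   # any model you have pulled locally
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,        # return one JSON object rather than a stream
    },
)
print(response.json()["response"])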
Continue reading
Visualising and validating differences between machine learning models on small benchmark datasets
Introduction
An epidemic is sweeping through cheminformatics (and machine learning) research: ugly results tables. These tables are typically bloated with metrics (such as regression and classification metrics next to each other), vastly differing tasks, erratic bold text, and many models. As a consequence, results become difficult to analyse and interpret. Additionally, it is rare to see convincing evidence, such as statistical tests, for whether one model is ‘better’ than another (something Pat Walters has previously discussed). Tables are a practical way to present results and are appropriate in many cases; however, this practicality should not come at the cost of clarity.
The terror of ugly tables extends to benchmark leaderboards, such as Therapeutic Data Commons (TDC). These leaderboard tables do not show:
- whether differences in metrics between methods are statistically significant,
- whether methods use ensembles or single models,
- whether methods use classical (such as Morgan fingerprints) or learned (such as Graph Neural Networks) representations,
- whether methods are pre-trained or not,
- whether pre-trained models are supervised, self-supervised, or both,
- the data and tasks that pre-trained models are pre-trained on.
This lack of context makes meaningful comparisons between approaches challenging, obscuring whether performance discrepancies are due to variance, ensembling, overfitting, exposure to more data, or novelties in model architecture and molecular featurisation. Confirming the statistical significance of performance differences (under consistent experimental conditions!) is crucial in constructing a more lucid picture of machine learning in drug discovery. Using figures to share results in a clear, non-tabular format would also help.
Statistical validation is particularly relevant in domains with small datasets, such as drug discovery, as the small number of test samples leads to high variance in performance between different splits. Recent work by Ash et al. (2024) sought to alleviate the lack of statistical validation in cheminformatics by sharing a helpful set of guidelines for researchers. Here, we explore implementing some of the methods they suggest (plus some others) in Python.
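As a taste of the sort of method meant here (my own illustration, with made-up scores): a paired, non-parametric test comparing two models over the same set of splits.

import numpy as np
from scipy.stats import wilcoxon

# Made-up MAE scores for two models evaluated on the same ten splits
model_a = np.array([0.81, 0.79, 0.84, 0.80, 0.78, 0.83, 0.82, 0.80, 0.79, 0.85])
model_b = np.array([0.76, 0.75, 0.80, 0.77, 0.74, 0.79, 0.78, 0.75, 0.76, 0.81])

# The paired Wilcoxon signed-rank test asks whether the per-split
# differences are consistently in one direction
stat, p_value = wilcoxon(model_a, model_b)
print(f"Wilcoxon statistic: {stat:.1f}, p-value: {p_value:.4f}")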
Continue reading
Controlling PyMol from afar
Do you keep downloading .pdb and .sdf files and loading them into PyMol repeatedly?
If yes, then PyMol remote might be just for you. With PyMol remote, you can control a PyMol session running on your laptop from any other machine. For example, from a Jupyter Notebook running on your HPC cluster.
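I'll leave the package's own setup to the full post, but the general idea is an old one: PyMOL ships a built-in XML-RPC server (start it with pymol -R; it listens on port 9123 by default) that you can drive from any other process. A rough sketch of that built-in route (the hostname is a placeholder):

import xmlrpc.client

# Connect to a PyMOL session started with `pymol -R`
# (swap localhost for your laptop's address when calling from a cluster)
pymol = xmlrpc.client.ServerProxy("http://localhost:9123")

# Send ordinary PyMOL commands as strings
pymol.do("fetch 1ubq")
pymol.do("show_as cartoon")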
Continue reading
Building CLI Applications with Typer
Remember the last time you had to build a command-line tool? If you’re like me, you probably started with argparse or click, wrote boilerplate code, and still ended up with something that felt clunky. That’s where typer comes in – it’s a game-changer that lets you build CLI apps with minimal code. Although there are several other options, typer stands out because it leverages Python’s type hints to do the heavy lifting. No more manual argument parsing! The following snippet shows how to use typer in its simplest form:
import typer

app = typer.Typer()

@app.command()
def hello(name: str):
    typer.echo(f"Hello {name}!")

if __name__ == "__main__":
    app()
And you will be able to execute it with just:
$ python hello.py Pedro
Hello Pedro!
In this simple example we only defined a positional argument, but optional arguments are as easy as setting default values in the function signature.
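For instance (my own extension of the snippet above, not from the original post), adding a boolean parameter with a default turns it into a command-line flag:

import typer

app = typer.Typer()

@app.command()
def hello(name: str, formal: bool = False):
    # `name` stays positional; `formal` becomes a --formal/--no-formal
    # option because it has a default value
    greeting = "Good day" if formal else "Hello"
    typer.echo(f"{greeting} {name}!")

if __name__ == "__main__":
    app()

$ python hello.py Pedro --formal
Good day Pedro!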
Continue reading
