Monthly Archives: November 2023

The Antibody Dictionary

Just as you can hit a language barrier when moving to a new country, you can hit one when moving to a new research field. This dictionary will guide you through the complex world of immunoinformatics, with a focus on antibodies. Whether your main research is in this field, you want to apply your machine learning model to antibodies, or you just want to understand the research performed in OPIG, this dictionary will get you started.

The Antibody Dictionary:

Affinity maturation: The process by which naive antibodies are optimised into memory antibodies, so that the antibody binds its specific antigen more tightly. 

Antibody: (immunoglobulin) a Y-shaped molecule important in the adaptive immune system. A canonical antibody consists of two identical heavy chains and two identical smaller light chains. 

Continue reading

Let your library design blosum

During the lead optimisation stage of the drug discovery pipeline, we might wish to make mutations to an initially identified binding antibody to improve properties such as developability, immunogenicity, and affinity.

There are many ways we could go about suggesting these mutations, including Large Language Models (e.g. ESM and AbLang) or Inverse Folding methods (e.g. ProteinMPNN and AntiFold). However, some of our recent work (soon to be pre-printed) has shown that classical, non-Machine Learning approaches, such as BLOSUM, could also be worth considering at this stage.
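As a flavour of what a BLOSUM-based suggestion could look like, here is a minimal sketch (not the pre-printed method itself): rank candidate point mutations at a single position by their BLOSUM62 score against the wild-type residue, using Biopython's substitution matrices. The position and wild-type residue below are hypothetical.

```python
# Minimal sketch: rank candidate substitutions at one position by BLOSUM62 score.
# Requires Biopython; the wild-type residue is a made-up example.
from Bio.Align import substitution_matrices

blosum62 = substitution_matrices.load("BLOSUM62")
amino_acids = "ACDEFGHIKLMNPQRSTVWY"

wild_type = "S"  # hypothetical wild-type residue at the position of interest
scores = {aa: blosum62[wild_type, aa] for aa in amino_acids if aa != wild_type}

# Higher BLOSUM scores correspond to substitutions seen more often in conserved
# alignments, so they are "safer" suggestions for a mutagenesis library.
for aa, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{wild_type} -> {aa}: {score:+.0f}")
```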

Continue reading

How to replace bike ball bearings when your steering sounds crunchy

Over the last few months my bicycle’s steering axle started freezing up, to the point where the first thing I did before getting on my bike in the morning was jerk the handlebars from side to side aggressively to loosen it up. It made atrocious guttural sounds and bangs when I did, and navigating Oxford by bike was becoming more treacherous by the day as I swerved from left to right, trying to wrestle my front wheel’s fork in the right direction. It was time to undertake some DIY…

Continue reading

The workings of Fragmenstein’s RDKit neighbour-aware minimisation

Fragmenstein is a Python module that combines hits, or positions a derivative following given templates, while being very strict in obeying them. It does this by creating a “monster”, a compound that has the atomic positions of the templates, which is then reanimated by very strict energy minimisation. This happens in two steps: first in RDKit, with an extracted frozen neighbourhood, and then in PyRosetta, within a flexible protein. The mapping for both combinations and placements is complicated, but here I will focus on one particular step, the minimisation, primarily in answer to an enquiry: namely, how does the RDKit minimisation work?
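Fragmenstein’s own code is considerably more involved, but as a hedged illustration of the general idea of neighbour-aware minimisation in RDKit, one can build a force field over a molecule with 3D coordinates, pin a chosen subset of atoms in place (standing in for the frozen extracted neighbourhood), and minimise the rest. The molecule and frozen atom indices below are made up for illustration.

```python
# Illustrative only: freeze some atoms in an RDKit force field and minimise the rest.
from rdkit import Chem
from rdkit.Chem import AllChem

# A small stand-in molecule with generated 3D coordinates.
mol = Chem.AddHs(Chem.MolFromSmiles("c1ccccc1CC(=O)N"))
AllChem.EmbedMolecule(mol, randomSeed=42)

# Set up a UFF force field and pin a hypothetical subset of atoms,
# mimicking a frozen neighbourhood.
ff = AllChem.UFFGetMoleculeForceField(mol)
for idx in (0, 1, 2):        # hypothetical atoms to keep fixed
    ff.AddFixedPoint(idx)

ff.Initialize()
ff.Minimize(maxIts=2000)
print("final UFF energy:", ff.CalcEnergy())
```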

Continue reading

Converting pandas DataFrames into Publication-Ready Tables

Analysing, comparing and communicating the predictive performance of machine learning models is a crucial component of any empirical research effort. Pandas, a staple in the Python data analysis stack, not only helps with the data wrangling itself, but also provides efficient solutions for data presentation. Two of its lesser-known yet incredibly useful features are df.to_markdown() and df.to_latex(), which allow for a seamless transition from DataFrames to publication-ready tables. Here’s how you can use them!
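As a quick taste of what the post covers, here is a small sketch with made-up performance numbers. Note that df.to_markdown() relies on the tabulate package being installed.

```python
import pandas as pd

# Hypothetical model-performance results for illustration only.
df = pd.DataFrame(
    {"Model": ["Model A", "Model B"], "AUC": [0.91, 0.88], "MCC": [0.55, 0.49]}
)

# GitHub-flavoured markdown table (requires the 'tabulate' package).
print(df.to_markdown(index=False))

# LaTeX tabular output, ready to paste into a manuscript.
print(df.to_latex(index=False, float_format="%.2f"))
```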

Continue reading

Demystifying the thermodynamics of ligand binding

Chemoinformatics uses a curious jumble of terms from thermodynamics, wet-lab techniques and statistics, a mixture that is arguably at its most jarring in machine learning. In some datasets one often sees pIC50, pEC50, pKi and pKD; in discussion sections a medchemist may talk casually of entropy; while in the world of molecular mechanics everything is internal energy. Herein I hope to address some common misconceptions and unify these concepts.
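As a reference point for how these quantities connect (a standard textbook relation, not necessarily how the post frames it):

```latex
% K_d is taken relative to the 1 M standard state.
\Delta G^{\circ}_{\text{bind}} = -RT \ln K_a = RT \ln K_d,
\qquad \mathrm{p}K_d = -\log_{10} K_d
\;\;\Rightarrow\;\;
\Delta G^{\circ}_{\text{bind}} = -\ln(10)\, RT \,\mathrm{p}K_d
\approx -1.36\ \mathrm{kcal\,mol^{-1}} \times \mathrm{p}K_d \quad (T \approx 298\,\mathrm{K})
```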

Continue reading

What the heck are TPUs?

I recently became curious about TPUs, specialised hardware for training Machine- and Deep-Learning models, where TPU stands for Tensor Processing Unit. This fancy chip can provide very large gains for anyone aiming to massively parallelise AI tasks such as training, fine-tuning, and inference.

In this blog post, I will touch on what a TPU is, why it could be useful for AI applications compared to GPUs, and briefly discuss the associated opportunity costs.
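For a sense of what targeting a TPU from Python can look like, here is a minimal JAX sketch. It assumes a TPU runtime (e.g. a Cloud TPU VM) is available; on an ordinary machine JAX simply falls back to CPU, and the same code runs unchanged.

```python
import jax
import jax.numpy as jnp

# List the accelerators JAX can see; on a TPU VM this reports TPU devices,
# otherwise it falls back to CPU (or GPU).
print(jax.devices())

# A toy jitted matrix multiply; XLA compiles it for whichever backend is present.
@jax.jit
def matmul(a, b):
    return jnp.dot(a, b)

key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(key_a, (1024, 1024))
b = jax.random.normal(key_b, (1024, 1024))
print(matmul(a, b).shape)
```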

What’s a TPU?

Continue reading