Author Archives: Lucy Vost

How to make ML your entire personality

In our silly little day-to-day lives over in stats, we forget how accustomed we all are to AI being used in many of the things we do. Going home for the holidays, though, I was reminded that the majority of people (at least, the majority of my family members) don’t actually make most of their choices according to what a random, free AI tool suggests for them. Unfortunately, I do! Here are some of my favourite non-ChatGPT free tools I use to make sure everyone knows that working in ML is, in fact, my entire personality.

Continue reading

Conference feedback: AI in Chemistry 2023

Last month, a drift of OPIGlets attended the Royal Society of Chemistry’s annual AI in Chemistry conference. Co-organised by the group’s very own Garrett Morris and hosted at Churchill College, Cambridge, during a heatwave (!), the two-day conference covered applications of artificial intelligence and deep learning methods in chemistry. The programme included a mixture of keynote talks, panel discussions, oral presentations, flash presentations, posters and opportunities for open debate, networking and discussion amongst participants from academia and industry alike.

Continue reading

BRICS Decomposition in 3D

Inspired by this blog post by the lovely Kate, I’ve been doing some BRICS decomposing of molecules myself. Like the structure-based goblin that I am, though, I’ve been applying it to 3D structures of molecules, rather than using the SMILES-based approach she detailed. I thought it might be helpful to share the code snippets I’ve been using for this: unsurprisingly, it can also be done with RDKit!

I’ll use the same example as in the original blog post, propranolol.

1DY4: CBH1 IN COMPLEX WITH S-PROPRANOLOL

First, I import RDKit and load the ligand in question:
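A minimal sketch of what that first step (and the decomposition itself) might look like, assuming the docked ligand has been saved to its own Mol/SDF file; the filename below is just a placeholder. RDKit’s BRICS module can break the BRICS bonds in place, which leaves the original atoms with their 3D coordinates:

from rdkit import Chem
from rdkit.Chem import BRICS

# load the 3D ligand (placeholder filename; point this at your own structure file)
ligand = Chem.MolFromMolFile('propranolol_1dy4.mol')

# break the molecule at its BRICS bonds; dummy atoms mark the cut points
# and the heavy atoms keep their original 3D coordinates
broken = BRICS.BreakBRICSBonds(ligand)

# split the result into individual fragment molecules, each still in 3D
fragments = Chem.GetMolFrags(broken, asMols=True)

for frag in fragments:
    print(Chem.MolToSmiles(frag))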

Continue reading

SnackGPT

One of the most treasured group meeting institutions in OPIG is snackage. Each week, one group member is asked to bring in treats for our sometimes lengthy 16:30 meeting. However, as would be the case for any group of 25 or so people, there are various dietary requirements and preferences that can make this snack-quisition a somewhat tricky process: from gluten allergies to a mild dislike of cucumber, they vary in importance, but nevertheless are all to be taken into account when the pre-meeting supermarket sweep is carried out.

So, the question on every researcher’s mind: can ChatGPT help me? Turns out, yes: I gave it a list of the group’s dietary requirements and preferences, and it gave me a handy little list of snacks I might be able to bring along…

When pushed further, it even provided me with an itemised list of the ingredients required. During this process it seemed to forget a couple of the allergies I’d mentioned earlier, but they were easy to spot; almost more worryingly, it suggested I get a beetroot and mint hummus (!) for the veggie platter:

I don’t know if I’ll actually be using many of these suggestions—judging by the chats I’ve had about the above list, I think bringing in a platter of veggies as the group meeting snack may get me physically removed from the premises—but ChatGPT has once again proven itself a handy tool for saving a few minutes of thinking!

Some ponderings on generalisability

Now that machine learning has managed to get its proverbial fingers into just about every pie, people have started to worry about the generalisability of methods used. There are a few reasons for these concerns, but a prominent one is that the pressure and publication biases that have led to reproducibility issues in the past are also very present in ML-based science.

The Center for Statistics and Machine Learning at Princeton University hosted a workshop last July highlighting the scale of this problem. Alongside this, they released a running list of papers documenting reproducibility issues in ML-based science. Included on this list are 20 papers reporting errors in 17 distinct fields, collectively affecting a whopping 329 papers.

Continue reading

Viewing fragment elaborations in RDKit

As a reasonably new RDKit user, I was relieved to find that its built-in functionality for generating basic images from molecules is quite easy to use. However, over time I have picked up some additional tricks to make the images it generates slightly more pleasing to the eye!

The first of these (which I definitely stole from another blog post at some point…) is to ask it to produce SVG images rather than PNG:

# ensure the molecule visualisations use SVG rather than PNG format
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG = True

Now for something slightly more interesting: as a fragment elaborator, I often need to look at a long list of elaborations that have been made to a starting fragment. As these have usually been docked, they don’t look particularly nice when loaded straight into RDKit and drawn:

# imports needed to read molecules and draw grids of them
from rdkit import Chem
from rdkit.Chem import Draw

# load several mols from a single SDF file using SDMolSupplier and add them to a list
# (skipping any that fail to parse)
elabs = [mol for mol in Chem.SDMolSupplier('frag2/elabsTestNoRefine_Docked_0.sdf') if mol is not None]

# get a list of ligand efficiencies so these can be displayed alongside the molecules
LEs = [float(mol.GetProp('Gold.PLP.Fitness')) / mol.GetNumHeavyAtoms() for mol in elabs]

Draw.MolsToGridImage(elabs, legends=[str(LE) for LE in LEs])
Fig. 1: Images generated without doing any tinkering

Two quick changes that will immediately make this image more useful are aligning the elaborations by a supplied substructure (here I supplied the original fragment so that it’s always in the same place) and calculating the 2D coordinates of the molecules so we don’t see the twisty business happening in the bottom right of Fig. 1:
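A rough sketch of how those two changes might look, using Compute2DCoords and GenerateDepictionMatching2DStructure; these aren’t necessarily the exact calls from the full post, and the fragment SMILES here is just a placeholder rather than the real starting fragment:

from rdkit.Chem import AllChem

# placeholder SMILES for the original fragment used as the alignment template
frag = Chem.MolFromSmiles('c1ccc2ccccc2c1')
AllChem.Compute2DCoords(frag)

for mol in elabs:
    # flatten the docked 3D pose into tidy 2D coordinates
    AllChem.Compute2DCoords(mol)
    try:
        # align each elaboration's depiction to the fragment template
        AllChem.GenerateDepictionMatching2DStructure(mol, frag)
    except ValueError:
        # elaborations that no longer contain the exact substructure are left as-is
        pass

Draw.MolsToGridImage(elabs, legends=[str(LE) for LE in LEs])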

Continue reading

NeurIPS 2021 Conference Feedback

Held annually in December, the Neural Information Processing Systems meetings aim to encourage researchers using machine learning techniques in their work – whether it be in economics, physics, or any number of fields – to get together to discuss their findings, hear from world-leading experts, and in many years past, ski. The virtual nature of this year’s conference had an enormously negative impact on attendees’ skiing experiences, but it nevertheless was a pleasure to attend – the machine learning in structural biology workshop, in particular, provided a useful overview of the hottest topics in the field, and of the methods that people are using to tackle them. 

This year’s NeurIPS highlighted the growing interest in applying the newest Natural Language Processing (NLP) algorithms to proteins. This includes antibodies, as seen in two presentations at the MLSB workshop which focused on using these algorithms for the discovery and design of antibodies. Ruffolo et al. presented their version of a BERT-inspired language model for antibodies. The purpose of such a model is to create representations that encapsulate all the information in an antibody sequence, which can then be used to predict antibody properties. In their work, they showed how the representations could be used to predict high-redundancy sequences (a proxy for strong binders), and how continuous trajectories consistent with the number of mutations could be observed when applying UMAP to the representations. While such representations can be used to predict properties of antibodies, another work, by Shuai et al., instead focused on training a generative language model for antibodies, able to generate a region of an antibody based on the rest of the sequence. This can then potentially be used to generate new viable CDR regions of variable length, better than simply mutating them at random.

Continue reading