Author Archives: Brennan Abanades Kenyon

Checking your PDB file for clashing atoms

Detecting atom clashes in protein structures can be useful in a number of scenarios: for example, if you are about to start a molecular dynamics simulation, or if you want to check that a structure generated by a deep learning model is reasonable. It is quite straightforward to code, but I get the feeling that this sort of function has been written from scratch hundreds of times. So, to save you the effort, here is my implementation!
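The full implementation is in the post below; as a flavour of the general idea, here is a minimal sketch using BioPython's NeighborSearch. The 2 Å cutoff and the same-residue filter are illustrative choices of mine, not necessarily those of the full version.

from Bio.PDB import PDBParser, NeighborSearch

def count_clashes(pdb_path, cutoff=2.0):
    """Count pairs of atoms from different residues closer than `cutoff` Angstroms."""
    structure = PDBParser(QUIET=True).get_structure("model", pdb_path)
    atoms = list(structure.get_atoms())
    search = NeighborSearch(atoms)
    clashes = 0
    for atom in atoms:
        for neighbour in search.search(atom.coord, cutoff):
            # Skip the atom itself and atoms in the same residue, which are
            # usually covalently bonded rather than clashing. (Bonded atoms
            # across neighbouring residues, e.g. the peptide bond, would also
            # need handling in a serious implementation.)
            if neighbour is atom or neighbour.get_parent() is atom.get_parent():
                continue
            clashes += 1
    return clashes // 2  # every pair was counted twice

print(count_clashes("my_structure.pdb"))  # "my_structure.pdb" is a placeholder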

Continue reading

Cool ideas in Deep Learning and where to find more about them

I was planning to write a blog post about some cool deep learning papers I have read in the last year or so. However, I kept finding that someone else had already written a far better blog post than I could. Instead, I have decided to write a very brief summary of some hot ideas and then provide a link to a page where someone describes them much better than I can.

The Lottery Ticket Hypothesis

This idea has to do with pruning a model, which is when you remove parts of your model to make it more computationally efficient while barely losing accuracy. The lottery ticket hypothesis also has to do with how weights are initialized in neural networks and why larger models often achieve better performance.

Anyway, the hypothesis says the following: “Dense, randomly-initialized, feed-forward networks contain subnetworks (winning tickets) that—when trained in isolation—reach test accuracy comparable to the original network in a similar number of iterations.” In their analogy, the random initialization of a model’s weights is treated like a lottery, where some subset of these weights is already pretty close to the network you want to train (the winning ticket). For a better description and a summary of advances in this field, I would recommend this blog post.
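To make the procedure concrete, here is a minimal sketch (my own illustration, not the paper's code) of one round of magnitude pruning with weight rewinding, which is how winning tickets are searched for in practice; train_fn stands in for whatever training loop you use.

import copy
import torch

def one_pruning_round(model, train_fn, prune_fraction=0.2):
    init_state = copy.deepcopy(model.state_dict())  # the "lottery draw"
    train_fn(model)  # train to convergence (placeholder for your own loop)
    masks = {}
    with torch.no_grad():
        # Zero out the smallest-magnitude entries of each weight tensor.
        for name, param in model.named_parameters():
            if "weight" not in name:
                continue
            k = max(1, int(prune_fraction * param.numel()))
            threshold = param.abs().flatten().kthvalue(k).values
            masks[name] = (param.abs() > threshold).float()
    # Rewind the surviving weights to their *initial* values: this sparse,
    # re-initialised subnetwork is the candidate winning ticket.
    model.load_state_dict(init_state)
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
    return model, masks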

SAM: Sharpness-Aware Minimization

The key idea here has to do with finding the best optimizer to train a model capable of generalizing. According to this paper, a model that has converged to a sharp minimum will be less likely to generalize than one that has converged to a flatter minimum. They include a plot in the paper that provides an intuition of why this may be the case.

In the SAM paper (and ASAM, its adaptive variant) the authors implement an optimizer that is more likely to converge to a flat minimum. I found that this blog post by the authors of ASAM gives a very good description of the field.
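For intuition, here is a rough sketch of a single SAM update step (a simplification of mine, not the authors' implementation): step "uphill" to the approximate worst point within a small ball around the current weights, compute the gradient there, then apply it back at the original weights. The closure argument is assumed to recompute and return the loss.

import torch

def sam_update(params, closure, base_optimizer, rho=0.05):
    loss = closure()           # forward pass at the current weights
    loss.backward()
    grads = [p.grad for p in params if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbations = []
    with torch.no_grad():
        # Move to the (approximate) highest-loss point in a rho-ball.
        for p in params:
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append((p, e))
    base_optimizer.zero_grad()
    closure().backward()       # gradient at the perturbed, "sharp" point
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)          # return to the original weights
    base_optimizer.step()      # step with the sharpness-aware gradient
    base_optimizer.zero_grad()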

Continue reading

AIRR Community Meeting VI, May 17-19

Eve, Brennan and I were delighted to attend the sixth AIRR (adaptive immune receptor repertoire) Community Meeting: Exploring New Frontiers in San Diego. Eve and I had been awaiting this meeting for a mere 3 years, since it was announced during the last in-person AIRR Community Meeting back in 2019. Fortunately, San Diego did not disappoint. 

After a rocky start (featuring many hours stuck in traffic on the M40, one missed flight and one delayed flight), we made it to California! The three-day conference had ~230 participants (remote and in-person) and featured great talks from academia and industry. We particularly enjoyed keynote talks from Dennis Burton on rational vaccine design using broadly neutralising antibodies, Gunilla Karlsson Hedestam on the functional consequences of allelic variation, Shane Crotty on COVID and HIV vaccine design, and Atul Butte on uses of electronic health record data and how we should all found start-ups.

We had fun delivering a tutorial on OPIG antibody tools and, most importantly, we all won AIRR t-shirts in the raffle (potentially we were the only people who noticed how to enter on the conference app). Highlights outside of the conference included paddle boarding and seeing hummingbirds, pelicans, sea lions, seals, ‘Garibaldi’ the state fish, and meeting Bob the golden retriever at a surfing shop. We’re now off to find jobs on the West Coast so we can live at the beach…

The AIRR community has many webinars and talks available on their YouTube channel: https://www.youtube.com/c/AIRRCommunity

Sarah, Eve & Brennan

GitHub actions can be useful

GitHub Actions is a (relatively) new GitHub feature that allows you to run code on GitHub when a predefined event is triggered. The most widespread use case for GitHub Actions is continuous integration, as it allows you to automatically test your code on any machine immediately after each push. For a great tutorial on how to use it for this, see here.

But you can do so much more with them! You can set up any workflow to run after any event. An event is a specific activity happening on GitHub, while a workflow is the script you want to run once that event has happened. For a full list of the events you can use, see here. Workflow scripts are written in a .yml file and should be saved within the .github/workflows directory of your repository. I am incapable of writing a better tutorial than their own documentation, but I will show a copy of a workflow script I recently put together and walk you through it.
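To give a flavour of the structure before we get to it (this is a generic testing workflow sketched for illustration, not the script from this post), a minimal .github/workflows/test.yml could look like this:

name: tests
on: [push]                           # the event: run on every push

jobs:
  test:
    runs-on: ubuntu-latest           # the machine the workflow runs on
    steps:
      - uses: actions/checkout@v3    # fetch the repository
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install . pytest    # install the package plus pytest
      - run: pytest                  # run the test suite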

In one of my previous blog posts I wrote about how to upload your code to PyPI. Hopefully I convinced you that this is quite easy, but it does require a few steps that you may not want to repeat every time you come up with a new feature (or find a bug) and have to re-upload it. Luckily, you don’t have to! Just stick the code into a GitHub Actions workflow and it will automatically re-upload it for you. Here is the script I use for this:

Continue reading

Einops: Powerful library for tensor operations in deep learning

Tobias and I recently gave a talk at the OPIG retreat on tips for using PyTorch. For this, we created a tutorial in a Google Colab notebook (the link can be found here). I remember rambling about the advantages of implementing your own models rather than using other people’s code. Well, if I convinced you, einops is for you!

Basically, einops lets you perform operations on tensors using Einstein notation. The package comes with a number of advantages, a few of which I will try to summarise here:
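As a quick taste first (the tensor shapes and the ViT-style patch split below are illustrative examples of mine, not from the tutorial):

import torch
from einops import rearrange, reduce

x = torch.randn(8, 3, 32, 32)                    # batch, channels, height, width

flat = rearrange(x, 'b c h w -> b (c h w)')      # flatten everything but the batch
pooled = reduce(x, 'b c h w -> b c', 'mean')     # global average pooling
patches = rearrange(x, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)',
                    p1=8, p2=8)                  # split into 8x8 patches

print(flat.shape, pooled.shape, patches.shape)
# torch.Size([8, 3072]) torch.Size([8, 3]) torch.Size([8, 16, 192])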

Continue reading

Making your python tool as easy to install as possible

Have you ever tried to use someone else’s code and spent a whole day trying to install it? Have you ever decided not to use a tool because installing it was a massive pain? Both of these have happened to me and, to be honest, it is a massive shame. Authors may spend large amounts of time developing these tools and, in the end, no one uses them because they can’t get them to work. So I have decided to try to make all the code I develop as easy and painless as possible to install and use.
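As a concrete starting point (a minimal sketch with a hypothetical package name, not a complete recipe), a bare-bones setup.py is already enough to make pip install . work:

from setuptools import setup, find_packages

setup(
    name="mytool",                    # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["numpy"],       # runtime dependencies
    entry_points={
        # installs a `mytool` command that calls mytool/cli.py:main()
        "console_scripts": ["mytool=mytool.cli:main"],
    },
)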

Continue reading

Is bigger better?

Recent work in Natural Language Processing (NLP) indicates that the bigger your model is, the better the performance you will get. In a paper by Kaplan et al., they show that loss scales as a power law with model size, dataset size, and the amount of compute used for training.

Kaplan, Jared, et al. “Scaling laws for neural language models.” arXiv preprint arXiv:2001.08361 (2020).
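For reference, the fitted laws have the form

L(N) ≈ (N_c / N)^α_N,   L(D) ≈ (D_c / D)^α_D,   L(C) ≈ (C_c / C)^α_C

where N is the number of parameters, D the dataset size and C the training compute; the paper reports exponents of roughly α_N ≈ 0.076, α_D ≈ 0.095 and α_C ≈ 0.05.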
Continue reading

Plotly for interactive 3D plotting

An recently wrote a post on how to use the seaborn library. I really like seaborn and use it a lot for 2D plots. However, I have recently been dealing with 3D data and have found plotly to be the best option. When used in a jupyter notebook, it lets you easily generate interactive 3D plots, which is extremely useful for visualizing structural data.
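Here is a minimal sketch of the kind of plot I mean (the random point cloud stands in for real data, e.g. atom coordinates):

import numpy as np
import plotly.graph_objects as go

xyz = np.random.randn(100, 3)   # placeholder for 3D structural coordinates

fig = go.Figure(data=[go.Scatter3d(
    x=xyz[:, 0], y=xyz[:, 1], z=xyz[:, 2],
    mode="markers",
    marker=dict(size=3),
)])
fig.show()  # in a jupyter notebook this renders an interactive, rotatable plot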

Continue reading

3 Useful UNIX commands you might not know

nohup

The command nohup (short for “no hang up”) allows your script to keep running even after you quit the terminal. It can be very useful, especially if your terminal session was opened through ssh and you have a dodgy connection. It can be used as follows:

nohup python my_script.py > log.out &

By default, nohup appends the output from your script to a file named nohup.out. Adding the > log.out redirection sends the output to a file of your choice instead, and the trailing & runs the command in the background.

Continue reading