Author Archives: Isaac Ellmen

Dispatches from Lisbon

Tiles, tiles, as far as the eye can see. Conquerors on horseback storming into the breach; proud merchant ships cresting ocean waves; pious monks and shepherds tending to their flocks; Christ bearing the cross to Calvary—in intricate tones of blue and white on tin-glazed ceramic tilework. Vedi Napoli e poi muori, the Sage of Weimar once wrote—to see Naples and die. But had he been to Lisbon?

The azulejos of the city’s numerous magnificent monasteries are far from the only thing for the weary PhD student to admire. Lisbon has no shortage of imposing bridges and striking towers, historically fraught monuments and charming art galleries. Crumbling old castles and revitalised industrial quarters butt up against the Airbnbs-and-expats district, somewhere between property speculation and the sea. An endearing flock of Magellanic penguins paddles away an afternoon in their enclosure at the local aquarium (which is excellent), and an alarming proliferation of custard-based pastries invites one to indulge.

Continue reading

Is attention all you need for protein folding?

Researchers from Apple have released SimpleFold, a protein structure prediction model which uses exclusively standard Transformer layers. The results seem to show that SimpleFold is a little less accurate than methods such as AlphaFold2, but much faster and easier to integrate into standard LLM-like workflows. SimpleFold also shows very good scaling performance, in line with other Transformer models like ESM2. So what is powering this seemingly simple development?

Continue reading

Can AI help us design better viruses?

Viruses are the most abundant biological entity on the planet. They infect virtually every kind of life form including (sort of) other viruses. Viruses are intensely efficient – some viruses contain as few as 4 genes. Their strategy is typically simple: infect a cell, use its machinery to produce more viruses, and spread to other cells.

Pathogenic human viruses are terrible, but there are many other viruses which are useful for humans. For instance, many modern vaccines use viral vectors to produce antigens of other pathogenic entities. There is also growing interest in using viruses to fight off bacterial infections.

Continue reading

Attending LMRL @ ICLR 2025

I recently attended the Learning Meaningful Representations of Life (LMRL) workshop at ICLR 2025. The goal of LMRL is to highlight machine learning methods which extract meaningful or useful properties from unstructured biological data, with an eye towards building a virtual cell. I presented my paper which demonstrates how standard Transformers can learn to meaningfully represent 3D coordinates when trained on protein structures. Each paper submitted to LMRL had to include a “meaningfulness statement” – a short description of how the work presents a meaningful representation.

Continue reading

Generating Haikus with Llama 3.2

At the recent OPIG retreat, I was tasked with writing the pub quiz. The quiz included five rounds, and it’s always fun to do a couple of “how well do you know your group?”-style rounds. Since I work with Transformers, I thought it would be fun to get AI to create haiku summaries of OPIGlet research descriptions from the website.

AI isn’t as funny as it used to be, but it’s a lot easier to get it to write something coherent. There are also lots of knobs you can turn, like temperature, top_p, and the details of the prompt. I decided to use Meta’s new Llama 3.2-3B-Instruct model, which is publicly available on Hugging Face. I ran it locally using vLLM, and instructed it to write a haiku for each member’s description using a short script which parses the HTML from the website.
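To give a feel for what those knobs actually do (a minimal, self-contained sketch with illustrative logits, not the script I used): temperature rescales the logits before the softmax, so values below 1 sharpen the distribution towards the top token while values above 1 flatten it, and top_p (nucleus sampling) keeps only the smallest set of tokens whose cumulative probability reaches p.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature < 1 sharpens, > 1 flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches top_p, zero out the rest, and renormalise."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = [], 0.0
    for i in order:
        keep.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in keep)
    filtered = [0.0] * len(probs)
    for i in keep:
        filtered[i] = probs[i] / mass
    return filtered

logits = [2.0, 1.0, 0.5, -1.0]           # toy next-token logits
cold = softmax(logits, temperature=0.5)  # sharper: top token dominates
hot = softmax(logits, temperature=2.0)   # flatter: more variety
nucleus = top_p_filter(softmax(logits), top_p=0.9)
```

In practice these are the same `temperature` and `top_p` parameters you pass to the sampling configuration of an inference engine; the haiku quality turned out to be fairly sensitive to both.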

Continue reading

Architectural highlights of AlphaFold3

DeepMind and Isomorphic Labs recently published the methods behind AlphaFold3, the sequel to the famous AlphaFold2. The involvement of Isomorphic Labs signals that Alphabet is getting serious about drug design. To this end, AlphaFold3 provides a substantial improvement in the field of complex prediction, a major piece in the computational drug design pipeline.

Continue reading

3 approaches to linear-memory Transformers

Transformers are a very popular architecture for processing sequential data, notably text and (our interest) proteins. Transformers learn more complex patterns with larger models on more data, as demonstrated by models like GPT-4 and ESM-2. Transformers work by updating tokens according to an attention value computed as a weighted sum of all other tokens. In standard implementations this requires computing the product of a query and key matrix, which requires O(N²d) computations and, problematically, O(N²) memory for a sequence of length N and an embedding size of d. To speed up Transformers, and to analyze longer sequences, several variants have been proposed which require only O(N) memory. Broadly, these can be divided into sparse methods, softmax approximators, and memory-efficient Transformers.
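To make the memory bottleneck concrete, here is a rough NumPy sketch (illustrative, not any particular published method): standard attention materialises the full N×N score matrix, while a query-chunked variant only ever holds a chunk×N slice, yet produces the same output. The memory-efficient methods in the post go further and also stream over the keys with running softmax statistics to reach true O(N) memory.

```python
import numpy as np

def standard_attention(Q, K, V):
    """Vanilla attention: materialises the full N x N score matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])              # O(N^2) memory here
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def chunked_attention(Q, K, V, chunk=16):
    """Same output, but only a (chunk x N) score slice lives in memory
    at any one time, at the cost of a Python loop over query blocks."""
    out = np.empty_like(Q)
    for start in range(0, Q.shape[0], chunk):
        q = Q[start:start + chunk]
        s = q @ K.T / np.sqrt(Q.shape[-1])
        w = np.exp(s - s.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[start:start + chunk] = w @ V
    return out

rng = np.random.default_rng(0)
N, d = 64, 8
Q, K, V = rng.normal(size=(3, N, d))
```

The two functions agree to numerical precision; the difference is purely in how much of the score matrix exists at once.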

Continue reading

Understanding positional encoding in Transformers

Transformers are a very popular architecture in machine learning. While they were first introduced in natural language processing, they have been applied to many fields such as protein folding and design.
Transformers were first introduced in the excellent paper Attention is all you need by Vaswani et al. The paper describes the key elements, including multiheaded attention, and how they come together to create a sequence-to-sequence model for language translation. The key advance in Attention is all you need is the replacement of all recurrent layers with pure attention + fully connected blocks. Attention is very efficient to compute and allows for fast comparisons over long distances within a sequence.
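A single attention head from the paper can be sketched in a few lines of NumPy (a minimal illustration with toy sizes; real implementations batch this and run many heads in parallel):

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention from Attention is all you need.
    Each token is updated as a weighted sum over all tokens' values, with
    weights given by a softmax over query/key dot products."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(1)
d = 8
X = rng.normal(size=(5, d))          # 5 tokens, embedding size d
Wq, Wk, Wv = rng.normal(size=(3, d, d))
out = attention(X, Wq, Wk, Wv)       # shape (5, d)
```

Note that nothing in this computation depends on where a token sits in the sequence, which is exactly the problem positional encodings solve.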
One issue, however, is that attention does not natively include a notion of position within a sequence. This means that all tokens could be scrambled and would produce the same result. To overcome this, one can explicitly add a positional encoding to each token. Ideally, such a positional encoding should reflect the relative distance between tokens when computing the query/key comparison, such that closer tokens are attended to more than further tokens. In Attention is all you need, Vaswani et al. propose the slightly mysterious sinusoidal positional encodings which are simply added to the token embeddings:
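For reference, the sinusoidal table itself is easy to construct (a standard NumPy rendering of the paper's formula, PE(pos, 2i) = sin(pos / 10000^(2i/d)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d)), with illustrative sizes):

```python
import numpy as np

def sinusoidal_positional_encoding(n_positions, d_model):
    """Build the (n_positions, d_model) table of sinusoidal encodings.
    Even columns get sines, odd columns get cosines, with wavelengths
    forming a geometric progression from 2*pi up to 10000 * 2*pi."""
    positions = np.arange(n_positions)[:, None]      # (n_positions, 1)
    i = np.arange(d_model // 2)[None, :]             # (1, d_model / 2)
    angles = positions / np.power(10000.0, 2 * i / d_model)
    pe = np.empty((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(50, 16)
# The encodings are added to (not concatenated with) the token embeddings:
# embeddings = token_embeddings + pe
```

At position 0 every sine column is 0 and every cosine column is 1, and each subsequent position rotates these at a different frequency per dimension pair, which is what lets the model recover relative offsets from dot products.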

Continue reading