Viruses are the most abundant biological entity on the planet. They infect virtually every kind of life form including (sort of) other viruses. Viruses are intensely efficient – some viruses contain as few as 4 genes. Their strategy is typically simple: infect a cell, use its machinery to produce more viruses, and spread to other cells.
Pathogenic human viruses are terrible, but many other viruses are useful to humans. For instance, many modern vaccines use viral vectors to produce antigens of other pathogens. There is also growing interest in using viruses (bacteriophages) to fight off bacterial infections.
Bikes and pints across 5 pubs – what could be better (and what could go wrong)? The year is 2025 and the date 06.06.25. Starting from the Stats department, the customary picture was taken before the horn was blown and a flood of ~30 structural biologists was unleashed onto the streets of Oxford to raid and plunder. Despite being the new kid, I think Fergus will be proud to see my accurate version controlling, unlike past, more experienced members of the group – and that this reference doesn’t seem too much like copying his homework.
Of course, even though it was June, the weather was tumultuous. Having to make an educated guess on the probability of rain, I took a Bayesian approach and calculated the posterior of rain occurring given the data of the entirety of British history, which suggested that seeing sun on the BBC weather report did not in any way reduce the likelihood of later rain. In light of this, everyone came aptly dressed in waterproofs, which turned out to be a smart choice after a later event of spontaneous beer spillage in which a certain individual knocked his entire pint over Sophie and proceeded to say, “at least you were wearing a raincoat”. This was a fantastic play by the newest member of the group, who destroyed what little dignity (if any) he had so far amassed and simultaneously embroiled himself in the responsibility of this blog post. So to Charlotte, who I know will be reading this (as I was warned!): perhaps this blog post will be an adequate first step to redemption.
And so the convoy departed towards our first stop, the Up in Arms (thanks Charlotte for the round). The inaugural table tennis tournament was held, and it was great to see a real-world application of the group’s protein folding experience with Odysseus’s portable bike.
Next stop, the Victoria (thanks Matt for the round), before the 3.5-mile cycle to The Plough (I recommend going to the toilet before this one after drinking in units of pints).
Being far removed from our hunter-gatherer past, we settled down on the crisp summer grass with Oxford’s famous White Rabbit pizza delivered directly to the local meadow. I hadn’t grounded myself and connected to the earth like that in months (preferring to spend my days at my quadruple-monitor workstation in the department), which, combined with the beautiful setting of Port Meadow, was making the trees look huggable. After scavenging 4 more pieces of pizza for a profit of 50% on my original contribution, despite my intolerance to onions – whilst arguing that tolerance is a mental game aided by alcoholic bravery – we walked down the field to the river to reach our final destination: the idyllic Medley, looking over the Thames.
Reaching our last stop, it dawned on me that despite proclaiming an ambitious target of 2 pints per pub, I was sitting well below that at 3 pints total. It was clear desperate actions were needed to raise my average to stand up to any later scrutiny. Perhaps it was this subconscious desire to complete my self-assigned quest that, at this last point of interest, led me to execute the “swill Sophie” manoeuvre. Yet, despite my insistence that getting through 2 pint glasses was “technically” equivalent to my 2 pints per pub target, this did not stand up to the scrutiny of Charlotte.
After a month of wrangling with HPC molecular dynamics, I’ve had more contact with the Slurm e-mail notification service than with real human beings, so it was refreshing to escape the GROMACS simulation that my brain has become and get to know the group better. Yet by the end of the night some of us (myself) couldn’t resist launching into a tirade about how fractals and symmetry are the underlying representation of consciousness, with the source being a strong “trust me bro”, and so it seemed like a fitting time to put myself to bed.
From the 11th until the 16th of May, we (Gemma and Henriette) attended PEGS Boston 2025. First, we will share some tips for preparing to attend a conference, then give our general feedback on the conference along with some highlights, and lastly mention some of the talks we found interesting.
In our recent work with the PoseBusters benchmark, we made a deliberate choice: to include both receptors seen during training and completely novel ones. Why? To explore an often-overlooked question: how much does receptor similarity to training data influence model performance?
One of the great delights in this life is pointless optimisation. Point-ful optimisation has its place of course; it is right and proper and sensible, and, well, useful, and it also does, when first achieved, yield considerable satisfaction. But I have found I soon adjust to the newly more efficient (and equally drab) normality, and so the spell fades quickly.
Not so with pointless optimisation. Pointless optimisation, once attained, is a preternaturally persistent source of joy that keeps on giving indefinitely. Particularly if it involves acquiring a skill of some description; if the task optimised is frequent; and if the time so saved could not possibly compensate for the time and effort sunk into the optimisation process. Words cannot convey the triumph of completing a common task with hard-earned skill and effortless efficiency, knowing full well it makes no difference whatsoever in the grand scheme of things.
It turns out that giving neural networks attention yields some pretty amazing results. The attention mechanism allowed neural language models to ingest vast amounts of data in a highly parallelised manner, efficiently learning what to pay the most attention to in a contextually aware manner. This computational breakthrough launched the LLM-powered AI revolution we’re living through. But what if attention isn’t just a computational trick? What if the same principle that allows transformers to focus on what matters from a sea of information also lies at the heart of consciousness, perception, and even morality itself? (Ok, maybe this is a bit of a stretch, but hear me out.)
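The core computation behind this focusing trick is surprisingly compact. As a rough sketch (not any particular model’s implementation – the vectors below are made up purely for illustration), scaled dot-product attention scores each query against every key, softmaxes the scores into weights, and returns a weighted sum of the values:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over plain lists of vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much "attention" each position gets
        # Weighted sum of the value vectors.
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# One query attending over two key/value pairs: it matches the first key
# far more strongly, so the output is pulled towards the first value.
out = attention(queries=[[1.0, 0.0]],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

The softmax is what does the “focusing”: positions whose keys match the query dominate the weighted sum, while the rest are largely ignored – and because every query can be scored against every key independently, the whole thing parallelises beautifully.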
To understand the connection, we need to look at how perception really works. Modern neuroscience suggests that experience is fundamentally subjective and generative. We’re not passive receivers of objective reality through our senses; we’re active constructors of our own experience. According to predictive processing theory, our minds constantly generate models of reality, and sensory input is then used to compute an ‘error’ against these predictions. But the extraordinary point here is that we never ‘see’ these sensory inputs, only our mind’s best guess of how the world should be, updated by sensory feedback. As consciousness researcher Anil Seth puts it, “Reality is a controlled hallucination… an action-oriented construction, rather than passive registration of an objective external reality”, or, in the words of Anaïs Nin half a century earlier, “We do not see things as they are, we see things as we are.”
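The predictive-processing idea can be caricatured in a few lines of code. This is a deliberately toy sketch – the scalar belief, the learning rate, and the sensory values are all invented for illustration – but it captures the key inversion: the raw observation is never used as the percept, only to compute an error that nudges the internal model.

```python
# Toy predictive-processing loop. The "mind" keeps a running belief and
# updates it by the prediction error; the percept is always the model's
# guess, never the raw sensory signal itself.
belief = 0.0          # the mind's current model of the world
learning_rate = 0.5   # how strongly prediction errors update the belief

sensory_inputs = [4.0] * 8  # a steady signal from the outside world
for observation in sensory_inputs:
    prediction = belief                 # perception = the model's best guess
    error = observation - prediction    # sensory input only supplies an error
    belief += learning_rate * error     # update the model, not the percept
```

After a handful of iterations the belief converges towards the signal – the “controlled” part of Seth’s controlled hallucination.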
I was recently devastated to hear that Amazon Prime has cancelled the Wheel of Time TV Show, a fantasy epic based on the novels of Robert Jordan. I recently binge-watched the entire show and found it to improve throughout, with the third and most recent season being the best.
In my grief, I turned to something dark – reading the books instead.
I have recently finished the first book (of 12) and thought I would give my thoughts on the story and the storytelling of Jordan as a concise book review so I can get my final Blopig out of the way.
All chemistry LLM enthusiasts were treated to a pleasant surprise on Friday when Greg Brockman tweeted that ChatGPT now has access to RDKit. I’ve spent a few hours playing with the updated models and have summarized some of my findings in this blog post.
ChatGPT now can analyze, manipulate, and visualize molecules and chemical information via the RDKit library.