Category Archives: Publications

Pitfalls of AI-Generated Reviews: Case Study of a Frontiers in Microbiology Review on Anti-Influenza A bnAbs

In the last five or so years, large language models (LLMs) have transformed from novel regurgitators of haphazardly stitched-together sentences into almost ‘human’ personalities standing by our side as we tackle life. Whilst the perceived humanity of these models is a topic for perhaps a future blogpost, it is almost impossible to overstate the impact of LLMs on our daily lives. Need someone to proofread the essay you’ve spent hours drafting? GPT (or one of its many counterparts) has you covered. Need help drafting an email from scratch? No problem. Want to write and/or heavily edit an entire academic article that would typically require days, if not weeks, of research? Surely that just needs the push of a button… right?

Despite tremendous advances in LLMs, key issues mean they are not yet a fully dependable addition to our writing endeavours. They are known to fail when asked to generate new content from only a basic prompt, and some of these failures have made headlines [1]. Among the scariest instances are those of hallucinated information [2–4]: the phenomenon whereby AI tools generate convincing information that is factually inaccurate or simply fabricated [2]. In Belgium, the rector of Ghent University came under fire for citing quotes, supposedly from influential thinkers, which were later found to be AI hallucinations [1].

Whilst there are numerous examples of poorly cited and often AI-hallucinated papers falling through the cracks of the peer-review process, today we focus on a Frontiers in Microbiology review titled ‘Broadly neutralizing monoclonal antibodies against influenza A viruses: current insights and future directions’ [5]. This paper attempts to provide an overview of the current landscape of monoclonal antibodies (mAbs) being developed to confer protection against influenza A, highlighting ‘technological advances, clinical performance, and scalability’. However, it contains many of the hallmarks of text created or edited with generative AI, despite its declaration that ‘The author(s) declared that Generative AI was not used in the creation of this manuscript.’

Continue reading

Fragment-to-Lead Successes in 2024 – 10th Anniversary Edition

In what I have to admit is now becoming an annual tradition ([2023] [2019]), I’d like to highlight the 2024 edition of the fragment-to-lead success stories, published in J. Med. Chem. at the end of 2025 [Paper].

Continue reading

Fragment-to-Lead Successes in 2023

Back in 2021, I highlighted the annual fragment-to-lead (F2L) success stories from 2019 [Blog post] [Paper]. This is one of my favourite annual publications, and I’m delighted to see that it’s still going strong. In this post, I’ll discuss the 2023 edition that was published at the start of 2025 [Paper].

Continue reading

Publishing 101

Scientists pride themselves on clear, logical and concise communication. So naturally, the process for publishing our research involves an absurd number of formalities, like coming up with 700 slightly different ways to ‘thank the reviewer for their insightful comment’. Nevertheless, I’m told this is all a necessary part of spreading your beautiful researcher butterfly wings—and frankly, I’m enough years into my DPhil to stop questioning every quirk of academia. However, the current protocol for new researchers wanting to learn the moves to this bizarre dance seems to be begging postdocs and old-timers for examples of cover letters, marked-up manuscripts, and reviewer responses. To attempt to save everyone some time, I thought I’d provide some guidance and templates here.

Continue reading

Cross referencing across LaTeX documents in one project

A common scenario we come across is that we have a main manuscript document and a supplementary information document, each of which have their own sections, tables and figures. The question then becomes – how do we effectively cross-reference between the documents without having to tediously count all the numbers ourselves every time we make a change and recompile the documents?

The answer: cross referencing!
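One common way to do this is the `xr` package, which reads the `.aux` file of another document in the same project so its labels can be referenced from the main manuscript. A minimal sketch, assuming the supplementary document is called `si.tex` and contains a hypothetical label `fig:controls`:

```latex
% In main.tex
\documentclass{article}
\usepackage{xr}            % cross-reference labels from external documents
\externaldocument[SI-]{si} % read si.aux; prefix its labels with "SI-"

\begin{document}
See Figure~\ref{SI-fig:controls} in the supplementary information.
\end{document}
```

Compile `si.tex` first so that `si.aux` exists, then compile `main.tex` (twice, as usual, to resolve the references). The `SI-` prefix is optional but helps avoid clashes when both documents use similar label names.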

Continue reading

How to write a review paper as a first year PhD student

As a first year PhD student, it is not an uncommon thing to be asked to write a review paper on your subject area. It is both a great way to get acquainted with your research field and to get the background portion of your thesis completed early. However, it can seem like a daunting task to go from knowing almost nothing about your research field to producing something of interest for experts who have spent years studying your subject matter.

In my first year, I was exactly in this position and I found very little online to help guide this process. Thus, here is my reflective look at writing a review paper that will hopefully help someone else in the future.

Continue reading

Making your figures more accessible

You might have created the most aesthetic figures for your last presentation, with a beautiful colour scheme, but have you considered how these might look to someone with colour blindness? Around 5% of the general population have some form of colour vision deficiency, so making your figures more accessible is actually quite important! There are a range of online tools that can help you create figures that look great to everyone.

Continue reading

Converting pandas DataFrames into Publication-Ready Tables

Analysing, comparing and communicating the predictive performance of machine learning models is a crucial component of any empirical research effort. Pandas, a staple in the Python data analysis stack, not only helps with the data wrangling itself, but also provides efficient solutions for data presentation. Two of its lesser-known yet incredibly useful features are df.to_markdown() and df.to_latex(), which allow for a seamless transition from DataFrames to publication-ready tables. Here’s how you can use them!

Continue reading

A simple criterion can conceal a multitude of chemical and structural sins

We’ve been investigating deep learning-based protein-ligand docking methods, which often claim to be able to generate ligand binding modes within 2Å RMSD of the experimental one. We found, however, that this simple criterion can conceal a multitude of chemical and structural sins…

DeepDock attempted to generate the ligand binding mode from PDB ID 1t9b (light blue carbons, left), but gave pretzeled rings instead (white carbons, right).

Continue reading