This blog post supports my poster at the Young Modellers Forum and makes the results much easier to see and understand. Underneath each GIF is an explanation of what to look for as the sample denoises along the diffusion trajectory. Click the GIFs for higher-quality viewing!
Author Archives: Sanaz Kazeminia
Bye Bye Lucy Vost! (Lucy Gone-st but not forgotten)
This month we said goodbye to a few OG members of OPIG – among them was one of my favourites, Lucy! (should I apologise to the others?)
Lucy did some amazing work on improving the output of generative models during her time in OPIG. One of her recent projects involved increasing the plausibility of 3D molecular diffusion models using distorted training data. Check it out here.
Early in her PhD she worked on PointVS with Jack Scantlebury. PointVS is a machine learning scoring function that predicts protein-small molecule binding affinity by learning actual binding physics rather than dataset biases.
Word on the street is she also has some secret works in the making…
Understand Large Codebases Faster Using GitIngest
As researchers, we often have to deal with large, ugly codebases – this is nothing new, I know. Fear not: we now have large language models (LLMs) like ChatGPT and friends to speed things up. In this blog post I will show you how to use GitIngest to explore a codebase with your favourite LLM even faster.
No more copy-pasting files individually, writing a paragraph explaining the directory structure, or, even worse, relying on an LLM's web search to find the codebase. As the codebase grows, so does the unreliability of these methods. GitIngest turns any codebase, in its entirety, into a prompt-friendly digest – one prompt is all you need!
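As a taste of the workflow, here is a minimal command-line sketch. The package name, flags, and output filename are assumptions based on the GitIngest project's README at the time of writing, and the repository URL is a placeholder:

```shell
# Install the GitIngest CLI (package name assumed from the project's README)
pip install gitingest

# Flatten an entire repository into a single prompt-friendly text file.
# Replace the placeholder URL with the codebase you want to explore.
gitingest https://github.com/<user>/<repo> -o digest.txt

# digest.txt now holds a directory tree plus file contents,
# ready to paste into your favourite LLM as one prompt.
```

GitIngest also exposes a Python `ingest()` function if you prefer scripting; check the project's documentation for the exact interface.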
Navigating Hallucinations in Large Language Models: A Simple Guide
AI is moving fast, and large language models (LLMs) are at the centre of it all, doing everything from generating coherent, human-like text to tackling complex coding challenges. And this is just scratching the surface: LLMs are popping up everywhere, and their list of talents keeps growing by the day.
However, these models aren’t infallible. One of their most intriguing and concerning quirks is the phenomenon known as “hallucination” – instances where the AI confidently produces information that is fabricated or factually incorrect. As we increasingly rely on AI-powered systems in our daily lives, understanding what hallucinations are is crucial. This post is a brief guide to LLM hallucinations: what they are, why they occur, and how we can navigate them to get the most out of our new favourite tools.