Author Archives: James Wilsenach

Chained or Unchained: Markov, Nekrasov and Free Will

A Markov Chain moving between two states A and B. Animation by Devin Soni

Markov chains are simple probabilistic models of sequences of related events through time. In a Markov chain, the event at the present time depends only on the immediately preceding event in the sequence. The example above shows a dynamical system with two states, A and B, in which the possible events are moving between the two states or staying put.

More formally, a Markov chain is a model of any sequence of events with the following relationship

P(X_t=x|X_{t-1}=x_{t-1},X_{t-2}=x_{t-2},\ldots,X_1=x_1)=P(X_t=x|X_{t-1}=x_{t-1}).

That is, the event that the sequence \{X_t\}_{t} is in state x at time t is conditionally independent of all of its past states given its immediate past. This simple relationship between past and present provides a useful simplifying assumption that models many real-world systems to a surprising degree of accuracy. These range from air particles diffusing through a room, to the migration patterns of insects, to the evolution of your genome, and even your web browser activity. Given their broad use in describing natural phenomena, it is very curious that Markov first invented the Markov chain to settle a dispute in Mathematical Theology, one in which the atheist Markov was pitted against the devoutly Orthodox Pavel Nekrasov.
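To make the idea concrete, here is a minimal Python sketch of the two-state chain in the animation above; the transition probabilities are made up purely for illustration.

```python
import random

# Illustrative transition probabilities for the two-state chain (assumed values).
transition = {
    "A": {"A": 0.6, "B": 0.4},
    "B": {"A": 0.7, "B": 0.3},
}

def simulate(start, steps):
    """Walk the chain: each next state depends only on the current state."""
    state, path = start, [start]
    for _ in range(steps):
        probs = transition[state]
        state = random.choices(list(probs), weights=list(probs.values()))[0]
        path.append(state)
    return path

print(simulate("A", 10))  # e.g. ['A', 'B', 'A', 'A', ...]
```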

Continue reading

On The Logic of GOing with Weisfeiler-Lehman

Recently, I was able to attend Martin Grohe’s talk on The Logic of Graph Neural Networks. Professor Grohe, of RWTH Aachen University, is a titan of the fields of Logic and Complexity theory. Even so, he is modest about his achievements, and I was tickled when it was pointed out to me that the theorem he refers to as “a little complex”, one of his crowning achievements, has a proof that fills a four-hundred-page book.

The theorem relates to the Weisfeiler-Lehman (WL) algorithm, a procedure for testing whether two graphs are equivalent (i.e. isomorphic). The algorithm has deep connections with combinatorics, complexity theory and first-order logic, a system of logic that is remarkably similar to the relations present in ontologies such as the Gene Ontology (GO), which is commonly used to compare and predict protein function. Kernelised methods and other WL-based metrics offer a new, and possibly logically “complete”, way to compare the functions of proteins and infer their similarity.
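As a rough illustration of what the (one-dimensional) WL algorithm does, here is a small Python sketch of colour refinement; the toy graphs and the fixed number of rounds are illustrative choices of mine, not anything from Grohe’s talk.

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman (colour refinement): each vertex's
    colour is repeatedly replaced by (its own colour, the sorted multiset of
    its neighbours' colours). Returns the final colour histogram."""
    colours = {v: 0 for v in adj}  # every vertex starts with the same colour
    for _ in range(rounds):
        colours = {
            v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
            for v in adj
        }
    return Counter(colours.values())

# If the histograms differ, the graphs are certainly not isomorphic;
# if they agree, WL alone cannot tell them apart.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path3 = {0: [1], 1: [0, 2], 2: [1]}
print(wl_histogram(triangle) == wl_histogram(path3))  # False
```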

The Gene Ontology follows a simple set of rules, very similar to first order logic. From the GO Database Description
Continue reading

EEGor on Proteins: A Brain-based Perspective on Crowd-sourced Protein Structure Prediction

EEG-based Brain-Computer Interfaces (BCIs) are becoming increasingly popular, with products such as the Muse headband and g.tec’s Unicorn Hybrid Black taking off. In the protein folding space, meanwhile, Foldit and distributed/crowd computing efforts like Folding@home don’t seem to be talked about as much as they once were.

Gamification is still just as effective a tool for harnessing human ingenuity as it ever was, so perhaps what is needed is a new approach to crowd-folding efforts, one that can tap into the full potential of the human mind to manipulate and visualise new 3D structures by drawing inspiration directly from the minds of users…

Continue reading

Cooking Up a (Deep)STORM with a Little Cup of Super Resolution Microscopy

Recently, I attended the Quantitative BioImaging (QBI) Conference 2020, served right here in Oxford. Amongst the many methods on the menu were new recipes for spicing up your Cryo-EM images with a bit of CiNNamon and a peppering of inhomogeneous spatial Poisson point processes, amongst many others. However, like many of today’s top-tier restaurants, most of the courses on offer were on the smaller side, nano-scale in fact, serving up the new field of Super Resolution Microscopy!

Continue reading

IEEEGor Knows What You’re Thinking…

Last month, I, EEGor, took part in the Brain-Computer Interface Designers Hackathon (BR41N.IO), the opening event of the IEEE Systems, Man and Cybernetics Conference in Bari, Italy. Brain-Computer Interfaces (BCIs) are a class of technologies designed to translate brain activity into machine actions, to assist (currently in clinical trials) as well as (one day) enhance human beings. BCIs are receiving more and more media attention, most recently with the launch of Elon Musk’s newest company, Neuralink, which aims to set up a two-way communication channel between man and machine using a tiny chip embedded in the brain, with the further aim of one day, perhaps, making our wildest transhumanist dreams come true…

Continue reading

Just Call Me EEGor

Recently, I was lucky enough to assist in (who am I kidding…obstruct) a sleep and anaesthesia study aimed at monitoring participants by Electroencephalogram (EEG) in various states of consciousness. The study, run by Dr Katie Warnaby of The Anaesthesia Neuroimaging Research Group at The Nuffield Department of Clinical Neuroscience, makes use of both EEG and functional Magnetic Resonance Imaging (fMRI). The research aim is to learn about the effects anaesthesia has on the brain and, in so doing, help us both understand ourselves and understand how to most effectively monitor patients undergoing surgery.

Continue reading

Kernel Methods are a Hot Topic in Network Feature Analysis

The kernel trick is a well-known method in machine learning for producing a real-valued measure of similarity between data points in any number of settings. Kernel methods for network analysis provide a way of assigning real-valued similarities to pairs of vertices in a graph. These values may reflect any number of graphical properties, such as the neighbours two vertices share or, in a more dynamic context, the influence that a change in the state of one vertex might have on another.

By using the kernel trick, it is possible to approximate the distribution of features on the vertices of a graph in a way that respects the graphical relationships between vertices. Kernel-based methods have long been used, for instance, to infer a protein’s function from other proteins within Protein Interaction Networks (PINs).
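As a toy illustration of the idea, the sketch below builds one very simple vertex kernel, the number of shared neighbours, on a made-up four-protein interaction network and uses it for kernel-weighted “guilt-by-association” function scores. The network, the labels and the kernel choice are all illustrative assumptions rather than any particular published method.

```python
import numpy as np

# Made-up adjacency matrix for a toy protein interaction network (4 proteins).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
])

# Shared-neighbour kernel: K[i, j] counts neighbours common to vertices i and j.
# As a Gram matrix of the adjacency rows, it is a valid (positive semi-definite) kernel.
K = A @ A.T
print(K)

# Guilt-by-association sketch: score every vertex by kernel-weighted votes
# from vertices already annotated with the function of interest.
labels = {0: 1.0, 2: 1.0}  # assume proteins 0 and 2 carry the function
scores = {v: sum(K[v, u] * y for u, y in labels.items()) for v in range(len(A))}
print(scores)
```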

Continue reading

Neuronal Complexity: A Little Goes a Long Way…To Clean My Apartment

The classical model of signal integration in both biological and Artificial Neural Networks looks something like this,

f(\mathbf{s})=g\left(\sum\limits_i\alpha_is_i\right)

where g is some linear or non-linear output function and the weights \alpha_i adapt to feedback from the outside world through changes to protein-dense structures near the point of signal input, namely the Post-Synaptic Density (PSD). In this simple model, integration is implied to occur at the soma (cell body), where the input signals s_i are combined and broadcast to other neurons through downstream synapses via the axon. Generally speaking, neurons (both artificial and otherwise) exist in multilayer networks that compose the inputs of one neuron with the outputs of others, creating cross-linked chains of computation that have been shown to be universal in their ability to approximate any desired input-output behaviour.

See more at Khan Academy
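For concreteness, here is a minimal Python sketch of this classical point-neuron model; the sigmoid output function and the particular weights and inputs are illustrative choices only.

```python
import numpy as np

def g(x):
    """One common choice of non-linear output function (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-x))

alpha = np.array([0.8, -0.4, 0.3])  # "synaptic" weights alpha_i (assumed values)
s = np.array([1.0, 0.5, 2.0])       # incoming signals s_i (assumed values)

# f(s) = g(sum_i alpha_i * s_i): weighted sum of inputs passed through g
output = g(np.dot(alpha, s))
print(output)
```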

Models of learning and memory have relied heavily on modifications to the PSD to explain changes in behaviour. Physically, these changes result from alterations in the concentration and density of the neurotransmitter receptors and ion channels that occur in abundance at the PSD; in actuality, however, these channels occur all along the cell membrane of the dendrite on which the PSD is located. Dendrites are something like a multi-branched mass of input connections belonging to each neuron. This raises the question of whether learning might in fact occur all along the length of each densely branched dendritic tree.

Continue reading