Neuronal Complexity: A Little Goes a Long Way…To Clean My Apartment

The classical model of signal integration in both biological and artificial neural networks looks something like this:

f(\mathbf{s})=g\left(\sum\limits_i\alpha_is_i\right)

where g is some linear or non-linear output function and the weights \alpha_i adapt to feedback from the outside world through changes to protein-dense structures near the point of signal input, namely the Post-Synaptic Density (PSD). In this simple model, integration is implied to occur at the soma (cell body), where the input signals s_i are combined and the result is broadcast to other neurons through downstream synapses via the axon. Generally speaking, neurons (both artificial and otherwise) exist in multilayer networks, with the outputs of one neuron feeding the inputs of others, creating cross-linked chains of computation that have been shown to be universal in their ability to approximate any desired input-output behaviour.
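In code, this classical "point neuron" is nothing more than a weighted sum pushed through an output nonlinearity. Here is a minimal sketch (the choice of a sigmoid for g and the toy numbers are mine, purely for illustration):

```python
import numpy as np

def point_neuron(s, alpha, g=lambda x: 1.0 / (1.0 + np.exp(-x))):
    """Classical point neuron: f(s) = g(sum_i alpha_i * s_i).

    s     -- vector of input signals s_i (one per synapse)
    alpha -- vector of synaptic weights alpha_i
    g     -- output nonlinearity (a sigmoid here, chosen only for illustration)
    """
    return g(np.dot(alpha, s))

# Toy usage: three synaptic inputs with three hand-picked weights.
inputs = np.array([0.2, 0.9, 0.1])
weights = np.array([0.5, -1.2, 2.0])
print(point_neuron(inputs, weights))  # a single number the axon would broadcast
```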

(See more at Khan Academy.)

Models of learning and memory have relied heavily on modifications to the PSD to explain changes in behaviour. Physically, these changes result from alterations in the concentration and density of the neurotransmitter receptors and ion channels that occur in abundance at the PSD; in actuality, though, these channels occur all along the membrane of the dendrite on which the PSD is located. Dendrites are something like a multi-branched mass of input connections belonging to each neuron. This raises the question of whether learning might in fact occur all along the length of each densely branched dendritic tree.
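Before going there, it helps to see what "modifications to the PSD" usually mean in modelling terms: changes to the weights \alpha_i. A minimal sketch of one such learning step, using a plain Hebbian rule with a decay term chosen purely for illustration (not taken from any of the papers below), might look like this:

```python
import numpy as np

def hebbian_update(alpha, s, post, lr=0.01, decay=0.001):
    """One Hebbian-style weight update: synapses whose input s_i was active
    while the neuron's output was high are strengthened, and a small decay
    term keeps the weights from growing without bound.

    alpha -- current synaptic weights (stand-ins for PSD receptor density)
    s     -- presynaptic input signals on this step
    post  -- the neuron's postsynaptic output on this step
    """
    return alpha + lr * post * s - decay * alpha

# Toy usage: the active second synapse is nudged in proportion to the output.
weights = np.array([0.5, -1.2, 2.0])
weights = hebbian_update(weights, np.array([0.0, 1.0, 0.0]), post=0.8)
print(weights)
```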

Models of hippocampal pyramidal cells (cells from the part of the brain most closely linked with forming long-term memories) suggest that dendrites may themselves be capable of change when stimulated, storing important input features and even adapting to their external environment through signal backpropagation. Not only that, but dendrites are capable of generating their own integrated voltage signals, preventing signal attenuation and leading the neuron to potentially behave more like a network in its own right, another layer of complexity that adds to the computational power of the brain as a whole.
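This "network within a neuron" picture is often drawn as a two-layer model in the spirit of Polsky, Mel and Schiller (2004): each thin dendritic branch squashes its own local inputs through a sigmoid-like nonlinearity, and the soma then combines the branch outputs. The sketch below is my own toy rendering of that idea; the branch groupings, sigmoids and weights are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_layer_neuron(branch_inputs, branch_weights, soma_weights):
    """Dendrite-aware neuron: each branch integrates its own synapses through
    a local nonlinearity, and the soma then sums the branch outputs.

    branch_inputs  -- list of arrays, one array of synaptic inputs per branch
    branch_weights -- list of arrays, matching synaptic weights per branch
    soma_weights   -- one weight per branch for the final somatic sum
    """
    branch_outputs = np.array([
        sigmoid(np.dot(w, s)) for s, w in zip(branch_inputs, branch_weights)
    ])
    return sigmoid(np.dot(soma_weights, branch_outputs))

# Toy usage: two dendritic branches with three synapses each.
inputs = [np.array([1.0, 0.0, 0.5]), np.array([0.2, 0.8, 0.1])]
weights = [np.array([0.4, 0.3, -0.6]), np.array([1.1, -0.2, 0.7])]
print(two_layer_neuron(inputs, weights, soma_weights=np.array([0.9, 0.6])))
```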

But how robust are modern neuronal models in generating behaviours anyway? Timothy Busbice, a researcher at InterIntelligence Research and co-founder of the OpenWorm Program, connected a computational model of the complete neuronal network of the well-studied roundworm C. elegans to a LEGO robot. The network, comprising just 302 neurons, was simulated using the NeuroML language, with sonar and touch sensors standing in for the tactile and scent receptors the worm uses to survive and navigate its environment. A number of wormy behaviours have been well documented, such as the worm's aversion response to being touched on the nose.

Incredibly, some of the worm's scent-based food-finding and touch-aversion behaviours were reproduced in the robot!
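To give a flavour of what hooking a connectome up to a robot involves, here is a heavily simplified sketch of my own (not the NeuroML model Busbice actually used): the neuron labels are genuine C. elegans names, but the weights, thresholds and wheel mapping are invented for illustration.

```python
# Hypothetical, hand-picked fragment of a connectome: weighted connections
# from a nose-touch sensory neuron through command interneurons to motor
# neurons. The neuron labels are real C. elegans names; the weights are not.
connections = {
    "ASH": {"AVA": 1.5, "AVB": -0.5},   # sensory -> command interneurons
    "AVA": {"VA1": 1.0, "DA1": 1.0},    # backward-driving motor neurons
    "AVB": {"VB1": 1.0, "DB1": 1.0},    # forward-driving motor neurons
}

def step(activations):
    """One synchronous update: each neuron passes weighted activation to its
    targets, and a target stays active only if it crosses a fixed threshold."""
    incoming = {}
    for pre, targets in connections.items():
        for post, w in targets.items():
            incoming[post] = incoming.get(post, 0.0) + w * activations.get(pre, 0.0)
    return {n: (v if v > 0.1 else 0.0) for n, v in incoming.items()}

def motor_command(activations):
    """Map motor-neuron activity to robot wheel speeds: backward-driving
    neurons reverse the wheels, forward-driving neurons push them ahead."""
    backward = activations.get("VA1", 0.0) + activations.get("DA1", 0.0)
    forward = activations.get("VB1", 0.0) + activations.get("DB1", 0.0)
    speed = forward - backward
    return speed, speed  # (left wheel, right wheel)

# A sonar ping at the "nose" drives the touch-sensitive neuron ASH...
state = step({"ASH": 1.0})   # ...which excites the backward command neuron AVA
state = step(state)          # ...which drives the backward motor neurons
print(motor_command(state))  # negative speeds: the robot backs away
```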

Next step: Hotwire an unsuspecting vacuum cleaner with your worm brain in a box and start blasting the tunes in every room that needs cleaning! Totally WORMth it!

Losonczy, A., Makara, J. K., & Magee, J. C. (2008). Compartmentalized dendritic plasticity and input feature storage in neurons. Nature, 452(7186), 436.

Polsky, A., Mel, B. W., & Schiller, J. (2004). Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience, 7(6), 621.

Busbice, T. (2014). Extending the C. Elegans Connectome to Robotics.

Szigeti, B., Gleeson, P., Vella, M., Khayrulin, S., Palyanov, A., Hokanson, J., … & Larson, S. (2014). OpenWorm: an open-science approach to modeling Caenorhabditis elegans. Frontiers in Computational Neuroscience, 8, 137.
