Author Archives: Jinwoo Leem

Using bare git repos

Git is a fantastic tool for version control of your code. Whether it’s to share with collaborators, or just for your own reference, it acts as a reliable point of reference for a wide variety of applications and needs. The basic concept of git is that you have your own folder (in which you edit your code, etc.) and you commit/push those changes to a git repository. Note that Git is a version control SYSTEM, while GitHub/BitBucket etc. are services that host repositories using Git as their backend!

The basic procedure of git can be summarised to:

1. Change/add/delete files in your current working directory as necessary. This is followed by a git add or git rm command.
2. “Commit” those changes; we usually put a message reflecting the change from step 1. e.g. git commit -m "I changed this file because it had a bug before."
3. You “push” those changes with git push to a git repository (e.g. hosted by BitBucket, GitHub, etc.); this is sort of like saying “save” that change.

Typically we use services like GitHub to HOST a repository. We then push our changes to that repository (or git pull from it) and all is good. However, a powerful concept to bear in mind is the ‘bare’ git repository. This is especially useful if you have code that’s private and should be strictly kept within your company/institution’s server, yet you don’t want people messing about too much with the master version of the code. The diagram below makes the bare git repository concept quite clear:

The bare repo acts as a “master” version of sorts, and every other “working”, or non-bare repo pushes/pulls changes out of it.

Let’s start with the easy stuff first. Every git repository (e.g. the one you’re working on in your machine) is a WORKING/NON-BARE git repository. This shows the files of your code as you’d expect, e.g. *.py or *.c files, etc. A BARE repository is a folder hosted by a server which only holds git OBJECTS. In it, you’ll never see a single .py or .c file, but a bunch of folders and text files that look nothing like your code. By the magic of git, these are translated back into .py or .c files (essentially a working copy of the repo) when you git clone it. Since the bare repo doesn’t contain any of the actual code, you can safely assume that no one can mess with the master version without going through the process of git add/commit/push, so everything is documented. To start a bare repo…

# Start up a bare repository in a server
user@server:$~  git init --bare name_to_repo.git

# Go back to your machine, then clone the bare repo
user@machine:$~ git clone user@server:/path/to/repo/name_to_repo.git

# This will clone an empty git repo onto your machine
cd name_to_repo
ls
# Nothing should come up.

touch README
echo "Hello world" >> README
git add README
git commit -m "Adding a README to initialise the bare repo."
git push origin master # This pushes to your origin, which is user@server:/path/to/repo/name_to_repo.git

If we check our folders, we will see the following:

user@machine:$~ ls name_to_repo/
README # only the README exists

user@server:$~ ls /path/to/repo/name_to_repo.git/
branches/ config description HEAD hooks/ info/ objects/ refs/

Magic! README is nowhere to be seen in the server’s repo. Again, this is because the repo is BARE, so the file we pushed isn’t stored as a plain file. But when we clone it on a different machine…

user@machine2:$~ git clone user@server:/path/to/repo/name_to_repo.git
user@machine2:$~ ls name_to_repo/
README
user@machine2:$~ cat name_to_repo/README
Hello world # magic!

This was a bit of a lightning tour, but hopefully you can see that the purpose of a bare repo is to let you host code as a “master version” without worrying about people touching the contents directly until they do a git clone. Once they clone and push changes, everything is documented via git, so you’ll know exactly what’s going on!

R or Python for data vis?

Python users: ever wanted to learn R?
R users: ever wanted to learn Python?
Check out: http://mathesaurus.sourceforge.net/r-numpy.html

Both languages are incredibly powerful for doing large-scale data analyses. They both have amazing data visualisation platforms, allowing you to make custom graphs very easily (e.g. with your own set of fonts, colour palette choices, etc.). Here’s a quick run-down of the good, the bad, and the ugly:

R

  • The good:
    • More established in statistical analyses; if you can’t find an R package for something, chances are it won’t be available in Python either.
    • Data frame parsing is fast and efficient, and incredibly easy to use (e.g. indexing specific rows, which is surprisingly hard in Pandas; see the short pandas sketch after this list)
    • If GUIs are your thing, there are programs like Rstudio that mesh the console, plotting, and code.
  • The bad:
    • For loops are traditionally slow, meaning that you have to use lots of apply commands (e.g. tapply, sapply).
  • The ugly:
    • Help documentation can be challenging to read and follow, leading to (potentially) a steep learning curve.
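For comparison with the data-frame point in the list above, here’s a minimal pandas sketch of row selection; the data frame and column names are invented purely for illustration:

import pandas as pd

# A toy data frame; the contents are purely illustrative
df = pd.DataFrame({'gene': ['A', 'B', 'C', 'D'],
                   'expression': [2.5, 0.1, 3.7, 1.2]})

# Select a row by integer position (roughly df[3, ] in R, which is 1-indexed)
print(df.iloc[[2]])

# Select rows by a boolean condition (roughly df[df$expression > 1, ] in R)
print(df.loc[df['expression'] > 1])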

Python

  • The good:
    • If you have existing code in Python (e.g. analysing protein sequences/structures), then you can plot straight away without having to save it as a separate CSV file for analysis, etc.
    • Lots of support for different packages such as NumPy, SciPy, Scikit Learn, etc., with good documentation and lots of help on forums (e.g. Stack Overflow)
    • It’s more useful for string manipulation (e.g. parsing out the ordering of IMGT numbering for antibodies, which goes from 111A->111B->112B->112A->112)
  • The bad:
    • Matplotlib, which is the go-to for data visualisation, has a pretty steep learning curve.
  • The ugly:
    • For statistical analyses, model building can have an unusual syntax. For example, building a linear model in R is incredibly easy (lm), whereas Python involves sklearn.linear_model.LinearRegression().fit. Otherwise you have to code up a lot of things yourself, which might not be practical.
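To make that last point concrete, here’s a minimal sketch of fitting a simple linear model with scikit-learn; the data are randomly generated just for the example:

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: y = 2x + noise
rng = np.random.RandomState(0)
x = rng.normal(size = (100, 1))
y = 2 * x[:, 0] + rng.normal(scale = 0.5, size = 100)

# Fit the model; compare with R's one-liner lm(y ~ x)
model = LinearRegression().fit(x, y)
print(model.coef_, model.intercept_)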

For me, Python wins because I find it’s much easier to create an analysis pipeline where you can go from raw data (e.g. PDB structures) to analysing it (e.g. with BioPython) then plotting custom graphics. Another big selling point is that Python packages have great documentation. Of course, there are libraries to do the analyses in R but the level of freedom, I find, is a bit more restricted, and R’s documentation means you’re often stuck interpreting what the package vignette is saying, rather than doing actual coding.
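As a small, hedged sketch of that kind of pipeline (parse a structure with BioPython, then plot a custom graphic straight away), assuming you have a local PDB file; ‘example.pdb’ below is just a placeholder name:

from Bio.PDB import PDBParser
import matplotlib.pyplot as plt

# Parse a local PDB file ('example.pdb' is a placeholder filename)
parser = PDBParser(QUIET = True)
structure = parser.get_structure('example', 'example.pdb')

# Collect the B-factor of every atom as a simple per-atom property
bfactors = [atom.get_bfactor() for atom in structure.get_atoms()]

# Plot a quick custom histogram, straight from the parsed structure
fig, ax = plt.subplots()
ax.hist(bfactors, bins = 30, color = '#134a8e', alpha = 0.7)
ax.set_xlabel('B-factor')
ax.set_ylabel('Count')
fig.savefig('bfactors.png', dpi = 300)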

As for plotting (because pretty graphs are where it’s at!), here’s a very simple implementation of plotting the densities of two normal distributions, along with their means and standard deviations.

import numpy as np
from matplotlib import rcParams

# plt.style.use('xkcd') # A cool feature of matplotlib is stylesheets, e.g. make your plots look XKCD-like

# change font to Arial
# you can change this to any TrueType font that you have in your machine
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']

import matplotlib.pyplot as plt
# Generate two sets of numbers from a normal distribution
# one with mean = 4 sd = 0.5, another with mean (loc) = 1 and sd (scale) = 2
randomSet = np.random.normal(loc = 4, scale = 0.5, size = 1000)
anotherRandom = np.random.normal(loc = 1, scale = 2, size = 1000)

# Define a Figure and Axes object using plt.subplots
# Axes object is where we do the actual plotting (i.e. draw the histogram)
# Figure object is used to configure the actual figure (e.g. the dimensions of the figure)
fig, ax = plt.subplots()

# Plot a histogram with custom-defined bins, a blue colour, and transparency of 0.4
# Plot the density rather than the raw count using density = True
# (older matplotlib versions used normed = True instead)
ax.hist(randomSet, bins = np.arange(-3, 6, 0.5), color = '#134a8e', alpha = 0.4, density = True)
ax.hist(anotherRandom, bins = np.arange(-3, 6, 0.5), color = '#e8291c', alpha = 0.4, density = True)

# Plot solid lines for the means
plt.axvline(np.mean(randomSet), color = 'blue')
plt.axvline(np.mean(anotherRandom), color = 'red')

# Plot dotted lines for the std devs
plt.axvline(np.mean(randomSet) - np.std(randomSet), linestyle = '--', color = 'blue')
plt.axvline(np.mean(randomSet) + np.std(randomSet), linestyle = '--', color = 'blue')

plt.axvline(np.mean(anotherRandom) - np.std(anotherRandom), linestyle = '--', color = 'red')
plt.axvline(np.mean(anotherRandom) + np.std(anotherRandom), linestyle = '--', color = 'red')

# Set the title, x- and y-axis labels
plt.title('A fancy plot')
ax.set_xlabel("Value of $x$") 
ax.set_ylabel("Density")

# Set the Figure's size as a 5in x 5in figure
fig.set_size_inches((5,5))

# Display or save the figure
plt.show() # or save with fig.savefig('my_plot.png', dpi = 300)

Figure made by matplotlib using the code above.

And the equivalent in R:

randomSet = rnorm(mean = 4, sd = 0.5, n = 1000)
anotherRandom = rnorm(mean = 1, sd = 2, n = 1000)

# Let's define a range to plot the histogram for binning;
limits = range(randomSet, anotherRandom)
lbound = limits[1] - (diff(limits) * 0.1)
ubound = limits[2] + (diff(limits) * 0.1)
# use freq = F to plot density
# in breaks, we define the bins of the histogram by providing a vector of values using seq
# xlab, ylab define axis labels; main sets the title
# rgb defines the colour in RGB values from 0-1, with the fourth digit setting transparency
# e.g. rgb(0,1,0,1) is R = 0, G = 1, B = 0, with a alpha of 1 (i.e. not transparent)
hist(randomSet, freq = F, breaks = seq(lbound, ubound, 0.5), col = rgb(0,0,1,0.4), xlab = 'Value of x', ylab = 'Density', main = 'A fancy plot')
# Use add = T to keep both histograms in one graph
# other parameters, such as breaks, etc., can be introduced here
hist(anotherRandom, freq = F, breaks = seq(lbound, ubound, 0.5), col = rgb(1,0,0,0.4), add = T)

# Plot vertical lines with v =
# lty = 2 generates a dashed line
abline(v = c(mean(randomSet), mean(anotherRandom)), col = c('blue', 'red'))

abline(v = c(mean(randomSet)-sd(randomSet), mean(randomSet)+sd(randomSet)), col = 'blue', lty = 2)
abline(v = c(mean(anotherRandom)-sd(anotherRandom), mean(anotherRandom)+sd(anotherRandom)), col = 'red', lty = 2)

Similar figure made using R code from above.

*Special thanks go out to Ali and Lyuba for helpful fixes to make the R code more efficient!

Colour wisely…

Colour – the attribute of an image that makes it acceptable or destined for the bin. Colour has a funny effect on us – it’s a double-edged sword that can greatly strengthen or weaken how data are represented. No one really talks about what makes a good way to colour an image or a graph, but it’s something most of us can agree on as pleasing or off-putting. There are two distinctive advantages to colouring a graph: it conveys both quantitative and categorical information very, very well. Thus, I will provide a brief overview (with code) on how colour can be used to display both quantitative and qualitative information. (*On the note of colours, Nick has previously discussed how colourblindness must be considered in visualising data…).

1. Colour conveys quantitative information.
A huge advantage of colour is that it can provide quantitative information, but this has to be done correctly. Here are three graphs showing the exact same information (the joint density of two normal distributions) and  we can see from the get-go which method is the best at representing the density of the two normal distributions:

Colouring the same graph using three different colour maps.

If you thought the middle one was the best one, I’d agree too. Why would I say that, despite it being grayscale and seemingly being the least colourful of them all?

  • Colour is not limited to hues (i.e. whether it’s red/white/blue/green, etc.); ‘colour’ is also achieved by saturation and brightness (i.e. how vivid a colour is, or how dark/light it is). In the case of the middle graph, we’re using brightness to indicate the variations in density, and this is a more intuitive display of variations in density. Another advantage of using shades as the means to portray colour is that it will most likely remain interpretable for colourblind readers.
  • Why does the graph on the right not work for this example? This is a case where we use a “sequential” colour map to convey the differences in density. Although the colour legend clarifies which colour belongs to which density bin, without it, it’s very difficult to tell what “red” means with respect to “yellow”. Obviously, with a colour map we know that red means high density and yellow lower, but without the legend the colours can be interpreted very differently, e.g. as categories rather than quantities. Basically, when you decide on a sequential colour map, its use must be handled well, and a colour map/legend is critical. Otherwise, we risk reading colours as categories rather than as continuous values.
  • Why is the left graph not working well? This is an example of a “diverging” colourmap.
    It’s somewhat clear that blue and red define two distinct quantities. Despite this, a major flaw of this colour map is the white in the middle. If the white were used as a “zero crossing” (i.e. where white means the value is 0), the diverging colour map would have been a more effective tool. However, matplotlib places white at the median value by default; this sadly creates the false impression of a 0 value, as our eyes tend to associate white with missing data, or ‘blanks’. Even if this isn’t your biggest beef with the divergent colour map, we run into the same problem as with the sequential colour map: blue and red don’t convey quantities unless specified, and the darkness/lightness of the blue and red are not linked well without the white in the middle. Thus, it doesn’t do either job very well in this graph. Basically, avoid divergent colourmaps unless your data genuinely span two sides of a meaningful midpoint (e.g. values from -1 to +1).
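If your data do span a meaningful midpoint, you can pin the white of a diverging colourmap to that value explicitly. A minimal sketch, assuming matplotlib 3.2 or later (where TwoSlopeNorm is available); the data here are invented for illustration:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

# Toy data that genuinely spans negative and positive values
data = np.random.normal(loc = 0.2, scale = 1.0, size = (50, 50))

# Pin white to 0 so the diverging colourmap has a true zero crossing
norm = TwoSlopeNorm(vmin = data.min(), vcenter = 0, vmax = data.max())

fig, ax = plt.subplots()
im = ax.imshow(data, cmap = 'bwr', norm = norm)
fig.colorbar(im, ax = ax)
fig.savefig('diverging_zero.png', dpi = 300)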

2. Colour displays categorical information.
An obvious use of colour is the ability to categorise our data. Anything as simple as a line chart with multiple lines will tell you that colour is terrific at distinguishing groups. This time, notice that the different colour schemes have very different effects:

Colour schemes can instantly differentiate groups.

Notice how this time around, the greyscale method (right) was clearly the losing choice. To begin with, it’s hard to tell the difference between persons A, B and C; there’s almost a temptation to think that person A morphs into person C! On the left, however, with a distinct set of colours, persons A, B, and C stand out as three separate colours. Although a set of three distinct colours is a good thing, bear in mind the following…

  • Make sure the colours don’t clash with respect to lightness! Try to pick something that’s distinct (blue/red/green), rather than two colours which can be interpreted as two shades of the same colour (red/pink, blue/cyan, etc.)
  • Pick a palette to choose from – a rainbow is typically the best choice just because it’s the most natural, but feel free to choose your own set of hues. Also include white and black as necessary, so long as it’s clear that they are also part of the palette. White in particular would only work if you have a black outline.
  • Keep in mind that colour blind readers can have trouble with certain colour combinations (red/yellow/green), and it’s best to steer toward colourblind-friendly palettes.

Here’s the code used to generate the figures above:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as sp
from mpl_toolkits.axes_grid1 import make_axes_locatable

### Part 1
# Sample 250 points
np.random.seed(30)
x = np.random.normal(size = 250)
np.random.seed(71)
y = np.random.normal(size = 250)

# Assume the limits of a standard normal are at -3, 3
pmin, pmax = -3, 3

# Create a meshgrid that is 250x250
xgrid, ygrid = np.mgrid[pmin:pmax:250j, pmin:pmax:250j]
pts = np.vstack([xgrid.ravel(), ygrid.ravel()]) # ravel unwinds xgrid from a 250x250 matrix into a 62500x1 array

data = np.vstack([x,y])
kernel = sp.gaussian_kde(data)
density = np.reshape(kernel(pts).T, xgrid.shape) # Evaluate the density for each point in pts, then reshape back to a 250x250 matrix

greys = plt.cm.Greys
bwr = plt.cm.bwr
jet = plt.cm.jet

# Create 3 contour plots
fig, ax = plt.subplots(1,3)
g0 = ax[0].contourf(xgrid, ygrid, density, cmap = bwr)
c0 = ax[0].contour(xgrid, ygrid, density, colors = 'k') # Create contour lines, all black
g1 = ax[1].contourf(xgrid, ygrid, density, cmap = greys)
c1 = ax[1].contour(xgrid, ygrid, density, colors = 'k') # Create contour lines, all black
g2 = ax[2].contourf(xgrid, ygrid, density, cmap = jet)
c2 = ax[2].contour(xgrid, ygrid, density, colors = 'k') # Create contour lines, all black

# Divide each axis then place a colourbar next to it
div0 = make_axes_locatable(ax[0])
cax0 = div0.append_axes('right', size = '10%', pad = 0.1) # Append a new axes object
cb0  = plt.colorbar(g0, cax = cax0)

div1 = make_axes_locatable(ax[1])
cax1 = div1.append_axes('right', size = '10%', pad = 0.1)
cb1  = plt.colorbar(g1, cax = cax1)

div2 = make_axes_locatable(ax[2])
cax2 = div2.append_axes('right', size = '10%', pad = 0.1)
cb2  = plt.colorbar(g2, cax = cax2)

fig.set_size_inches((15,5))
plt.tight_layout()
plt.savefig('normals.png', dpi = 300)
plt.close('all')

### Part 2
years = np.arange(1999, 2017, 1)
np.random.seed(20)
progress1 = np.random.randint(low=500, high =600, size = len(years))
np.random.seed(30)
progress2 = np.random.randint(low=500, high =600, size = len(years))
np.random.seed(40)
progress3 = np.random.randint(low=500, high =600, size = len(years))

fig, ax = plt.subplots(1,2)
ax[0].plot(years, progress1, label = 'Person A', c = '#348ABD')
ax[0].plot(years, progress2, label = 'Person B', c = '#00de00')
ax[0].plot(years, progress3, label = 'Person C', c = '#A60628')
ax[0].set_xlabel("Years")
ax[0].set_ylabel("Progress")
ax[0].legend()

ax[1].plot(years, progress1, label = 'Person A', c = 'black')
ax[1].plot(years, progress2, label = 'Person B', c = 'gray')
ax[1].plot(years, progress3, label = 'Person C', c = '#3c3c3c')
ax[1].set_xlabel("Years")
ax[1].set_ylabel("Progress")
ax[1].legend()

fig.set_size_inches((10,5))
plt.tight_layout()
plt.savefig('colourgrps.png', dpi = 300)
plt.close('all')

Visualising Biological Data, Pt. 2

Here’s a little quick round-up of some of the tools/algorithms that I’ve seen in VIZBI, which I believe can be useful for many. For more details, I strongly advise you check out the posters page (vizbi.org/Posters/2016). There were a few that I would’ve liked to re-visit, but the webapps weren’t available (e.g. MeshCloud from the Human Genome Center, Tokyo), so maybe I’ll come back with a part 3. Here are my top five:

1. Autodesk’s Protein Viewer* (shout-outs to @_merrywang on Twitter)
As a structural bioinformatician, I’m going to be really biased here, and say that Autodesk’s Molecule viewer was the best tool showcased at the conference. It not only has the capacity to visualise millions of molecules from the PDB (or your own PDB files), but also allows you to annotate and share, effectively, “snapshots” of your workspace for collaboration (see this if you want to know what I mean). AND it’s free! It’s not the fastest viewer on the planet, nor the easiest thing to use, but it is effective.

2. Vectorbase
Not related to protein structures, but a really interesting visualisation that shows information on, for example, insecticide resistance. With mosquitoes being such a huge part of today’s news, this kind of information is vital for fighting and understanding the distribution of insects across the globe.

3. Phandango
This is a genome browser which, from a one-man effort, could be a game-changer. The UI needs a little bit of work I think, but otherwise, a really valuable tool for crunching lots of genomic data in a quick fashion.

4. i-PV Circos
This is a neat circular browser that helps users view protein sequences in a circularised format. With this visualisation format becoming more popular as the days go by, I think this has the potential to be a leader in the field. At the moment the website’s a bit dark and not the most user-friendly, but some of the core functionality (e.g. highlighting residues and association of domains) is a real plus!

5. Storyline visualisation
Possibly my favourite/eye-opener tool from the entire conference. Storyline visualisation helps users understand how things progress over time – this has been used for movie plot data (e.g. Star Wars character and plot progression), but the general concept can be useful for biological phenomena – for example, how do cells in diseased states progress over time? How does that compare to healthy states? Can we also monitor protein dynamics using a similar concept? I think the fact that it gives a very intuitive, big-picture overview of micro-scale dynamics is the reason why I’ve been so interested in Kwan-Liu Ma’s work, and I recommend checking out his website/publications list to gain insight on improving data visualisation (in particular, network visualisation when you want to avoid hairballs!)

The list isn’t ranked in any way, and do check these out! There were other tools I would’ve really liked to review (e.g. Minardo, made by David Ma @frostickle on Twitter), but I suppose I can go on and on. At the end of the day, visualisation tools like these are meant to be quick, and help us to not only EXPLORE our data, but to EXPLAIN it too. I think we’re incredibly fortunate to have some amazing minds out there who are willing to not only create these tools, but also make them available for all.

Visualising Biological Data, Pt. 1

Hey Blopig Readers,

I had the privilege to go down to Heidelberg last week to go and see some stunning posters and artwork. I really recommend that you check some of the posters out. In particular, the “Green Fluorescent Protein” poster stuck out as my favourite. Also, if you’re a real Twitter geek, check out #Vizbi for some more tweets throughout the week.

So what did the conference entail? As a very blunt summary, it was really an eclectic collection of researchers from around the globe who showcased their research with very neat visual media. While I was hoping for a conference that gave an overview of the principles that dictate how to visualise proteins, genes, etc., it wasn’t like that at all! Although I was initially a bit disappointed, it turned out to be better – one of the key themes reiterated throughout the conference was that visualisation depends on the application!

From the week, these are the top 5 lessons I walked away with, and I hope you can integrate this into your own visualisation:

  1. There is no pre-defined, accepted way of visualising data. Basically, every visualisation is tailored, has a specific purpose, so don’t try to fit your graph into something pretty that you’ve seen in another paper. We’re encouraged to get insight from others, but not necessarily replicate a graph.
  2. KISS (Keep it simple, stupid!) Occam’s razor, KISS, whatever you want to call it – keep things simple. Making an overly complicated visualisation may backfire.
  3. Remember your colours. Colour is probably one of the most powerful tools in our arsenal for making the most of a visualisation. Don’t ignore them, and make sure that they’re clean, separate, and interpretable — even to those who are colour-blind!
  4. Visualisation is a means of exploration and explanation. Make lots, and lots of prototypes of data visuals. It will not only help you explore the underlying patterns in your data, but help you to develop the skills in explaining your data.
  5. Don’t forget the people. Basically, a visualisation is really for a specific target audience, not for a machine. What you’re doing is encouraging connections, sharing knowledge, and creating an experience so that people can learn from your data.

I’ll come back in a few weeks’ time after reviewing some tools, stay tuned!

We can model everything, right…?

First, happy new year to all our Blopig fans, and we all hope 2016 will be awesome!

A couple of months ago, I was covering this article by Shalom Rackovsky. The big question that jumps out of the paper is, has modelling reached its limits? Or, in other words, can bioinformatics techniques be used to model every protein? The author argues that protein structures have an inherent level of variability that cannot be fully captured by computational methods; thus, he raises some scepticism on what modelling can achieve. This isn’t entirely news; competitions such as CASP show that there’s still lots to work on in this field. This article takes a very interesting spin when Rackovsky uses a theoretical basis to justify his claim.

For a pair of proteins P and Q, Rackovsky defines their relationship depending on their sequence and structural identity. If P and Q share a high level of sequence identity but have little structural resemblance, P and Q are considered to be a conformational switch. Conversely, if P and Q share a low level of sequence identity but have high structural resemblance, they are considered to be remote homologues.

Case of a conformational switch – two DNAPs with 100% seq identity but 5.3 Å RMSD.

Haemoglobins are ‘remote homologues’ – despite 19% sequence identity, these two proteins have 1.9 Å RMSD.

From here on comes the complex maths. Rackovsky’s work here (and in prior papers) assumes that there are periodicities in the properties of proteins, and thus applies Fourier transforms to compare protein sequences and structures.

In the case of comparing protein sequences, instead of treating sequences as a string of letters, each protein sequence is characterised by an N x 10 matrix, where N is the number of amino acids in protein P (or Q) and each amino acid is described by 10 biophysical properties. The matrix then undergoes a Fourier transform (FT), and the resulting sine and cosine coefficients for proteins P and Q are used to calculate the Euclidean distance between them.
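As a rough, illustrative sketch of this kind of Fourier-based comparison (not Rackovsky’s actual implementation: here I use a single property, Kyte-Doolittle hydropathy, instead of the 10 biophysical properties, and assume the two sequences have equal length):

import numpy as np

# Kyte-Doolittle hydropathy values, one of many possible biophysical properties
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def fourier_profile(seq):
    # Encode the sequence as a numeric property profile, then take its FT
    profile = np.array([KD[aa] for aa in seq])
    coeffs = np.fft.rfft(profile)
    # Concatenate the cosine (real) and sine (imaginary) coefficients
    return np.concatenate([coeffs.real, coeffs.imag])

def sequence_distance(seq_p, seq_q):
    # Euclidean distance between the two Fourier coefficient vectors
    return np.linalg.norm(fourier_profile(seq_p) - fourier_profile(seq_q))

# Toy example with two made-up sequences of equal length
print(sequence_distance('ACDEFGHIKLMNPQRSTVWY', 'ACDEFGHIKLMNPQRSTVWF'))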

When comparing structures, proteins are initially truncated into length-L fragments, and the dihedral angles, bond lengths and bond angles for each fragment are collected into a matrix. The distribution of matrices allows us to project proteins onto a pre-parameterised principal components space. The Euclidean distance between the newly-projected proteins is then used to quantify protein structural similarity.

In both the sequence and structure cases, the distances are normalised and centred around (0,0) by calculating the average distance between a protein and its M nearest neighbours, and then adjusting by the global average. Effectively, if a protein has an average sequence or structure distance, it will tend toward (0,0).

The author uses a dataset of 12,000 proteins from the CATH set to generate the following diagram; the Y-axis represents sequence similarity and the X-axis structural similarity. Since these axes are scaled to the mean, the closer a protein is to 0, the closer it is to the global average sequence or structure distance.

Rackovsky’s plot of sequence similarity (Y-axis) against structural similarity (X-axis) for the CATH dataset.

The four quadrants: along the diagonal is a typical linear relationship (greater sequence identity = more structural similarity). The lower-right quadrant represents proteins with LOW sequence similarity yet HIGH structural similarity. In the upper-left quadrant, proteins have LOW structural similarity but HIGH sequence similarity.

Rackovsky argues that, while remote homologues and conformational switches seem like rare phenomena, together they account for approximately 50% of his dataset. Although he does account for the high density of proteins near (0,0), the paper does not clearly address the meaning of these new metrics. In other words, the author does not translate these values into something we’re more familiar with (e.g. RMSD for structural distance, and sequence identity % for sequence distance). Although the whole idea is that his methods are supposed to be alignment-free, it’s still difficult to relate them to what we already use as the gold standard in traditional protein structure prediction problems. Also, note that the structure distance spans between -0.1 and 0.1 units whereas the sequence distance spans between -0.3 and 0.5. The differences in scale are also not covered – i.e., is a difference of 0.01 units an expected value for protein structure distance, and why are the jumps in protein structure distance so much smaller than the jumps in sequence space?

The author makes more interesting observations in the dataset (e.g. α/β mixed proteins are more tolerant to mutations in comparison to α- or β-only proteins) but the observations are not discussed in depth. If α/β-mixed proteins are indeed more resilient to mutations, why is this the case? Conversely, if small mutations change α- or β-only proteins’ structures to make new folds, having any speculation on the underlying mechanism (e.g. maybe α-only proteins are only sensitive to radically different amino acid substitutions, such as ALA->ARG) will only help our prediction methods. Overall I had the impression that the author was a bit too pessimistic about what modelling can achieve. Though we definitely cannot model all proteins that are out there at present, I believe the surge of new sources of data (e.g. cryo-EM structures) will provide an alternative inference route for better prediction methods in the future.

ISMB wrap-up (it was coming, we insist…!)

So ISMB 2015 seems a bit far away from now (just under 2 months!), but Dublin was an adventure, filled with lots of Guinness, Guinness, and … Guinness. Oh and of course, science (credits to Eleanor for the science of Guinness video)! We definitely had lots of that, and you can see some of our pictures from the week too (credits to Sam; https://goo.gl/photos/2qm9CPbfHtoC3VfH9)

Here are some key pieces of work that stood out to each of us here at OPIG.

Claire:
Jianlin Cheng, from the University of Missouri, presented his research into model quality assessment methods – ways of choosing the model that is closest to the native structure from a set of decoys. Their method (MULTICOM) is a consensus method, which calculates an average rank from 14 different quality assessment methods. By combining this average rank with clustering and model combination to select five top models for a target, their method produces good results – in CASP11, the group were ranked third when considering the top-ranked models and second when considering the top five.
Alistair:
Accounting for the cell cycle in single cell RNA-seq
The current ubiquity of RNA-seq throughout academia speaks volumes to the strength of the methodology. It provides a transcript-wide measure of a cell’s expression at the point of cell lysis, from which one can investigate gene fusion, SNPs and changes in expression, to name only a few possibilities. Traditionally, these measurements are made on a cell culture, and as such the expression levels, and derived results, are based on averages taken over a number of cells. Recent advances have allowed the resolution to increase to the point where measurements can now instead be made on single isolated cells. With this increase in capability, it should now be possible to measure and identify subpopulations within a given culture. However, the inherent variability of expression, such as that caused by the cell cycle, often overshadows any change that could be attributed to these subpopulations. If one could characterise this variability, then it could be removed from the data, and perhaps these subpopulations would then be elucidated.
Oliver Stegle gave a great presentation on doing exactly this for the cell cycle. They modelled the different phases as a set of latent variables, such that they are inferred directly from the data (rather than merely observed). Via this model, they estimated that upwards of 30% of the inherent variability could be accounted for, and hence subtracted from the data. Applying such a technique to a culture of T cells, they were able to identify the different stages of differentiation of naive T cells into T helper 2 cells. Crucially, these would have been obscured had the cell cycle not been accounted for. Given this success with accounting for just the cell cycle, Stegle suggested that their technique can be expanded upon to elucidate other sources of gene expression heterogeneity while making it easier to identify these cellular subpopulations.
Jaro:
Dr. Julia Shifman from the Hebrew University of Jerusalem studies protein-protein interactions. In her 3DSIG presentation she focused on the presence of “cold-spots” in protein sequences, where in silico mutations to several different amino acids improve the binding affinity. Such cold-spots are often observed at the periphery of the complex, where no interaction is observed.
Malte: 
Alex Cornish from Imperial College London presented his work on the structural difference between cell-type specific interaction networks. To generate these, he weighted protein-protein interaction network edges by cell-type specific gene expression data from the FANTOM5 project. Using these cell-type specific networks, he finds that it is possible to associate specific networks with specific diseases based on the distances between disease-associated genes in the networks. Furthermore, these disease – cell type associations can be used to assess the similarity between diseases.
Jin:
Barry Grant presented an overview of the research activity in his group — namely nucleotide switch proteins (e.g. GTPases, such as Ras and Kinesin). In the case of Kinesin, the group used simple statistical methods such as principal components analysis to draw inferences between conformation and functional states. The group then used correlated motions to construct a community network that describes how certain groups of residues behave in certain conformational states.
Sam:
Discovery of CREBBP Bromodomain Inhibitors by High-Throughput Docking and Hit Optimization Guided by Molecular Dynamics
Min Xu, Andrea Unzue, Jing Dong, Dimitrios Spiliotopoulos, Cristina Nevado, and Amedeo Caflisch Paper link
In this paper, MD simulations were used to confirm the binding modes found by in silico docking and to guide the chemical synthesis of hit optimisation. In one example, three binding modes were observed during MD, which helped to identify favourable electrostatic interactions that improved selectivity. For another compound, a favourable polar interaction in the binding site was observed during MD, which helped to increase its potency from micro- to nanomolar. Thus, given a robust in silico screening and docking approach, it seems that MD simulations can be a useful addition to confirm binding modes and inspire derivatives that might have been overlooked in static structures.
Eleanor:
Bumps and traffic lights along the translation of secretory proteins
Shelley Mahlab, Michal Linial
In her talk, Michal described how a theoretical measure of translation speed, tAI, shows the differences between the translation speeds of proteins targeted to different locations. Proteins with a signal peptide (either secreted or transmembrane) have significantly slower codons than proteins without, over approximately the first 30 codons, perhaps allowing more time for binding to the signal recognition particle. Transmembrane proteins had a lower predicted speed overall, which may be important for their insertion and correct folding.

Modelling antibodies, from Sequence, to Structure…

Antibody modelling has come a long way in the past 5 years. The Antibody Modelling Assessment (AMA) competitions (effectively an antibody version of CASP) have shown that most antibody modelling methods are capable of modelling the antibody variable fragment (Fv) at ≤ 1.5 Å RMSD. Despite this feat, AMA-II provided two important lessons:

1. We can still improve our modelling of the framework region and the canonical CDRs.

Stage two of the AMA-II competition showed that CDR-H3 modelling improves once the correct crystal structure is provided (bar the H3 loop, of course). In addition, some of the canonical CDRs (e.g. L1) were modelled poorly, and some of the framework loops had also been poorly modelled.

2. We can’t treat orientation as if it doesn’t exist.

Many pipelines are either vague about how they predict the orientation, or have no explicit explanation on how the orientation will be predicted for the model structure. Given how important the orientation can be toward the antibody’s binding mode (Fera et al., 2014), it’s clear that this part of the pipeline has to be re-visited more carefully.

In addition to these lessons, one question remains:

What do we do with these models?

No pipeline, as far as we are aware, comments on what we should do beyond creating the model. What are its implications? Can we even use it for experiments, or as a potential therapeutic in the long term? In light of these lessons and this blaring question, we developed our own method.

Before we begin, how does modelling work?

In my mind, most, if not all, pipelines follow this generic paradigm:

Our method, ABodyBuilder, also follows this 4-step workflow:

  1. We choose the template structure based on sequence identity; below a threshold, we predict the structures of the heavy and light chains separately.
  2. In the event that we use structures from separate antibodies, we predict the orientation from the structure with the highest global sequence identity.
  3. We model the loops using FREAD (Choi, Deane, 2011).
  4. We graft the side chains in using SCWRL.

Following the modelling procedure, our method also annotates the accuracy of the model in a probabilistic context — i.e., an estimated probability that a particular region is modelled at a given RMSD threshold. Moreover, we also flag up any issues that an experimentalist can run into should they ever decide to model the antibody.

The accuracy estimation is a data-driven estimate of model quality. Many pipelines end up giving you just a model – but there’s no way of determining model accuracy until the native structure is determined. This is particularly problematic for CDR-H3, where RMSDs between models and native structures can exceed 4.0 Å, so it would be incredibly useful to have an a priori, expected estimate of model accuracy.

Furthermore, by commenting on motifs that can conflict with antibody development, we aim to offer a convenient solution for users when they are considering in vitro experiments with their target antibody. Ultimately, ABodyBuilder is designed with the user in mind, making an easy-to-use, informative software that facilitates antibody modelling for novel applications.

Predicting Antibody Affinities – Progress?

Antibodies are an invaluable class of therapeutic molecules – they have the capacity to bind any molecule (Martin and Thornton, 1996), and this comes from an antibody’s remarkable diversity (Georgiou et al., 2014). In particular, antibodies can bind their targets with high affinity, and most drug design campaigns seek to ‘mature’ an antibody, i.e. increase the antibody’s affinity for its target. However, the process of antibody maturation is, in experimental terms, time-consuming and expensive – with 6 CDRs (as in a typical antibody) of 10 residues each, and any of the 20 amino acids allowed at each CDR position, there are 20^60 possible variants to test (and that’s considering the CDRs alone!)

So hold on, what is affinity exactly? Affinity represents the strength of binding, and it’s calculated as either a ratio of concentrations or a ratio of rate constants; in terms of the dissociation constant,

K_D = [Ab][Ag] / [AbAg] = k_off / k_on

where a lower K_D means tighter binding. In the simplest affinity maturation protocol, three steps are compulsory:

  1. Mutate the antibody’s structure correctly
  2. Assess the impact of mutation on KD
  3. Select and repeat.

For the past year, we have focused on step 2 – affinity prediction. This is a fundamental aspect of the affinity maturation pipeline, needed to rationalise ‘good’ and ‘bad’ mutations in the context of maturing an antibody. We developed a statistical potential, CAPTAIN; essentially, the idea is to gather the contact statistics represented in antibody-antigen complexes, and use this information to predict affinities.
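To give a flavour of what a contact-based statistical potential looks like in general (this is a toy sketch of the idea, not CAPTAIN itself; the contact counts and reference frequencies are invented):

import math
from collections import Counter

# Toy counts of residue-type contacts observed across antibody-antigen interfaces
observed = Counter({('TYR', 'ASP'): 120, ('SER', 'LYS'): 45, ('TRP', 'GLU'): 80})
total_observed = sum(observed.values())

# Toy reference (expected) contact frequencies, e.g. from shuffled interfaces
expected = {('TYR', 'ASP'): 0.30, ('SER', 'LYS'): 0.40, ('TRP', 'GLU'): 0.30}

def contact_score(pair):
    # Log-odds of seeing this contact pair versus the reference expectation
    p_obs = observed[pair] / total_observed
    return math.log(p_obs / expected[pair])

def score_interface(contacts):
    # Sum the per-contact scores over all contacts in a complex's interface
    return sum(contact_score(pair) for pair in contacts)

# Score a made-up interface consisting of three contacts
print(score_interface([('TYR', 'ASP'), ('TRP', 'GLU'), ('TYR', 'ASP')]))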

But why use contact information? Does it provide anything useful? Based on analyses of the interfaces of antibody-antigen complexes in comparison to general protein-protein interfaces, we definitely see that antibodies rely on a different binding mode compared to general protein-protein complexes, and other studies have confirmed this notion (Ramaraj et al., 2012; Kunik and Ofran, 2013; Krawczyk et al., 2013).

For our methodology, we trained on general protein-protein complexes (as most scoring functions do!) and specifically on antibody-protein complexes from the PDB. For our test set of antibody-protein complexes, we outperformed 16 other published methods, though for our antibody-peptide test set, we were one of the worst performers. We found that other published methods predict antibody-protein affinities poorly, though they make better predictions for antibody-peptide binding affinities. Ultimately, we achieve stronger performance as we use a more appropriate training set (antibody-antigen complexes) for the problem in hand (predicting antibody-antigen affinities). Our correlations were by no means ideal, and we believe that there are other aspects of antibody structures that must be used for improving affinity prediction, such as conformational entropy (Haidar et al., 2013) and VH-VL domain orientation (Dunbar et al., 2013; Fera et al., 2014).

What’s clear though, is that using the right knowledge base is key to improving predictions for solving greater problems like affinity maturation. At present, most scoring functions are trained on general proteins, but this ‘one-fits-all’ approach has been subject to criticism (Ross et al., 2013). Our work supports the idea that scoring functions should be tailored specifically for the problem in hand.

 

Computational Antibody Affinity Maturation

In this week’s journal club, we reviewed a paper by Lippow et al. in Nature Biotechnology, which features a computational pipeline capable of improving antibody (Ab) binding affinity by up to 140-fold. The paper itself discusses 4 test-case Abs (D44.1, cetuximab, 4-4-20, bevacizumab) and uses changes in electrostatic energy to identify favourable mutations. Up to the point when this paper was published back in 2007, computational antibody design was an (almost) unexplored field of research – except for a study by Clark et al. in 2006, no one had done anything like the work presented in this paper.

The idea behind the paper is to identify certain positions within the Ab structure for mutation and hopefully find an Ab with a higher binding affinity.

Pipeline

Briefly speaking, the group generated mutant Ab-antigen (Ag) complexes using a series of algorithms (dead-end elimination and A*), which were then scored by the group’s energy function to identify favourable mutations. Lippow et al. used the electrostatics term of their binding affinity prediction to estimate the effects of mutations on an Ab’s binding affinity. In other words, instead of examining their entire scoring function, which includes terms such as the van der Waals energy, the group only used changes in the electrostatic energy term as an indicator for proposing mutations. Overall, in 2 of the 4 test cases (D44.1 & cetuximab), the proposed mutations were experimentally tested to confirm their computational design pipeline – a brief overview of these two case studies is given below.
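As a side note on that search step, here’s a hedged sketch of the textbook dead-end elimination criterion (Desmet’s original form, not necessarily the exact variant used in the paper); the energy tables below are invented toy numbers:

# Toy self energies E(i_r) for two rotamers at position i (invented numbers)
E_self = {'i_r1': 1.0, 'i_r2': 4.0}

# Toy pairwise energies E(i_r, j_s) against the rotamers at the other positions j
E_pair = {
    'i_r1': {'j1': [0.2, 0.5], 'j2': [0.1, 0.3]},
    'i_r2': {'j1': [0.4, 0.6], 'j2': [0.2, 0.4]},
}

def can_eliminate(r, t):
    # Desmet's dead-end elimination criterion: eliminate rotamer r if its best
    # possible energy is still worse than the worst possible energy of rotamer t
    best_r = E_self[r] + sum(min(E_pair[r][j]) for j in E_pair[r])
    worst_t = E_self[t] + sum(max(E_pair[t][j]) for j in E_pair[t])
    return best_r > worst_t

print(can_eliminate('i_r2', 'i_r1'))  # True: i_r2 can never beat i_r1 here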

Results

In the case of the D44.1 anti-lysozyme Ab, the group proposed 9 single mutations by their electrostatics-based calculation method; 6/9 single mutants were confirmed to be beneficial (i.e., the mutant had an increased binding affinity). The beneficial single mutants were combined, ultimately leading to a quadruple mutant structure with a 100-fold improvement in affinity. The quadruple mutant was then subjected to a second round of computer-guided affinity maturation, leading to a new variant with six mutations (effectively a 140-fold improvement over the wild-type Ab). This case study was a solid testimony to the validity of their method; since anti-lysozyme Abs are often used as model systems, these results demonstrated that their design pipeline had taken, in principle, a suitable approach to maturing Abs in silico.

The second case study with cetuximab was arguably the more interesting result. Like the D44.1 case above, mutations were proposed to increase the Ab’s binding affinity on the basis of the changes in electrostatics. Although the newly-designed triple mutant only showed a 10-fold improvement over its wild-type counterpart, the group showed that their protocols can work for therapeutically-relevant Abs. The cetuximab example was a perfect complement to the previous case study — it demonstrated the practical implications of the method, and how this pipeline could potentially be used to mature existing Abs within the clinic today.

Effectively, the group suggested that mutations that either introduce hydrophobicity or a net charge at the binding interface tend to increase an Ab’s binding affinity. These conclusions shouldn’t come as a huge surprise, but it is remarkable that the group reached them with just one term from their energy function.

Conclusions

Effectively, the paper set off a whole new series of possibilities and helped us to widen our horizons. The paper was by no means perfect, especially with respect to predicting the precise binding affinities of mutants – much of this error can be boiled down to the modelling stage of their pipeline. However, the paper showed that computational affinity maturation is not just a dream – in fact, it’s perfectly doable, and immediately applicable. Interestingly, Lippow et al.’s manipulation of an Ab’s electrostatics seems to be a valid approach, with recent publications on Ab maturation showing that introducing charged residues can enhance binding affinity (e.g. Kiyoshi et al., 2014).

More importantly, the paper was a beautiful showcase of how computational analyses could inform the decision making process in an in vitro framework, and I believe it exemplified how we should approach our problems in bioinformatics. We should not think of proteins as mere text files and numbers, but realise that they are living systems, and we’re not yet at a point where we fully understand how proteins behave. This shouldn’t discourage us from research; instead, it should give us the incentive to take things more slowly, and develop a method/product that could be used to solve greater, pragmatic problems.