Category Archives: Group Meetings

What we discuss during cake at our Tuesday afternoon group meetings

Journal Club: Comments on Three X-ray Crystal Structure Papers

One of the fundamental weaknesses of X-ray crystallography, when used for the solution of macromolecular structures, is that the constructed models are based on the subjective interpretation of the electron density by the crystallographer.

This can lead to poor or simply incorrect models, as discussed by Stanfield et al. in their recent paper “Comment on Three X-ray Crystal Structure Papers” (link below). Here, they assert that the basis of several papers by Dr. Salunke and his coworkers, a series of antibody-peptide complexes, is fundamentally flawed. They argue that the experimental electron density does not support the presence of the modelled peptides: there is no significant positive OMIT density for the peptides when they are removed from the model, and the quality of the constructed models is poor, with unreasonably large B-factors.

Link to paper: http://www.jimmunol.org/content/196/2/521.1.


Firstly, a quick recap on crystallographic maps and how they are used. Two map types are principally used in macromolecular crystallography: composite maps and difference maps.

The composite map is used to approximate the electron density of the crystal. It consists of the modelled density subtracted from twice the observed density, and contains correction factors (m, D) to minimise phase bias. The factor of two on the observed map compensates for the poor phases, which cause un-modelled features to appear only weakly in the density. It is universally represented as a blue mesh:

[Figure: composite map]

The difference map is the modelled density subtracted from the observed density, and is used to identify un-modelled areas of the electron density. It contains the same correction factors to compensate for phase bias. It is universally represented as a green mesh for positive values, and a red mesh for negative values. The green and red meshes are always contoured at the same absolute values, e.g. ±1 or ±1.4.

[Figure: difference map]
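For reference, the two map types correspond to the standard sigma-A-weighted map coefficients, with m the figure of merit and D the scale factor mentioned above, and Fo and Fc the observed and calculated structure-factor amplitudes:

composite map: 2mFo − DFc
difference map: mFo − DFc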

The problem of identifying features to model in the electron density is the point where the subjectivity of the crystallographer is most influential. For ligands, this means identifying blobs that are “significant”, and that match the shape of the molecule to be modelled.

When a crystallographer is actively searching for the presence of a binding molecule, in this case a peptide, it is easy to misinterpret density as the molecule you are searching for. You have to be disciplined and highly critical to avoid contouring to levels that are too low, and modelling into density that does not really match the model. This is what has happened in the series of structures criticised by Stanfield et al.


Specific concerns with the structures of Dr Salunke et al.

1: Contouring difference maps at only positive values (and colouring them blue?)

The first questionable thing that Dr Salunke et al do is to present a difference map contoured at only positive values as evidence for the bound peptide. This oddity is compounded by colouring the resulting map blue, which is unusual for a difference map.

[Figure: the difference density presented as evidence for the bound peptide]

2: Contouring difference maps to low values

Salunke et al claim that the image above shows adequate evidence for the binding of the peptide in a difference map contoured at 1.7σ.

When contouring difference maps at such low levels, weak features will indeed become visible, if they are present; but in solvent channels, where the crystal contains an ensemble of disordered states, there is no way to interpret the density as an atomic model. Hence, a difference map contoured at 1.7σ will show blobs throughout the solvent channels of the crystal.

This fact, in itself, does not prove that the model is wrong, but it makes it highly likely that the model is a result of observation bias. This observation bias occurs because the authors were looking for evidence of the binding peptide, and so inspected the density at the binding site. This has led to the over-interpretation of noisy and meaningless density as the peptide.

The reason that the 3σ limit is used to identify crystallographic features in difference maps is that it picks out only strong un-modelled features, which are unlikely to be noise or a disordered feature.

More worryingly, the model does not actually fit the density very well.

3: Poor Model Quality

Lastly, the quality of the modelled peptides is very poor. The B-factors of the ligands are much higher than those of the surrounding protein side-chains. This is symptomatic of a modelled feature that is not present in the data: the refinement program tries to “erase” the model by inflating its B-factors. Again, this does not prove that the model is wrong, but it highlights the poor quality of the model.

In addition, the Ramachandran outliers in the peptides are extreme, with values in the 0th percentile of empirical distributions. This means that the peptide conformation is highly strained, and therefore highly unlikely.


Combining all of the evidence above, as presented in the article by Stanfield et al, there is little doubt that the models presented by Salunke et al are incorrect. Individual failings in one area could perhaps be explained, but such a range of errors across so many quality metrics cannot.

Strachey Lecture – “Artificial Intelligence and the Future” by Dr. Demis Hassabis

For this week’s group meeting, some of us had the pleasure of attending a very interesting lecture by Dr. Demis Hassabis, founder of Deep Mind. Personally, I found the lecture quite thought-provoking and left the venue with a plethora of ideas sizzling in my brain. Since one of the best ways to end mental sizzlingness is by writing things down, I volunteered to write this week’s blog post in order to say my piece about yesterday’s Strachey Lecture.

Dr. Hassabis began by listing some very audacious goals: “To solve intelligence” and “To use it to make a better world”. At the end of his talk, someone in the audience asked him if he thought it was possible to achieve these goals (“to fully replicate the brain”), to which he responded with a simple “there is nothing that tells us that we can’t”.

After his bold introductory statement, Dr. Hassabis pressed on. For the first part of his lecture, he engaged the audience with videos and concepts of a reinforcement learning agent trained to learn and play several ATARI games. I was particularly impressed with the notion that the same agent could be used to achieve a professional level of gaming for 49 different games. Some of the videos are quite impressive and can be seen here or here. Suffice it to say that their algorithm was much better at playing ATARI than I’ll ever be. It was also rather impressive to learn that all the algorithm received as input was the game’s score and the pixels on the screen.

Dr. Hassabis mentioned in his lecture that games provide the ideal training ground for any form of AI. He presented several reasons for this, but the one that stuck with me was the notion that games quite often present a very simplistic and clear score. Your goal in a game is usually very well defined. You help the frog cross the road or you defeat some aliens for points. However, what I perceive to be the greatest challenge for AI is the fact that real world problems do not come with such a clear-cut, incremental score.

For instance, let us relate back to my particular scientific question: protein structure prediction. It has been suggested that much simpler algorithms such as Simulated Annealing are able to model protein structures as long as we have a perfect scoring system [Yang and Zhou, 2015]. The issue is, currently, the only way we have to define a perfect score is to use the very structure we are trying to predict (which kinda takes the whole prediction part out of the story).

Real world problems are hard. I am sure this is no news to anyone, including the scientists at Deep Mind.

During the second part of his talk, Dr. Hassabis focused on AlphaGo. AlphaGo is Deep Mind’s effort at mastering the ancient game of Go. What appealed to me in this part of the talk is the fact that Go has such a large number of possible configurations that devising an incremental score is no simple task (sounds familiar?). Yet, somehow, Deep Mind scientists were able to train their algorithm to a point where it defeated a professional Go player.

Their next challenge? In two weeks, AlphaGo will face the professional Go player with the highest number of titles in the last decade (the best player in the world?). This reminds me of when Garry Kasparov faced Deep Blue. After the talk, my fellow OPIG colleagues also seemed to be pretty excited about the outcome of the match (man vs. machine).

Dr. Hassabis finished by saying that his career goal would be to develop AI that is capable of helping scientists tackle the big problems. From what I gather (and from my extremely biased, protein-structure-prediction point of view), AI will only be able to achieve this goal once it is capable of coming up with its own scores for the games we present to it (hence developing some form of impetus). Regardless of how far we are from achieving this, at least we have a reason to cheer for AlphaGo in a couple of weeks (because hey, if you are trying to make our lives easier with clever AI, I am all for it).

Identifying basic building blocks/motifs of networks

[Figure: the optimal subgraph decomposition of an electronic circuit]

There are many verbal descriptions of network motifs: characteristic connectivity patterns, over-represented subgraphs, recurrent circuits, basic building blocks of networks, to name just a few. However, as with most concepts in network science, network motifs are perhaps best explained in terms of empirical observations. For instance, the most basic example of a network motif is the motif consisting of three mutually connected nodes, that is: a triangle. Many real-world networks, ranging from the internet to social networks to biological networks, contain many more triangles than one would expect if they were wired randomly. In certain cases there exist good explanations for the large number of triangles found in the network. For instance, the presence of many triangles in friendship networks simply tells us that we are more likely to be friends with the friends of our friends. In biological networks, triangles and other motifs are believed to contribute to the overall function of the network by performing modular tasks such as information processing, and are therefore believed to be favoured by natural selection.

The predominant definition of network motifs is due to Milo et al. [1] and defines network motifs on the basis of how surprising their frequency in the network is when compared to a randomized version of the network. The randomized version is usually taken to be the configuration model, i.e. the ensemble of all networks that have the same degree distribution as the original network. Following this definition, motifs are identified by comparing their counts in the original network with a large sample of this null model. The approach of Milo et al. formalizes the concept of network motifs as over-represented connectivity patterns. However, the results of the method are highly dependent on the choice of null model.
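As a toy illustration of this definition (not the code used in [1]), the following Python sketch compares the triangle count of a network with counts obtained from degree-preserving randomizations; the function names and the number of randomizations are my own arbitrary choices:

import networkx as nx
import numpy as np

def triangle_count(G):
    # nx.triangles counts triangles per node, so each triangle is counted three times.
    return sum(nx.triangles(G).values()) // 3

def triangle_zscore(G, n_random=100, seed=0):
    # Compare the observed count with a null ensemble generated by
    # degree-preserving double edge swaps (an approximation to the configuration model).
    rng = np.random.default_rng(seed)
    observed = triangle_count(G)
    null_counts = []
    for _ in range(n_random):
        R = G.copy()
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=1000 * R.number_of_edges(),
                            seed=int(rng.integers(1_000_000_000)))
        null_counts.append(triangle_count(R))
    mu, sigma = np.mean(null_counts), np.std(null_counts)
    return (observed - mu) / sigma if sigma > 0 else float("inf")

# A friendship-style network should give a large positive z-score.
print(triangle_zscore(nx.karate_club_graph(), n_random=50))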

In my talk I presented an alternative approach to motif analysis [2] that seeks to formalize network motifs from the perspective of simple building blocks. The approach is based on finding an optimal decomposition of the network into subgraphs. Here, subgraph decompositions are defined as subgraph covers: sets of subgraphs such that every edge of the network is contained in at least one subgraph in the cover. It follows from this definition that a subgraph cover is a representation of the network, in the sense that the network can be recovered fully by simply taking the union of the edge sets of the subgraphs in the cover. In fact, many network representations, including edge lists, adjacency lists, bipartite representations and power graphs, fall into the category of subgraph covers. For instance, the edge-list representation is equivalent to the cover consisting of all single-edge subgraphs of the network, and bipartite representations are simply covers consisting of cliques of various sizes.

Given that there are many competing ways of representing a network as a subgraph cover, the question arises of how one picks one cover over the others. To address this problem we consider the total information of subgraph covers as a measure of optimality. The total information is an information measure introduced by Gell-Mann and Lloyd [3] which, given a model M for a certain entity e, is defined to be the sum of the entropy and the effective complexity of M. The entropy measures the information required to describe e given M, while the effective complexity measures the amount of information required to specify M, given by its algorithmic information content. The total information also provides a framework for model selection: given two or more models for the same entity, one picks the one with the lowest total information, and if two models have the same total information, one picks the one with the lower effective complexity, i.e. the simpler one. This essentially tells us how to trade off goodness of fit against model complexity.
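In symbols (my notation, not necessarily that of [3]): for an entity e and a model M,

Σ(M) = H(e | M) + K(M),

where H(e | M) is the entropy, the information needed to describe e given M, and K(M) is the effective complexity of M, i.e. its algorithmic information content. Model selection then amounts to minimizing Σ, breaking ties in favour of smaller K.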

In the context of subgraph covers, the entropy of a cover C corresponds to the information required to give the positions of the subgraphs in C, given the different motifs that occur in C and their respective frequencies. The effective complexity of C, on the other hand, corresponds to the information required to describe the set of motifs occurring in the cover together with their respective frequencies. While the entropy of a subgraph cover can be calculated analytically, its effective complexity is not computable, due to the halting problem. In practice, however, one can use approximations in the form of upper bounds.

Following the total information approach, we define an optimal subgraph cover of a network G to be a subgraph cover that minimizes the total information, and the network motifs of G to be the motifs/connectivity patterns that occur in such an optimal cover.
The problem of finding an optimal cover turns out to be computationally rather challenging. Besides the usual difficulties associated with counting subgraphs (the subgraph isomorphism problem, which is NP-complete) and classifying subgraphs (the graph isomorphism problem, whose complexity is unknown), the problem is a non-linear set-covering problem and therefore NP-hard. Consequently, we construct a greedy heuristic for the problem.
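Purely for intuition, and emphatically not the total-information heuristic of [2], the toy Python sketch below builds a subgraph cover greedily by covering edges with maximal cliques, at each step picking the clique that covers the most still-uncovered edges:

import itertools
import networkx as nx

def greedy_clique_cover(G):
    # Cover every edge of G with maximal cliques; a valid (but not optimal) subgraph cover.
    uncovered = {frozenset(e) for e in G.edges()}
    cliques = [set(c) for c in nx.find_cliques(G)]  # maximal cliques of G
    cover = []
    while uncovered:
        def gain(c):
            return sum(1 for u, v in itertools.combinations(c, 2)
                       if frozenset((u, v)) in uncovered)
        best = max(cliques, key=gain)
        cover.append(best)
        for u, v in itertools.combinations(best, 2):
            uncovered.discard(frozenset((u, v)))
    return cover

G = nx.karate_club_graph()
cover = greedy_clique_cover(G)
print(len(cover), "subgraphs cover", G.number_of_edges(), "edges")

Taking the union of the edge sets of the subgraphs in the returned cover recovers the original network, as required by the definition above.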

When applied to real-world networks, the method finds very similar motifs in networks representing similar systems. Moreover, the counts of the motifs in networks of the same type scale approximately with network size. Consequently, the method can also be used to classify networks according to their subgraph structure.

 

References:

[1] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, Network Motifs: Simple Building Blocks of Complex Networks, Science 298, 824 (2002)

[2] A. E. Wegner, Subgraph Covers: An Information-Theoretic Approach to Motif Analysis in Networks, Phys. Rev. X 4, 041026 (2014)

[3] M. Gell-Mann and S. Lloyd, Information Measures, Effective Complexity, and Total Information, Complexity 2, 44 (1996).

Novelty in Drug Discovery

The primary aim of drug discovery is to find novel molecules that are active against a target of therapeutic relevance and that are not covered by any existing patents (1). Due to the increasing cost of research and development in the later stages of drug discovery, and the increasing number of drug candidates failing at these stages, there is a desire to select the most diverse set of active molecules at the earliest stage of drug discovery, to maximise the chance of finding a molecule that can be optimised into a successful drug (2,3). Computational methods that are both accurate and efficient are one approach to this problem and can augment experimental approaches in deciding which molecules to take forward.

But what do we mean by a “novel” compound? When prioritising molecules for synthesis, which characteristics do we want to be different? It was once common to select subsets of hits to maximise chemical diversity, in order to cover as much chemical space as possible (4). These novel lead molecules could subsequently be optimised, the idea being that maximising the coverage of chemical space would maximise the chance of finding a molecule that could be optimised successfully. More recently, however, the focus has shifted to “biodiversity”: diversity in terms of how the molecule interacts with the protein (1). Activity cliffs, pairs of molecules that are structurally and chemically similar but have a large difference in potency, indicate that chemical diversity may not be the best descriptor for identifying molecules that interact with the target in sufficiently diverse ways. The molecules to be taken forward should be both active against the target and diverse in terms of how they interact with it, and of which parts of the binding site they contact.
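As a concrete, deliberately simple example of the chemical-diversity view, the Tanimoto similarity of Morgan fingerprints is one common way of quantifying how chemically different two hits are; the SMILES strings below are arbitrary placeholders rather than molecules from the cited studies:

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Two arbitrary example molecules.
smiles = ["CCOc1ccc2nc(S(N)(=O)=O)sc2c1", "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

# Low similarity = chemically diverse pair; a diverse subset maximises such distances.
print(DataStructs.TanimotoSimilarity(fps[0], fps[1]))

Note, though, that by the biodiversity argument above, two compounds with a low Tanimoto similarity may still make essentially the same interactions with the target.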

This raises two interesting ideas. The first is prioritising molecules that form the same interactions as molecules known to bind but are chemically different: scaffold hopping (5). The second is prioritising molecules that potentially form different interactions to known binders. I hope to explore this in the coming months as part of my research.

References

(1) J. K. Medina-Franco et al., Expert Opin. Drug Discov., 2014, 9, 151-156.

(2) A. S. A. Roy, Project FDA Report, 2012, 5.

(3) J. Avorn, New England Journ. of Med., 2015, 372, 1877-1879.

(4)  P. Willet, Journ. Comp. Bio., 1999, 6, 447-457.

(5) H. Zhao, Drug Discov. Today,  2007, 12, 149–155.

New toys for OPIG

OPIG recently acquired 55 additional computers, all of the same make and model; they are of a decent specification (for 2015), each with a quad-core i5 processor and 8GB of RAM. But what to do with them? Cluster computing time!

Along with a couple of support servers, this provides us with 228 computation cores, 440GB of RAM and >40TB of storage. Whilst this would be a tremendous specification for a single computer, parallel computing on a cluster is a significantly different beast.

This kind of architecture and parallelism really lends itself to certain classes of problems, especially those that have:

  • Independent data
  • Parameter sweeps
  • Multiple runs with different random seeds
  • Dirty great data sets
  • Or can be snipped up and require low inter-processor communication

With a single processor and a single core, a computer looks like this:
[Figure: a single processor with a single core]

These days, when multiple processor cores are integrated onto a single die, the cores are normally independent but share a last-level cache and both can access the same memory. This gives a layout similar to the following:
[Figure: two cores sharing a last-level cache and memory]

Add more cores or more processors to a single computer and you start to tessellate the above. Each pair of cores has access to its own shared cache and its own memory, and can also access the memory attached to any other pair of cores. However, accessing memory physically attached to other cores comes at the cost of increased latency.
[Figure: four-core layout with memory attached to each pair of cores]

Cluster computing, on the other hand, rarely exhibits this flat memory architecture, as no node can directly access another node’s memory. Instead we use a Message Passing Interface (MPI) to pass messages between nodes. Though it takes a little time to wrap your head around working this way, effectively every processor simultaneously runs the exact same piece of code, the sole difference being the “Rank” of the execution core. A simple example of MPI is getting every core to greet us with the traditional “Hello World” and tell us its rank. A single execution with mpirun simultaneously executes the code on multiple cores:

$mpirun -n 4 ./helloworld_mpi
Hello, world, from processor 3 of 4
Hello, world, from processor 1 of 4
Hello, world, from processor 2 of 4
Hello, world, from processor 0 of 4

Note that the responses aren’t in order; some cores may have been busy (for example, handling the operating system) and so couldn’t run their code immediately. Another simple example of this would be a sort. We could, for example, tell every processor to take several million values, find the smallest value, and pass that number in a message to whichever core has “Rank 0”. The core at Rank 0 will then sort the much smaller set of values. Below is the kind of speedup which was achieved by simply splitting the same problem over 4 physically independent computers of the cluster.

[Figure: speedup achieved by splitting the problem over 4 cluster nodes]
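The original examples were presumably written in C against the MPI API; the sketch below uses Python and mpi4py purely to illustrate the same rank-0 gathering pattern described above (the file name and data are made up):

# save as min_gather.py and run with:  mpirun -n 4 python min_gather.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

print(f"Hello, world, from processor {rank} of {size}")

# Every rank generates its own chunk of data and finds a local minimum...
local_values = np.random.default_rng(rank).random(1_000_000)
local_min = local_values.min()

# ...then the local minima are gathered on rank 0, which works with a much smaller set.
all_mins = comm.gather(local_min, root=0)
if rank == 0:
    print("sorted local minima:", sorted(all_mins))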

As not everyone in the group will have the time or inclination to MPI-ify their code, there is also HTCondor. HTCondor is a workload management system for compute-intensive jobs which allows jobs to be queued, scheduled, assigned priorities and distributed from a single head node to processing nodes, with the results copied back on demand. The server OPIG provides the job distribution system, whilst SkyOctopus provides shared storage on every computation node. Should a required package not be available on all of the computation nodes, SkyOctopus can reach down and remotely modify the software installations on the lesser computation nodes.

Loop Model Selection

As I have talked about in previous blog posts (here and here, if interested!), the majority of my research so far has focussed on improving our ability to generate loop decoys, with a particular focus on the H3 loop of antibodies. The loop modelling software that I have been developing, Sphinx, is a hybrid of two other methods – FREAD, a knowledge-based method, and our own ab initio method. By using this hybrid approach we are able to produce a decoy set that is enriched with near-native structures. However, while the ability to produce accurate loop conformations is a major advantage, it is by no means the full story – how do we know which of our candidate loop models to choose?

[Figure: ranking of loop decoys]

In order to choose which model is the best, a method is required that scores each decoy, thereby producing a ranked list with the conformation predicted to be best at the top. There are two main approaches to this problem – physics-based force fields and statistical potentials.

Force fields are functions used to calculate the potential energy of a structure. They normally include terms for bonded interactions, such as bond lengths, bond angles and dihedral angles, and for non-bonded interactions, such as electrostatics and van der Waals forces. In principle they can be very accurate; however, they have certain drawbacks. Since some terms have a very steep dependence on interatomic distance (in particular the non-bonded terms), very slight conformational differences can have a huge effect on the score. A loop conformation that is very close to the native could therefore be ranked poorly. In addition, solvation terms have to be used; this is especially important in loop modelling applications, since loop regions are generally found on the surface of proteins, where they are exposed to solvent molecules.
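A typical functional form (written here in generic notation, rather than that of any particular force field) makes the steep distance dependence of the non-bonded terms explicit:

E = \sum_{\text{bonds}} k_b (r - r_0)^2
  + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
  + \sum_{\text{dihedrals}} \tfrac{V_n}{2}\,[1 + \cos(n\phi - \gamma)]
  + \sum_{i<j} \left[ 4\varepsilon_{ij}\left(\left(\tfrac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\tfrac{\sigma_{ij}}{r_{ij}}\right)^{6}\right) + \tfrac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right]

The r^{-12} Lennard-Jones repulsion in the last sum is what makes a score so sensitive to small errors in atomic positions near contact distances.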

The alternatives to physics-based force fields are statistical potentials. In this case, a score is achieved by comparing the model structure (i.e. its interatomic distances and contacts) to experimentally-derived structures. As a very simple example, if the distance between the backbone N and Cα of a residue in a loop model is 2Å, but this distance has not been observed in known structures, we can assume that a distance of 2Å is energetically unfavourable, and so we can tell that this model is unlikely to be close to the native structure. Advantages of statistical potentials over force fields are their relative ‘smoothness’ (i.e. small variations in structure do not affect the score as much), and the fact that all interactions do not need to be completely understood – if examples of these interactions have been observed before, they will automatically be taken into account.
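Here is a minimal sketch of the idea behind a distance-dependent statistical potential: an inverse-Boltzmann-style score built from observed distance frequencies. It is a toy, not calRW, DFIRE, DOPE or SoapLoop, and the reference state is deliberately naive:

import numpy as np

def build_potential(observed_distances, bins=np.arange(0.0, 15.5, 0.5), pseudocount=1.0):
    # Pseudo-energy per distance bin: E = -ln(frequency), with a pseudocount so that
    # distances never seen in known structures are penalised rather than infinite.
    counts, edges = np.histogram(observed_distances, bins=bins)
    probs = (counts + pseudocount) / (counts.sum() + pseudocount * len(counts))
    return edges, -np.log(probs)

def score_model(model_distances, edges, energies):
    # Sum the pseudo-energies of the bins that the model's distances fall into.
    idx = np.clip(np.digitize(model_distances, edges) - 1, 0, len(energies) - 1)
    return float(energies[idx].sum())

# Made-up "observed" distances purely for illustration.
observed = np.random.default_rng(0).normal(5.0, 1.0, 10_000)
edges, energies = build_potential(observed)
print(score_model(np.array([4.8, 5.1, 2.0]), edges, energies))  # the 2.0 A distance scores badly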

I have tested several statistical potentials (including calRW, DFIRE, DOPE and SoapLoop) by using them to rank the loop decoys generated by our hybrid method, Sphinx. Unfortunately, none of them were consistently able to choose the best decoy out of the set. The average RMSD (across 70 general loop targets) of the top-ranked decoy ranged between 2.7Å and 4.74Å for the different methods – the average RMSD of the actual best decoy was much lower at 1.32Å. Other researchers have also found loop ranking challenging – for example, in the latest Antibody Modelling Assessment (AMA-II), ranking was seen as an area for significant improvement. In fact, model selection is seen as such an issue that protein structure prediction competitions like AMA-II and CASP allow the participants to submit more than one model. Loop model selection is therefore an unsolved problem, which must be investigated further to enable reliable predictions to be made.

Network Pharmacology

The dominant paradigm in drug discovery has been one of finding small molecules (or more recently, biologics) that bind selectively to one target of therapeutic interest. This reductionist approach conveniently ignores the fact that many drugs do, in fact, bind to multiple targets. Indeed, systems biology is uncovering an unsettling picture for comfortable reductionists: the so-called ‘magic bullet’ of Paul Ehrlich, a single compound that binds to a single target, may be less effective than a compound with multiple targets. This new approach—network pharmacology—offers new ways to improve drug efficacy, to rescue orphan drugs, re-purpose existing drugs, predict targets, and predict side-effects.

Building on work Stuart Armstrong and I did at InhibOx, a spinout from the University of Oxford’s Chemistry Department, and inspired by the work of Shoichet et al. (2007), Álvaro Cortes-Cabrera and I took our ElectroShape method, designed for ultra-fast ligand-based virtual screening (Armstrong et al., 2010 & 2011), and built a new way of exploring the relationships between drug targets (Cortes-Cabrera et al., 2013). Ligand-based virtual screening is predicated on the molecular similarity principle: similar chemical compounds have similar properties (see, e.g., Johnson & Maggiora, 1990). ElectroShape built on the earlier pioneering USR (Ultra-fast Shape Recognition) work of Pedro Ballester and Prof. W. Graham Richards at Oxford (Ballester & Richards, 2007).

Our new approach addressed two inherent limitations of the network pharmacology approaches available at the time:

  • Chemical similarity is calculated on the basis of the chemical topology of the small molecule; and
  • Structural information about the macromolecular target is neglected.

Our method addressed these issues by taking into account 3D information from both the ligand and the target.

The approach involved comparing each set of ligands known to bind to a protein with the equivalent sets of ligands of all other known drug targets in DrugBank. DrugBank is a tremendous “bioinformatics and cheminformatics resource that combines detailed drug (i.e. chemical, pharmacological and pharmaceutical) data with comprehensive drug target (i.e. sequence, structure, and pathway) information.” This analysis generated a network of related proteins, connected by the similarity of the sets of ligands known to bind to them.

We looked at two different kinds of ligand similarity metrics: the inverse Manhattan distance of our ElectroShape descriptors, and 2D Morgan fingerprints, calculated using the wonderful open-source cheminformatics toolkit RDKit from Greg Landrum. Morgan fingerprints use connectivity information similar to that used for the well-known ECFP family of fingerprints, which had been used in the SEA method of Keiser et al. We also looked at the problem from the receptor side, comparing the active sites of the proteins. These complementary approaches produced networks that shared only a minimal fraction (0.36% to 6.80%) of nodes: while the direct comparison of target ligand-binding sites could give valuable information for achieving some kind of target specificity, ligand-based networks may contribute information about unexpected interactions for side-effect prediction and polypharmacological profile optimization.
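On the ligand side, the similarity between two shape descriptors reduces to an inverse Manhattan distance of the kind popularised by USR; the sketch below uses generic descriptor vectors and my own normalisation, which may differ in detail from the published ElectroShape implementation:

import numpy as np

def inverse_manhattan_similarity(a, b):
    # 1 for identical descriptors, decreasing towards 0 as the mean absolute
    # difference between descriptor components grows.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 / (1.0 + np.abs(a - b).mean())

# Made-up descriptor vectors for two ligands.
ligand_a = np.random.default_rng(1).normal(size=15)
ligand_b = ligand_a + np.random.default_rng(2).normal(scale=0.1, size=15)
print(inverse_manhattan_similarity(ligand_a, ligand_b))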

Our new target-fishing approach was able to predict drug adverse effects, build polypharmacology profiles, and relate targets from two complementary viewpoints: ligand-based and target-based networks. We used the DUD and WOMBAT benchmark sets for on-target validation, and the results were directly comparable to those obtained using other state-of-the-art target-fishing approaches. Off-target validation was performed using a limited set of non-annotated secondary targets for already known drugs. Comparison of the predicted adverse effects with data contained in the SIDER 2 database showed good specificity and reasonable selectivity. All of these features were implemented in a user-friendly web interface that: (i) can be queried for both polypharmacology profiles and adverse effects, (ii) links to related targets in ChEMBLdb in the three networks (2D, 4D ligand and 3D receptor), and (iii) displays the 2D structure of already annotated drugs.

[Figure: screenshot of the ElectroShape polypharmacology web server]

References

Armstrong, M. S., G. M. Morris, P. W. Finn, R. Sharma, L. Moretti, R. I. Cooper and W. G. Richards (2010). “ElectroShape: fast molecular similarity calculations incorporating shape, chirality and electrostatics.” J Comput Aided Mol Des, 24(9): 789-801. 10.1007/s10822-010-9374-0.

Armstrong, M. S., P. W. Finn, G. M. Morris and W. G. Richards (2011). “Improving the accuracy of ultrafast ligand-based screening: incorporating lipophilicity into ElectroShape as an extra dimension.” J Comput Aided Mol Des, 25(8): 785-790. 10.1007/s10822-011-9463-8.

Ballester, P. J. and W. G. Richards (2007). “Ultrafast shape recognition to search compound databases for similar molecular shapes.” J Comput Chem, 28(10): 1711-1723. 10.1002/jcc.20681.

Cortes-Cabrera, A., G. M. Morris, P. W. Finn, A. Morreale and F. Gago (2013). “Comparison of ultra-fast 2D and 3D ligand and target descriptors for side effect prediction and network analysis in polypharmacology.” Br J Pharmacol, 170(3): 557-567. 10.1111/bph.12294.

Johnson, A. M., & G. M. Maggiora (1990). “Concepts and Applications of Molecular Similarity.” New York: John Wiley & Sons.

Landrum, G. (2011). “RDKit: Open-source cheminformatics.” from http://www.rdkit.org.

Keiser, M. J., B. L. Roth, B. N. Armbruster, P. Ernsberger, J. J. Irwin and B. K. Shoichet (2007). “Relating protein pharmacology by ligand chemistry.” Nat Biotechnol, 25(2): 197-206. 10.1038/nbt1284.

Wishart, D. S., C. Knox, A. C. Guo, S. Shrivastava, M. Hassanali, P. Stothard, Z. Chang and J. Woolsey (2006). “DrugBank: a comprehensive resource for in silico drug discovery and exploration.” Nucleic Acids Res, 34(Database issue): D668-672. 10.1093/nar/gkj067.

Co-translational insertion and folding of membrane proteins

The alpha-helical bundle is the most common type of fold for membrane proteins. Their diverse functions include transport, signalling, and catalysis. While structure determination is much more difficult for membrane proteins than it is for soluble proteins, it is accelerating and there are now 586 unique proteins in the database of Membrane Proteins of Known 3D Structure. However, we still have quite a poor understanding of how membrane proteins fold. There is increasing evidence that it is more complicated than the two-stage model proposed in 1990 by Popot and Engelman.

The machinery that inserts most alpha-helical membrane proteins is the Sec apparatus. In prokaryotes, it is located in the plasma membrane, while eukaryotic Sec is found in the ER. Sec itself is an alpha-helical bundle in the shape of a pore, and its structure is able both to allow peptides to pass fully across the membrane, and also to open laterally to insert transmembrane helices into the membrane. In both cases, this occurs co-translationally, with translation halted by the signal recognition particle until the ribosome is associated with the Sec complex.

If helices are inserted during the process of translation, does folding only begin after translation is finished? On what timescale are these folding processes occurring? There is evidence that a hairpin of two transmembrane helices forms on a timescale of milliseconds in vitro. Are helices already interacting during translation to form components of the native structure? It has also been suggested that helices may insert into the membrane in pairs, via the Sec apparatus.

There are still many aspects of the insertion process which are not fully understood, and even the topology of an alpha-helical membrane protein can be affected by the last part of the protein to be translated. I am starting to investigate some of these questions by using computational tools to learn more about the membrane proteins whose structures have already been solved.

Next generation sequencing of paired heavy and light chain sequences

At the last meeting before Christmas I covered the article by DeKosky et al. describing a new methodology, developed by the authors, for sequencing the paired VH-VL repertoire.

In recent years there has been an exponential growth in the number of available antibody sequences, caused mainly by the development of cheap and high-throughput Next Generation Sequencing (NGS) technologies. This trend has led to the creation of several publicly available antibody sequence databases, such as the DIGIT database and the abYsis database, containing hundreds of thousands of unpaired light-chain and heavy-chain sequences from over 100 species. Nevertheless, sequencing of the paired VH-VL repertoire has remained a challenge, with the available techniques suffering from low throughput (<700 cells) and high cost. In contrast, the method developed by DeKosky et al. allows for relatively cheap paired sequencing of most of the 10^6 B cells contained within a typical 10-ml blood draw.

The workflow is as follows: first, the isolated cells, dissolved in water, and magnetic poly(dT) beads mixed with cell lysis buffer are pushed through a narrow opening into a rapidly moving annular oil phase, resulting in a thin jet that coalesces into droplets, in such a way that each droplet has a very low chance of having a cell inside it. This ensures that the vast majority of droplets that do contain cells contain only one cell each. Next, cell lysis occurs within the droplets and the mRNA fragments coding for the antibody chains attach to the poly(dT) beads. Following that, the mRNA fragments are recovered and linkage PCR is used to generate 850 bp cDNA fragments for NGS.

To analyse the accuracy of their methodology, the authors sequenced paired CDR-H3 – CDR-L3 sequences from blood samples obtained from three different human donors, clustering the reads at 96% identity, filtering by read quality and removing sequences with fewer than two reads. Overall, this resulted in ~200,000 paired CDR-H3 – CDR-L3 sequences. The authors found that the pairing accuracy of their methodology was ~98%.

The article also contained some bioinformatics analysis of the data. The authors first analysed CDR-L3 sequences that tend to pair up with many diverse CDR-H3 sequences, and asked whether such “promiscuous” CDR-L3s are also “public”, i.e. promiscuous and common to all three donors. Their results show that, out of the 50 most common promiscuous CDR-L3s, 49 are also public. The results also show that the promiscuous CDR-L3s carry little to no modification, being very close to the germline sequence.

[Figure: illustration of the sequencing pipeline]

The sequencing data also contained examples of allelic inclusion, where one B cell expresses two B-cell receptors (almost always one VH gene and two distinct VL genes). It was found that ~0.5% of all analysed B cells showed allelic inclusion.

Finally, the authors looked at the occurrence of traits commonly associated with broadly neutralizing antibodies (bNAbs), which are produced to fight rapidly mutating pathogens (such as the influenza virus). These traits were a short (<6 aa) CDR-L3 and a long (11 – 18 aa) CDR-H3. In total, the authors found 31 sequences with these features, suggesting that bNAb-like antibodies can be found in the repertoire of healthy donors.
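Once the paired sequences are in hand, screening for such traits is straightforward; the sketch below is a hypothetical illustration, with made-up records and field names rather than the actual DeKosky et al. data format:

# Hypothetical paired records; in reality these would be parsed from the NGS output.
pairs = [
    {"cdrh3": "ARDGGYYYYGMDV", "cdrl3": "QQYNS"},      # long CDR-H3, short CDR-L3
    {"cdrh3": "ARDLG", "cdrl3": "QQYNSYPLT"},
]

def bnab_like(pair, max_l3=6, min_h3=11, max_h3=18):
    # Short CDR-L3 (<6 aa) and long CDR-H3 (11-18 aa), as described above.
    return len(pair["cdrl3"]) < max_l3 and min_h3 <= len(pair["cdrh3"]) <= max_h3

candidates = [p for p in pairs if bnab_like(p)]
print(len(candidates), "bNAb-like pairs")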

Overall, this article presents a very interesting and promising method that should allow for large-scale sequencing of paired VH-VL sequences.

Racing along transcripts: Correlating ribosome profiling and protein structure.

A long long time ago, in a galaxy far away, I gave a presentation about the state of my research to the group (can you tell I’m excited for the new Star Wars!). Since then, little has changed due to my absenteeism from Oxford, which means ((un)luckily) that the state of my work is by and large the same. Now, my work focusses on the effect that the translation speed of a given mRNA sequence can have on the eventual protein product, specifically through the phenomenon of cotranslational folding. I’ve discussed the evidence behind this in prior posts (see here and here), though I find the video below a good reminder of why we can’t always just go as fast as we like.

So, given that translation speed is important, how do we in fact measure it? Traditional measures, such as tAI and CAI, infer it from the codon bias within the genome or by comparing the counts of tRNA genes in a genome. However, while these have been shown to relate somewhat to speed, they are still purely theoretical in their construction. An alternative is ribosome profiling, which I’ve discussed in depth before (see here), and which provides an actual experimental measure of the time taken to translate each codon in an mRNA sequence. In my latest work, I have compiled ribosome profiling data from 7 different experiments, covering 6 diverse organisms, and processed them all in the same fashion from their respective raw data. Combined, the dataset gives ribosome profiling “speed” values for approximately 25 thousand genes across the various organisms.

[Figure: correlation of the ribosome profiling speeds with traditional measures]

Our first task with this dataset was to see how well the traditional measures compare to the ribosome profiling data. For this, we calculated the correlation against CAI, MinMax, nTE and tAI, with the results presented in the figure above. We find that essentially no measure adequately captures the entirety of the translation speed: some measures fail completely, others obviously capture some part of the behaviour, and some others even predict the reverse! Given that no measure captured the behaviour adequately, we realised that existing results relating translation speed to protein structure may, in fact, be wrong. We therefore decided to recreate the analysis using our dataset, to either validate or correct the original observations. To do this we combined our ribosome profiling dataset with matching PDB structures, such that we had the sequence, the structure and the translation speed for approximately 4,500 genes across the 6 species. While I won’t go into details here (see upcoming paper – touch wood), we analysed the relationship between the speed and the solvent accessibility, the secondary structure, and linker regions. We found striking differences from the observations in the literature, which I’ll be excited to share in the near future.
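For what it is worth, the per-gene comparison described above boils down to a rank correlation between the profiling-derived speeds and each theoretical measure. The sketch below uses synthetic, made-up numbers (not our dataset) purely to show the calculation:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical per-gene values: a profiling-derived speed and four theoretical measures.
profiling_speed = rng.random(5000)
measures = {
    "CAI": 0.3 * profiling_speed + rng.random(5000),    # weakly related
    "MinMax": rng.random(5000),                         # unrelated
    "nTE": -0.3 * profiling_speed + rng.random(5000),   # inversely related
    "tAI": 0.5 * profiling_speed + rng.random(5000),
}

for name, values in measures.items():
    rho, pval = spearmanr(profiling_speed, values)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {pval:.2g})")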