
Strachey Lecture – “Artificial Intelligence and the Future” by Dr. Demis Hassabis

For this week’s group meeting, some of us had the pleasure of attending a very interesting lecture by Dr. Demis Hassabis, founder of DeepMind. Personally, I found the lecture quite thought-provoking and left the venue with a plethora of ideas sizzling in my brain. Since one of the best ways to end mental sizzling is by writing things down, I volunteered to write this week’s blog post in order to say my piece about yesterday’s Strachey Lecture.

Dr. Hassabis began by listing some very audacious goals: “To solve intelligence” and “To use it to make a better world”. At the end of his talk, someone in the audience asked him whether he thought it was possible to achieve these goals (“to fully replicate the brain”), to which he responded with a simple “there is nothing that tells us that we can’t”.

After his bold introductory statement, Dr. Hassabis pressed on. For the first part of his lecture, he engaged the audience with videos and concepts of a reinforcement learning agent trained to learn and play several Atari games. I was particularly impressed by the notion that the same agent could be used to achieve a professional level of gaming for 49 different games. Some of the videos are quite impressive and can be seen here or here. Suffice it to say that their algorithm is much better at playing Atari than I’ll ever be. It was also rather impressive to learn that all the algorithm received as input was the game’s score and the pixels on the screen.

Dr. Hassabis mentioned in his lecture that games provide the ideal training ground for any form of AI. He presented several reasons for this, but the one that stuck with me was that games quite often present a very simple and clear score. Your goal in a game is usually very well defined: you help the frog cross the road, or you defeat some aliens for points. However, what I perceive to be the greatest challenge for AI is the fact that real-world problems do not come with such a clear-cut, incremental score.

For instance, let us relate this back to my particular scientific question: protein structure prediction. It has been suggested that much simpler algorithms, such as simulated annealing, are able to model protein structures as long as we have a perfect scoring system [Yang and Zhou, 2015]. The issue is that, currently, the only way we have to define a perfect score is to use the very structure we are trying to predict (which kinda takes the whole prediction part out of the story).
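
To make the scoring point concrete, here is a generic simulated annealing skeleton (a sketch of my own, not code from Yang and Zhou or anyone’s production method). The algorithm itself is trivial; everything hinges on the score function we hand it. If the only “perfect” score we can write down is the distance to the native structure, the prediction becomes circular.

import math
import random

def simulated_annealing(score, initial, perturb, n_steps=10000, t0=1.0):
	#Generic simulated annealing: minimise `score` starting from `initial`,
	#proposing candidate moves with `perturb`. Lower scores are better.
	x, e = initial, score(initial)
	best, best_e = x, e
	for step in range(n_steps):
		t = t0 * (1 - step / n_steps) + 1e-9  #linear cooling schedule
		cand = perturb(x)
		ce = score(cand)
		#Always accept improvements; accept worse moves with Boltzmann probability
		if ce < e or random.random() < math.exp((e - ce) / t):
			x, e = cand, ce
			if e < best_e:
				best, best_e = x, e
	return best, best_e

#Toy usage: minimise x^2. For protein structures, `score` is the hard part --
#a “perfect” score (e.g. similarity to the native structure) presupposes the answer.
print(simulated_annealing(lambda x: x * x, 10.0, lambda x: x + random.uniform(-1, 1)))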

Real world problems are hard. I am sure this is no news to anyone, including the scientists at Deep Mind.

During the second part of his talk, Dr. Hassabis focused on AlphaGo, DeepMind’s effort at mastering the ancient game of Go. What appealed to me in this part of the talk is the fact that Go has such a large number of possible configurations that devising an incremental score is no simple task (sound familiar?). Yet, somehow, DeepMind’s scientists were able to train their algorithm to the point where it defeated a professional Go player.

Their next challenge? In two weeks, AlphaGo will face the professional Go player with the highest number of titles in the last decade (the best player in the world?). This reminds me of when Garry Kasparov faced Deep Blue. After the talk, my fellow OPIG colleagues also seemed pretty excited about the outcome of the match (man vs. computer).

Dr. Hassabis finished by saying that his career goal would be to develop AI that is capable of helping scientists tackle the big problems. From what I gather (and from my extremely biased, protein-structure-prediction point of view), AI will only be able to achieve this goal once it is capable of coming up with its own scores for the games we present it to play (hence developing some form of impetus). Regardless of how far we are from achieving this, at least we have a reason to cheer for AlphaGo in a couple of weeks (because hey, if you are trying to make our lives easier with clever AI, I am all for it).

Community structure in multilayer networks


Multilayer networks are a generalisation of networks that may incorporate different types of interactions [1]. These could be different time points in temporal data, or measurements in different individuals or under different experimental conditions. Currently, many measures and methods from monolayer networks are being extended to be applicable to multilayer networks. These include measures of centrality [2] and methods for finding mesoscale structure in networks [3,4].

Examples of such mesoscale structure detection methods are stochastic block models and community detection. Both try to find groups of nodes that behave in a structurally similar way within a network. In the simplest case, you might think of two groups that are densely connected internally but only sparsely connected to each other. Take, for example, two classes in a high school: there are many friendships within each class, but only a few between the classes. Often we are interested in how such patterns evolve with time, and here the use of multilayer community detection methods is fruitful.
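
To make the idea concrete, here is a minimal sketch (my own simplification, not the method of [4], which couples layers through interlayer edges and optimises a multislice modularity): communities are detected independently in each layer with networkx’s modularity heuristic and then matched across consecutive layers by node overlap. The planted-partition toy data stands in for real temporal measurements.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def layer_communities(layers):
	#Detect communities independently in each layer
	return [list(greedy_modularity_communities(G)) for G in layers]

def match_across_layers(comms_a, comms_b):
	#Pair each community with its best match in the next layer by Jaccard overlap
	return [(a, max(comms_b, key=lambda b: len(a & b) / len(a | b))) for a in comms_a]

#Toy temporal network: two snapshots, each with two planted groups of 10 nodes
t1 = nx.planted_partition_graph(2, 10, 0.9, 0.1, seed=1)
t2 = nx.planted_partition_graph(2, 10, 0.9, 0.1, seed=2)
comms = layer_communities([t1, t2])
for a, b in match_across_layers(comms[0], comms[1]):
	print(sorted(a), '->', sorted(b))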


From [4]: Multislice community detection of U.S. Senate roll call vote similarities across time. Colors indicate assignments to nine communities of the 1884 unique senators (sorted vertically and connected across Congresses by dashed lines) in each Congress in which they appear. The dark blue and red communities correspond closely to the modern Democratic and Republican parties, respectively. Horizontal bars indicate the historical period of each community, with accompanying text enumerating nominal party affiliations of the single-slice nodes (each representing a senator in a Congress): PA, pro-administration; AA, anti-administration; F, Federalist; DR, Democratic-Republican; W, Whig; AJ, anti-Jackson; A, Adams; J, Jackson; D, Democratic; R, Republican. Vertical gray bars indicate Congresses in which three communities appeared simultaneously.

Mucha et al. analysed the voting patterns in the U.S. Senate [4]. They found that the communities align with the political party organisation. However, the restructuring of the political landscape over time is observable in the multilayer community structure. For example, the 37th Congress, at the beginning of the American Civil War, brought a major change in the voting patterns. Modern politics is dominated by a strong partition into Democrats and Republicans, with a third minor group that can be identified as the ‘Southern Democrats’, who had distinguishable voting patterns during the 1960s.

Such multilayer community detection methods can be insightful for networks from other disciplines. For example, they have been adopted to describe the reconfiguration of the human brain during learning [5]. Hopefully they will be able to give us insight into the structure and function of protein interaction networks.

[1] De Domenico, Manlio; Solé-Ribalta, Albert; Cozzo, Emanuele; Kivelä, Mikko; Moreno, Yamir; Porter, Mason A.; Gómez, Sergio; and Arenas, Alex [2013]. Mathematical Formulation of Multilayer Networks. Physical Review X, Vol. 3, No. 4: 041022.

[2] Taylor, Dane; Myers, Sean A.; Clauset, Aaron; Porter, Mason A.; and Mucha, Peter J. [2016]. Eigenvector-based Centrality Measures for Temporal Networks.

[3] Peixoto, Tiago P. [2015]. Inferring the Mesoscale Structure of Layered, Edge-Valued, and Time-Varying Networks. Physical Review E, Vol. 92: 042807.

[4] Mucha, Peter J.; Richardson, Thomas; Macon, Kevin; Porter, Mason A.; and Onnela, Jukka-Pekka [2010]. Community Structure in Time-Dependent, Multiscale, and Multiplex Networks. Science, Vol. 328, No. 5980: 876-878.

[5] Bassett, Danielle S.; Wymbs, Nicholas F.; Porter, Mason A.; Mucha, Peter J.; Carlson, Jean M.; and Grafton, Scott T. [2011]. Dynamic Reconfiguration of Human Brain Networks During Learning. Proceedings of the National Academy of Sciences of the United States of America, Vol. 108, No. 18: 7641-7646.


Drawing Custom Unrooted Trees from Sequence Alignments

Multiple Sequence Alignments can provide a lot of information relating to the relationships between proteins. One notable example was the map of the kinome space published in 2002 (Figure 1).


Figure 1. Kinase space as presented by Manning et al., 2002.

Such images organize our thinking about the possible space of such proteins/genes, going beyond long lists of multiple sequence alignments. The image in Figure 1 later got a revamp, which is now the popular ‘kinome poster’ (Figure 2).

Figure 2. Revamped dendrogram of the kinome from Fig. 1. Downloaded from http://i.imgur.com/BPLUvfc.png.

Here we have created a script to produce similar dendrograms straight from multiple sequence alignment files (although clearly not as pretty as Fig. 2!). It is not difficult to find software that will produce ‘a dendrogram’ from an MSA, but making it do the simple thing of annotating the nodes with colors, sizes etc. according to the labels of the genes/sequences is slightly more problematic. Sizes might correspond to the importance of given nodes, and colors can mark the different tree branches. The script uses the Biopython module Phylo to construct a tree from an arbitrary MSA and networkx to draw it:

#Treebeard.py
import networkx, pylab
from networkx.drawing.nx_agraph import graphviz_layout
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator
from Bio.Phylo.TreeConstruction import DistanceTreeConstructor
from Bio import AlignIO

#What color to give to the edges?
e_color = '#ccccff'
#What colors to give to the nodes with similar labels?
color_scheme = {'RSK':'#e60000','SGK':'#ffff00','PKC':'#32cd32','DMPK':'#e600e6','NDR':'#3366ff','GRK':'#8080ff','PKA':'magenta','MAST':'green','YANK':'pink'}
#What sizes to give to the nodes with similar labels?
size_scheme = {'RSK':200,'SGK':150,'PKC':350,'DMPK':400,'NDR':280,'GRK':370,'PKA':325,'MAST':40,'YANK':200}

#Edit this to produce a custom label to color mapping
def label_colors(label):
	color_to_set = 'blue'
	for label_subname in color_scheme:
		if label_subname in label:
			color_to_set = color_scheme[label_subname]
	return color_to_set

#Edit this to produce a custom label to size mapping
def label_sizes(label):
	#Default size
	size_to_set = 20
	for label_subname in size_scheme:
		if label_subname in label:
			size_to_set = size_scheme[label_subname]
	return size_to_set

#Draw a tree whose alignment is stored in msa.phy
def draw_tree():
	
	#This loads the default kinase alignment that should be in the same directory as this script
	aln = AlignIO.read('agc.aln', 'clustal')
	#This will construct the unrooted tree.
	calculator = DistanceCalculator('identity')
	dm = calculator.get_distance(aln)
	constructor = DistanceTreeConstructor()
	tree = constructor.nj(dm)
	G = Phylo.to_networkx(tree)
	node_sizes = []
	labels = {}
	node_colors = []
	for n in G:
		label = str(n)
		if 'Inner' in label:
			#These are the inner tree nodes -- leave them blank and with very small sizes.
			node_sizes.append( 1 )
			labels[n] = ''
			node_colors.append(e_color)
		else:
			#Size of the node depends on the labels!
			node_sizes.append( label_sizes(label) )
			#Set colors depending on our color scheme and label names
			node_colors.append(label_colors(label))
			#set the label that will appear in each node			
			labels[n] = label
	#Draw the tree given the info we provided!
	pos = graphviz_layout(G)
	networkx.draw(G, pos, edge_color=e_color, node_size=node_sizes, labels=labels, with_labels=True, node_color=node_colors)
	#Showing	
	pylab.show()
	#Saving the image -- uncomment
	#pylab.savefig('example.png')

if __name__ == '__main__':
	
	draw_tree()

We are going to use the kinase alignment example to demonstrate how the script can be used. The kinase alignment we use can be found here on the kinase.com website. We load the alignment and construct the unrooted tree using the Bio.Phylo module. Note that on each line of the alignment there is a name; these names are the labels that we use to define the colors and sizes of the nodes. There are two dummy functions that achieve this, label_colors() and label_sizes() — if you look at them it should be clear how to define your own custom labeling.

If you download the code and the alignment, you can run it with:

python Treebeard.py

You should see an image similar to the one in Fig. 3.

Fig 3. Size-color-customized unrooted tree straight from a multiple sequence alignment file of protein kinases. Constructed using the script Treebeard.py


Inserting functional proteins in an antibody

At the group meeting on the 3rd of February I presented the results of the paper “A General Method for Insertion of Functional Proteins within Proteins via Combinatorial Selection of Permissive Junctions” by Peng et al. This is interesting to our group, and especially to me, because it presents a novel way of designing an antibody. I suspect, though, that the scope of their research is much more general, their use of antibodies being a proof of concept.

Their premise is that the structure of a protein essentially consists of secondary structure elements and tertiary structure interconnected through junctions. As such, it should be possible to interconnect regions from different proteins through junctions, and these regions should take up their native secondary and tertiary structures, thus preserving their functionality. The question is: what makes a suitable junction? This is important because junctions should be flexible enough to allow the proper folding of the different regions, but not so flexible as to have a negative impact on stability. There has been previous work on designing suitable junctions; however, the workflow presented in this paper is based on trying a vast number of junctions and then identifying which of them work.

As I said above, their proof of concept is antibodies. They used an antibody scaffold (the host), out of which they removed the H3 loop, and then fused to it, using junctions, two different proteins: Leptin and FSH (the guests). To identify the correct junctions they generated a library of antibodies with random three-residue sequences on either side of the inserted protein, plus a generic linker (GGGGS) that can be repeated up to three times.

They say that the theoretical size of the library is 10^9 (however, I would say it is 9*20^6 ≈ 5.8*10^8), and the actually achieved diversity of their library was 2.88*10^7 for Leptin and 1.09*10^7 for FSH. The next step is to identify which junctions have allowed the guest protein to fold properly. For this they devised an autocrine-based selection method using engineered cells bearing receptors that have either Leptin or FSH as agonists and that signal through beta-lactamase. A fluoroprobe in the cell responds to the presence of beta-lactamase by producing a blue color instead of green, which allows the cells carrying an active antibody-guest designed protein (clone) to be identified using FRET-based fluorescence-activated cell sorting.

They managed to identify six clones that worked for Leptin and three that worked for FSH, along with the corresponding linker sequences.

There does not seem to be a pattern emerging from those linker sequences, although one of them repeats itself. For my research it would have been interesting if a pattern had emerged, as it could then have been used as a generic linker by future designers. However, this is yet another prime example of how well an antibody scaffold can be used as a starting point for protein engineering.

As a bonus, they also tested in vivo how their designs work and discovered that the antibody-Leptin design (IgG-Leptin) has a longer lifetime. This is probably because, being a larger protein, it is not filtered out by the kidneys.

Identifying basic building blocks/motifs of networks


The optimal subgraph decomposition of an electronic circuit.

There are many verbal descriptions of network motifs: characteristic connectivity patterns, over-represented subgraphs, recurrent circuits, basic building blocks of networks, just to name a few. However, as with most concepts in network science, network motifs are maybe best explained in terms of empirical observations. For instance, the most basic example of a network motif is the motif consisting of three mutually connected nodes, that is: a triangle. Many real-world networks, ranging from the internet to social networks to biological networks, contain many more triangles than one would expect if they were wired randomly. In certain cases there exist good explanations for the large number of triangles found in the network. For instance, the presence of many triangles in friendship networks simply tells us that we are more likely to be friends with the friends of our friends. In biological networks, triangles and other motifs are believed to contribute to the overall function of the network by performing modular tasks such as information processing, and are therefore believed to be favoured by natural selection.

The predominant definition of network motifs is due to Milo et al. [1] and defines network motifs on the basis of how surprising their frequency in the network is when compared to a randomized version of the network. The randomized version is usually taken to be the configuration model, i.e. the ensemble of all networks that have the same degree distribution as the original network. Following this definition, motifs are identified by comparing their counts in the original network with a large sample of this null model. The approach of Milo et al. formalizes the concept of network motifs as over-represented connectivity patterns. However, the results of the method are highly dependent on the choice of null model.
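
As a toy illustration of this definition (my own sketch, not code from [1]), the snippet below estimates how over-represented triangles are in a network by comparing the observed triangle count with degree-preserving randomizations generated by double edge swaps, a common stand-in for the configuration model.

import random
import statistics
import networkx as nx

def triangle_count(G):
	#nx.triangles counts each triangle once per member node, hence the division
	return sum(nx.triangles(G).values()) // 3

def motif_zscore(G, n_random=100, seed=0):
	#Z-score of the triangle count against a degree-preserving null model
	rng = random.Random(seed)
	observed = triangle_count(G)
	counts = []
	for _ in range(n_random):
		R = G.copy()
		#Double edge swaps preserve the degree sequence while randomising the wiring
		nx.double_edge_swap(R, nswap=4 * R.number_of_edges(), max_tries=10**6, seed=rng)
		counts.append(triangle_count(R))
	mu, sigma = statistics.mean(counts), statistics.stdev(counts)
	return (observed - mu) / sigma if sigma > 0 else float('inf')

#Example: triangles are strongly over-represented in Zachary's karate club network
print(round(motif_zscore(nx.karate_club_graph(), n_random=50), 1))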

In my talk I presented an alternative approach to motif analysis [2] that seeks to formalize network motifs from the perspective of simple building blocks. The approach is based on finding an optimal decomposition of the network into subgraphs. Here, subgraph decompositions are defined as subgraph covers: sets of subgraphs such that every edge of the network is contained in at least one of the subgraphs in the cover. It follows from this definition that a subgraph cover is a representation of the network, in the sense that given a subgraph cover the network can be recovered fully by simply taking the union of the edge sets of the subgraphs in the cover. In fact, many network representations, including edge lists, adjacency lists, bipartite representations and power graphs, fall into the category of subgraph covers. For instance, the edge list representation is equivalent to the cover consisting of all single-edge subgraphs of the network, and bipartite representations are simply covers consisting of cliques of various sizes.

Given that there are many competing ways of representing a network as a subgraph cover, the question arises of how one picks one cover over another. In order to address this problem we consider the total information of subgraph covers as a measure of optimality. The total information is an information measure introduced by Gell-Mann and Lloyd [3] which, given a model M for a certain entity e, is defined to be the sum of the entropy and the effective complexity of M. The entropy measures the information required to describe e given M, while the effective complexity measures the amount of information required to specify M, given by its algorithmic information content. The total information also provides a framework for model selection: given two or more models for the same entity, one picks the one with the lowest total information, and if two models have the same total information one picks the one with the lower effective complexity, i.e. the simpler one. This essentially tells us how to trade off goodness of fit against model complexity.
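
In symbols (the notation here is my own shorthand, not taken verbatim from [2,3]): writing H(e | M) for the entropy of the entity e given the model M, and \mathcal{E}(M) for the effective complexity of M, the criterion reads

% Total information of a model M for an entity e:
% effective complexity of M plus the entropy of e given M.
\Sigma(M) = \mathcal{E}(M) + H(e \mid M), \qquad M^{*} = \arg\min_{M} \Sigma(M)

with ties in \Sigma(M) broken in favour of the model with the smaller \mathcal{E}(M).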

In the context of subgraph covers, the entropy of a cover C corresponds to the information required to give the positions of the subgraphs in the cover, given the different motifs that occur in C and their respective frequencies. On the other hand, the effective complexity of C corresponds to the information required to describe the set of motifs occurring in the cover together with their respective frequencies. While the entropy of subgraph covers can be calculated analytically, their effective complexity is not computable due to the halting problem. However, in practice one can use approximations in the form of upper bounds.

Following the total information approach, we now define an optimal subgraph cover of a network G to be a subgraph cover that minimizes the total information, and the network motifs of G to be the motifs/connectivity patterns that occur in such an optimal cover.

The problem of finding an optimal cover turns out to be computationally rather challenging. Besides the usual difficulties associated with counting subgraphs (the subgraph isomorphism problem, which is NP-complete) and classifying subgraphs (the graph isomorphism problem, whose complexity is unknown), the problem is a non-linear set covering problem and therefore NP-hard. Consequently, we construct a greedy heuristic for the problem.
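
The sketch below is a drastic simplification of the heuristic in [2], under my own assumption that covering edges with larger subgraphs (here, triangles) before single edges tends to shorten the overall description. It greedily covers a network’s edges with triangles and then falls back to single-edge subgraphs.

from itertools import combinations
import networkx as nx

def greedy_triangle_cover(G):
	#Greedily cover the edges of G with triangles, then with single edges
	uncovered = set(frozenset(e) for e in G.edges())
	cover = []
	triangles = [c for c in nx.enumerate_all_cliques(G) if len(c) == 3]
	while True:
		#Pick the triangle that covers the most still-uncovered edges
		best, best_gain = None, 0
		for tri in triangles:
			gain = sum(1 for e in combinations(tri, 2) if frozenset(e) in uncovered)
			if gain > best_gain:
				best, best_gain = tri, gain
		if best is None:
			break
		cover.append(tuple(best))
		for e in combinations(best, 2):
			uncovered.discard(frozenset(e))
	#Any edge not covered by a triangle becomes a single-edge subgraph
	cover.extend(tuple(e) for e in uncovered)
	return cover

G = nx.karate_club_graph()
cover = greedy_triangle_cover(G)
n_tri = sum(1 for s in cover if len(s) == 3)
print(n_tri, 'triangles +', len(cover) - n_tri, 'single edges cover', G.number_of_edges(), 'edges')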

When applied to real-world networks, the method finds very similar motifs in networks representing similar systems. Moreover, the counts of the motifs in networks of the same type scale approximately with network size. Consequently, the method can also be used to classify networks according to their subgraph structure.


References:

[1] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, Network Motifs: Simple Building Blocks of Complex Networks, Science 298, 824 (2002)

[2] A. E. Wegner, Subgraph Covers: An Information-Theoretic Approach to Motif Analysis in Networks, Phys. Rev. X 4, 041026 (2014)

[3] M. Gell-Mann and S. Lloyd, Information Measures, Effective Complexity, and Total Information, Complexity 2, 44 (1996).

Novelty in Drug Discovery

The primary aim of drug discovery is to find novel molecules that are active against a target of therapeutic relevance and that are not covered by any existing patents (1). Due to the increasing cost of research and development in the later stages of drug discovery, and the increase in drug candidates failing at these stages, there is a desire to select the most diverse set of active molecules at the earliest stage of drug discovery, to maximise the chance of finding a molecule that can be optimised into a successful drug (2,3). Computational methods that are both accurate and efficient are one approach to this problem and can augment experimental approaches in deciding which molecules to take forward.

But what do we mean by a “novel” compound? When prioritising molecules for synthesis, which characteristics do we want to be different? It was once common to select subsets of hits to maximise chemical diversity, in order to cover as much chemical space as possible (4). These novel lead molecules could subsequently be optimised, the idea being that maximising the coverage of chemical space would maximise the chance of finding a molecule that could be optimised successfully. More recently, however, the focus has shifted to “biodiversity”: diversity in terms of how the molecule interacts with the protein (1). Activity cliffs, pairs of molecules that are structurally and chemically similar but have a large difference in potency, indicate that chemical diversity may not be the best descriptor for identifying molecules that interact with the target in sufficiently diverse ways. The molecules to be taken forward should be both active against the target and diverse in terms of how they interact with it and which parts of the binding site they engage.
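
As a concrete (and deliberately simplified) illustration of the older chemical-diversity view, the sketch below uses RDKit Morgan fingerprints, Tanimoto similarity and the MaxMin algorithm to pick a diverse subset of molecules; the SMILES strings are placeholders of my own. Note that this captures only chemical diversity, not the “biodiversity” of interactions discussed above, which would require information about how each molecule binds the target.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.SimDivFilters.rdSimDivPickers import MaxMinPicker

#Placeholder molecules; in practice these would be screening hits
smiles = ['CCO', 'CCN', 'c1ccccc1O', 'c1ccccc1N', 'CC(=O)O']
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

#Tanimoto similarity between the first two molecules
print(DataStructs.TanimotoSimilarity(fps[0], fps[1]))

#MaxMin picking: select the 3 most mutually dissimilar molecules
picker = MaxMinPicker()
picks = picker.LazyBitVectorPick(fps, len(fps), 3)
print([smiles[i] for i in picks])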

This raises two interesting ideas. The first is prioritising molecules that form the same interactions as molecules known to bind but are chemically different: scaffold hopping (5). The second is prioritising molecules that potentially form different interactions to known binders. I hope to explore these ideas in the coming months as part of my research.

References

(1) J. K. Medina-Franco et al., Expert Opin. Drug Discov., 2014, 9, 151-156.

(2) A. S. A. Roy, Project FDA Report, 2012, 5.

(3) J. Avorn, New England Journ. of Med., 2015, 372, 1877-1879.

(4) P. Willett, Journ. Comp. Bio., 1999, 6, 447-457.

(5) H. Zhao, Drug Discov. Today,  2007, 12, 149–155.

Designing antibodies targeting disordered epitopes

At the meeting on February 10 I covered the article by Sormanni et al. describing a methodology for computationally designing antibodies against intrinsically disordered regions of proteins.

Antibodies are proteins that are a natural part of our immune system. For over 50 years lab-made antibodies have been used in a wide variety of therapeutic and diagnostic applications. Nowadays, we can design antibodies with high specificity and affinity for almost any target. Nevertheless, engineering antibodies against intrinsically disordered proteins remains costly and unreliable. Since as many as about 33% of all eukaryotic proteins could be intrinsically disordered, and disordered proteins are often implicated in various ailments and diseases, such a methodology could prove invaluable.

Cascade design

The initial step of the protocol involves searching the PDB for protein sequences that interact in a beta strand with segments of the target sequence. Next, such peptides are joined together using a so-called “cascade method”. The cascade method starts with the longest peptide found and grows it to the length of the target sequence by joining it with other, partially overlapping peptides coming from beta strands of the same type (parallel or antiparallel). In the cascade method, all fragments used must form the same hydrogen-bond pattern. The resulting complementary peptide is expected to “freeze” part of the disordered protein by forcing it to locally form a beta sheet. After the complementary peptide is designed, it is grafted onto a single-domain antibody scaffold. This choice was made because antibodies have a longer half-life and lower immunogenicity.
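
To convey the flavour of the cascade method, here is a toy sketch (my own simplification: it ignores the PDB search, strand types and hydrogen-bond patterns, and treats fragments as strings aligned to positions on the target). Starting from the longest fragment, it repeatedly merges partially overlapping fragments that agree on the overlap.

def cascade(fragments, min_overlap=2):
	#fragments: list of (start, peptide) pairs, where `start` is the position
	#on the target sequence that the peptide is aligned to
	frags = sorted(fragments, key=lambda f: -len(f[1]))
	start, seq = frags[0]  #begin with the longest fragment
	end = start + len(seq)
	grown = True
	while grown:
		grown = False
		for s, p in frags:
			e = s + len(p)
			#Extend right if the fragment overlaps and agrees with the current tail
			if start <= s < end < e and end - s >= min_overlap and seq[s - start:] == p[:end - s]:
				seq, end, grown = seq + p[end - s:], e, True
			#Extend left if the fragment overlaps and agrees with the current head
			elif s < start < e <= end and e - start >= min_overlap and seq[:e - start] == p[start - s:]:
				seq, start, grown = p[:start - s] + seq, s, True
	return start, seq

#Toy example: tile a 10-residue stretch from three overlapping fragments
print(cascade([(0, 'ABCDE'), (3, 'DEFGH'), (6, 'GHIJ')]))  #-> (0, 'ABCDEFGHIJ')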

To test their method, the authors initially assessed the robustness of their design protocol. First, they ran the cascade method on three targets – α-synuclein, Aβ42 and IAPP. They found that more than 95% of the residue positions in the three proteins could be targeted by their method. In addition, the mean number of available fragments per position was 570. They also estimated their coverage on a larger scale, using 1690 disordered protein sequences obtained from the DisProt database and from measured NMR chemical shifts. About 90% of residue positions from DisProt and 85% of positions from the chemical-shift data could be covered by at least one designed peptide. The positions that were hard to target usually contained proline, in agreement with the known result that prolines tend to disrupt secondary structure formation.

To test the quality of their designs, the authors created complementary peptides for α-synuclein, Aβ42 and IAPP and grafted them onto the CDR3 region of a human single-domain antibody scaffold. All designs were highly stable and bound their targets with high specificity. Following these encouraging results, the authors measured the affinity of one of their designs (one of the anti-α-synuclein antibodies). The Kd was found to lie in the range 11-27 μM. Such affinity is too low for pharmaceutical purposes, but it is enough to prevent aggregation of the target protein.

As the last step in the project, the authors attempted a two-peptide design, in which a second peptide was grafted into the CDR2 region of the single-domain scaffold. Both peptides were designed to bind the same epitope. The two-peptide design managed to reach the affinity required for pharmaceutical viability (affinity smaller than 185 nM with 95% confidence). Nevertheless, it became very unstable, rendering it not viable for pharmaceutical purposes.

Overall, this study presents a very exciting step towards computationally designed antibodies targeting disordered epitopes and deepens our understanding of antibody functionality.