Monthly Archives: August 2015

Slow and steady improvements in the prediction of one-dimensional protein features

What do you do when you have a big, complex problem whose solution is not necessarily trivial? You break the problem into smaller, easier-to-solve parts, solve each of these sub-problems and merge the results to find the solution of the original, bigger problem. This is an algorithm design paradigm known as the divide and conquer approach.

In protein informatics, we use divide and conquer strategies to deal with a plethora of large and complicated problems. From protein structure prediction to protein-protein interaction networks, we have a wide range of sub and sub-sub problems whose solutions are supposed to help us with the bigger picture.

In particular, predictions of the so-called one-dimensional protein features are fundamental sub-problems with a wide range of applications, such as protein structure modelling, homology detection and functional characterization. Here, one-dimensional protein features refer to secondary structure, backbone dihedral and C-alpha angles, and solvent accessible surface area.

In this week’s group meeting, I discussed the latest advancements in prediction of one-dimensional features as described in an article published by Heffernan R. and colleagues in Scientific Reports (2015):

“Improving prediction of secondary structure, local backbone angles, and solvent accessible surface area of proteins by iterative deep learning.”

In this article, the authors describe the implementation of SPIDER2, a deep learning approach to predict secondary structure, solvent accessible surface area, and four backbone angles (the traditional dihedrals phi and psi, and the recently explored theta and tau).

“Deep learning” is the buzzword (buzz-two-words or buzzsentence, maybe?) of the moment. For those of you who have no idea what I am talking about, deep learning is an umbrella term for a series of convoluted machine learning methods. The term deep comes from the multiple hidden layers of neurons used during learning.

Deep learning is a very fashionable term for a reason. These methods have been shown to produce state-of-the-art results for a wide range of applications in several fields, including bioinformatics. As a matter of fact, one of the leading methods for contact prediction (previously introduced in this blog post) uses a deep learning approach to improve the precision of predicted protein contacts.

Machine learning has already been explored for the prediction of one-dimensional protein features, showing promising (and, more importantly, useful) results. With the emergence of new, more powerful machine learning techniques such as deep learning, older software is now becoming obsolete.

Based on this premise, Heffernan R. and colleagues implemented and applied their deep learning approach to improve the prediction of one-dimensional protein features. Their training process was rigorous: they performed a 10-fold cross-validation using their training set of ~4500 proteins and, on top of that, they also had two independent test sets (a ~1200-protein test set and a set based on the targets of CASP11). No protein in any of the sets shared more than 25% sequence identity (30% for the CASP set) with any other protein in any of the sets.
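
For readers unfamiliar with the protocol, here is a generic sketch of what 10-fold cross-validation looks like in practice. This is purely illustrative scikit-learn code with random placeholder data, not the authors' pipeline or network.

```python
# Purely illustrative: split a training set into ten folds, train on nine and
# validate on the held-out tenth, rotating through all folds. The features and
# targets below are random placeholders, not real protein data.
import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(4500, 20)   # placeholder features, one row per training example
y = np.random.rand(4500)       # placeholder targets (e.g. a backbone angle or ASA)

for train_idx, val_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    X_train, y_train = X[train_idx], y[train_idx]
    X_val, y_val = X[val_idx], y[val_idx]
    # ...train the model on (X_train, y_train) and evaluate it on (X_val, y_val)...
```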

The method described in the paper, SPIDER2, was thoroughly compared with state-of-the-art prediction software for each of the one-dimensional protein features that it is capable of predicting. Results show that SPIDER2 achieves a small yet significant improvement over other methods.

It is just like they say: slow and steady wins the race, right? In this case, I am not so sure. It would be interesting to see how much the small increments in precision obtained by SPIDER2 can improve the bigger picture, whatever your bigger picture is. The thing about divide and conquer is that becoming marginally better at solving one of the parts doesn’t necessarily imply that you will improve the solution of the bigger, main problem.

If we think about it, during the “conquer” stage (that is, when you are merging the solutions of the smaller parts to get to the bigger picture), you may make compromises that completely disregard any minor improvements for the sub-problems. For instance, in my bigger picture, de novo protein structure prediction, predicted local properties can be sacrificed to ensure a more globally consistent model. More than that, most methods that perform de novo structure prediction already account for a certain degree of error or uncertainty in, say, secondary structure prediction. This is particularly important for the border regions between secondary structure elements (i.e. where an alpha-helix ends and a loop begins). Therefore, even if you improve the precision of your predictions for those border regions, the best approach for structure prediction may still consider those slightly more precise border predictions as unreliable.

The other moral of this story is far more pessimistic. If you think about it, there have been significant advancements in machine learning, which have led to the creation of ever more complicated neural network architectures. However, when we look at how much improvement we observed when these highly elaborate techniques were applied to an old problem (prediction of one-dimensional protein features), the pay-off doesn’t seem as significant as I would have expected. Maybe I am a glass-half-empty kind of guy, but given the buzz surrounding deep learning, I think minor improvements are a bit of a let-down. Not to take any credit away from the authors: their work was rigorous and scientifically very sound. It is just that maybe we are reaching our limits when it comes to applying machine learning to predict secondary structure. Maybe when the next generation of buzzword-worthy machine learning techniques appears, we will observe an even smaller improvement in secondary structure prediction. Which leaves a very bitter unanswered question in all our minds: if machine learning is not the answer, what is?

Predicted protein contacts: is it the solution to (de novo) protein structure prediction?

So what is this buzz I hear about predicted protein contacts? Is it really the long awaited solution for one of the biggest open problems in biology today? Has protein structure prediction been solved?

Well, first things first. Let me give you a quick introduction to this predicted protein contact business (probably not quick enough for an elevator pitch, but hopefully you are not reading this in an elevator).

Nowadays, the scientific community has become very good at sequencing things (and by things I mean genetic things, like whole genomes of a bunch of different people and organisms). We are so good at it that mountains of sequence data are now available: genes, mRNAs, protein sequences. The question is what do we do with all this data?

Good scientists are coming up with new and creative ideas to extract knowledge from these mountains of data. For instance, one can build multiple sequence alignments using protein sequences for a given protein family. One of the ways in which information can be extracted from these multiple sequence alignments is by identifying extremely conserved columns (think of the alignment as a big matrix). Residues in these conserved positions are good candidates for being functionally important for the proteins in that particular family.

Another interesting thing that can be done is to look for pairs of residues that mutate in a correlated fashion. In more practical terms, you are ascertaining how correlated the information in two columns of a multiple sequence alignment is: how often a change in one of them is countered by a change in the other. Why would anyone care about that? Simple. There is an assumption that residues that mutate in a correlated fashion are co-evolving. In other words, they share some sort of functional dependence (e.g. spatial proximity) that is under selective pressure.
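
To make the idea of correlated columns concrete, here is a toy sketch of one of the simplest possible measures, the mutual information between two columns of an alignment. This is only an illustration of the concept; real co-evolution methods are considerably more sophisticated.

```python
# Mutual information between two columns of a toy multiple sequence alignment.
from collections import Counter
from math import log2

msa = [
    "ACDEK",
    "ACDEK",
    "GCDVR",
    "GCDVR",
    "ACDEK",
]  # toy alignment: 5 sequences, 5 columns

def mutual_information(msa, i, j):
    """MI (in bits) between columns i and j of an aligned set of sequences."""
    n = len(msa)
    col_i = [seq[i] for seq in msa]
    col_j = [seq[j] for seq in msa]
    p_i = Counter(col_i)
    p_j = Counter(col_j)
    p_ij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), count in p_ij.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((p_i[a] / n) * (p_j[b] / n)))
    return mi

print(mutual_information(msa, 0, 3))  # columns 0 and 3 co-vary in the toy data
print(mutual_information(msa, 1, 2))  # columns 1 and 2 are invariant, so MI = 0
```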

OK, that was a lot of hypotheticals – does it work? For many years, it didn’t. There were lots of issues with the way these correlations were computed, and one of the biggest problems was to identify (and correct for) transitivity. Transitivity is the idea that you observe a false correlation between residues A and C because the pairs A,B and B,C are mutating in a correlated fashion. As more powerful statistical methods were developed (borrowing some ideas from statistical mechanics), the transitivity issue has seemingly been solved.

The newest methods that detect co-evolving residues in a multiple sequence alignment are capable of detecting protein contacts with high precision. In this context, a contact is defined as two residues that are close together in a protein structure. How close? Their C-betas must be 8 Angstroms or less apart. When sufficient sequence information is available (at least 500 sequences in the MSA), the average precision of the predicted contacts can reach 80%.
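
For concreteness, here is a minimal sketch of the contact definition itself, assuming you already have one C-beta coordinate per residue (parsed, for example, from a PDB file). The exclusion of sequence-local pairs is a common convention added here for illustration, not part of the definition above.

```python
import numpy as np

def contact_map(cbeta_coords, cutoff=8.0, min_separation=6):
    """Boolean matrix: True where two C-beta atoms are within `cutoff` Angstroms.
    Pairs closer than `min_separation` residues in sequence are ignored."""
    coords = np.asarray(cbeta_coords, dtype=float)     # shape (n_residues, 3)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))           # pairwise distance matrix
    contacts = dist <= cutoff
    n = len(coords)
    for i in range(n):                                 # mask sequence-local pairs
        contacts[i, max(0, i - min_separation):i + min_separation + 1] = False
    return contacts
```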

This is a powerful way of converting sequence information into distance constraints, which can be used for protein structure modelling. If a sufficient number of correct distance constraints is used, we can accurately predict the topology of a protein [1]. Recently, we have also observed great advances in the way that models are refined (that is, refining a model that contains the correct topology to atomic, near-experimental resolution). If you put those two things together, we start to look at a very nice picture.

So what’s the catch? The catch was right there, very subtle: “when sufficient sequence information is available”. Currently, it is estimated that only about 15% of de novo protein structure prediction cases present sufficient sequence information for the prediction of protein contacts. One potential solution would be to sit and wait for more and more sequences to be obtained. Yet a potential pitfall of sitting and waiting is that there is no guarantee that we will ever have sufficient sequence information for a large number of protein families, as they may well have fewer than 500 members.

Furthermore, scientists are not very good at sitting around and waiting. They need to keep themselves busy. There are many things that the community as a whole can invest time in while we wait for more sequences to be generated. For instance, we want to be sure that, for the cases where there is a sufficient number of sequences, we get the modelling step right (and predict the correct protein topology). Predicted contacts also show potential as a tool for quality assessment, and may prove to be a nice way of ascertaining whether a model with the correct topology has been created. More than that, model refinement still needs to improve if we want to make sure that we get from the correct topology to near-experimental resolution.

Protein structure prediction is a hard problem, and with so much room for improvement we still have a long way to go. Yet this predicted contact business is a huge step in the right direction. Maybe it won’t be long before models generated ab initio are considered as reliable as the ones generated using a template. Who knows what promises the future holds.

References:

[1] Kim DE, Dimaio F, Yu-Ruei Wang R, Song Y, Baker D. One contact for every twelve residues allows robust and accurate topology-level protein structure modeling. Proteins. 2014 Feb;82 Suppl 2:208-18. doi: 10.1002/prot.24374. Epub 2013 Sep 10.

Modelling antibodies, from Sequence, to Structure…

Antibody modelling has come a long way in the past 5 years. The Antibody Modelling Assessment (AMA) competitions (effectively an antibody version of CASP) have shown that most antibody modelling methods are capable of modelling the antibody variable fragment (Fv) to ≤ 1.5Å. Despite this feat, AMA-II provided two important lessons:

1. We can still improve our modelling of the framework region and the canonical CDRs.

Stage two of the AMA-II competition showed that CDR-H3 modelling improved once the correct crystal structure (bar the H3 loop, of course) was provided. In addition, some of the canonical CDRs (e.g. L1) were modelled poorly, as were some of the framework loops.

2. We can’t treat orientation as if it doesn’t exist.

Many pipelines are either vague about how they predict the orientation, or give no explicit explanation of how the orientation is predicted for the model structure. Given how important the orientation can be to the antibody’s binding mode (Fera et al., 2014), it’s clear that this part of the pipeline has to be re-visited more carefully.

In addition to these lessons, one question remains:

What do we do with these models?

No pipeline, as far as we are aware, comments on what we should do beyond creating the model. What are its implications? Can we even use it for experiments, or as a potential therapeutic in the long term? In light of these lessons and this blaring question, we developed our own method.

Before we begin, how does modelling work?

In my mind, most, if not all, pipelines follow this generic paradigm:

Our method, ABodyBuilder, also follows this 4-step workflow:

  1. We choose the template structure based on sequence identity; below a threshold, we predict the structures of the heavy and light chains separately.
  2. In the event that we use structures from separate antibodies, we predict the orientation from the structure with the highest global sequence identity.
  3. We model the loops using FREAD (Choi & Deane, 2011).
  4. We graft the side chains on using SCWRL.

Following the modelling procedure, our method also annotates the accuracy of the model in a probabilistic context — i.e., an estimated probability that a particular region is modelled within a given RMSD threshold. Moreover, we also flag up any issues that an experimentalist could run into with the modelled antibody.

The accuracy estimation is a data-driven estimate of model quality. Many pipelines end up giving you just a model – there is otherwise no way of determining model accuracy until the native structure is determined. This is particularly problematic for CDR-H3, where RMSDs between models and native structures can exceed 4.0Å, and it would be incredibly useful to have an a priori estimate of the expected model accuracy.
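
As a rough illustration of what a data-driven accuracy estimate means (and emphatically not the estimator we actually use in ABodyBuilder), one could report the empirical fraction of previously benchmarked models, built under comparable conditions, that fall below a chosen RMSD threshold:

```python
import numpy as np

# Toy benchmark RMSDs for models built under comparable conditions (made up).
benchmark_rmsds = np.array([0.8, 1.1, 1.4, 2.0, 2.6, 3.1, 4.2, 5.0])

def p_under_threshold(rmsds, threshold):
    """Empirical probability that the RMSD falls below `threshold` Angstroms."""
    return float((rmsds < threshold).mean())

print(p_under_threshold(benchmark_rmsds, 2.5))  # 0.5 for this toy data
```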

Furthermore, by commenting on motifs that can conflict with antibody development, we aim to offer a convenient solution for users who are considering in vitro experiments with their target antibody. Ultimately, ABodyBuilder is designed with the user in mind, making it an easy-to-use, informative piece of software that facilitates antibody modelling for novel applications.
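
One example of the kind of motif such a check might look for (purely illustrative, and not an exhaustive list of the liabilities ABodyBuilder reports) is the N-linked glycosylation sequon N-X-S/T, where X is any residue except proline:

```python
import re

def find_glycosylation_sequons(sequence):
    """Return 0-based start positions of N-X-[S/T] motifs (X != P)."""
    return [m.start() for m in re.finditer(r"N[^P][ST]", sequence)]

print(find_glycosylation_sequons("QVQLVESGGGNVSLVKPGGSLRL"))  # -> [10]
```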

Le tour d’OPIG 2015

The third iteration of “Let’s use two wheels to transport us to many pubs” took place earlier this summer, on Wednesday 20th May. Following on from the great successes of the last two years, there was much anticipation, and the promise of a brand new route. This year we covered 8 miles, via the Chester, the King’s Arms at Sandford lock, the Prince of Wales in Iffley, and the Magdalen Arms. Nobody fell in the river or went hungry, so it was considered a success!

2015 route


Using B factors to assess flexibility

In my work analysing antibody loops I have reached the point where I am interested in flexibility, more specifically in challenging the somewhat popular belief that these loops, especially H3, are highly flexible. For this I wanted to use the B/temperature/Debye-Waller factor, which can be interpreted as a measure of the temperature-dependent vibration of the atoms in the crystal or, in gentler terms, the flexibility at a certain position. I was keen to use the backbone atoms, and possibly the Cβ, but unfortunately the B factor shows some biases, as it also absorbs other uncertainties arising from limited resolution and poor electron density and, as a result, poor modelling. If we take a non-redundant set of loops and split them into resolution shells of 0.2Å, we see how pronounced this bias is (Fig. 1(a)).

Fig. 1(a) Comparison of average backbone B factors for loops found in structures at increasing resolution. A clear bias can be observed that correlates with the increase in resolution.

Fig. 1(b) Normalization using the Z-score of the B factor of backbone atoms shows no bias across resolution shells.

Comparing raw B factors of loops even from neighbouring resolution shells is virtually uninformative, and can lead to quite surprising results. In one analysis it appeared that loops directly present in the binding site of antibodies have a higher average B factor than loops in structures without antigen, where the movement is less constrained.

The issue here is that a complex (antibody-antigen) structure is larger and tends to have poorer resolution, and therefore more biased B factors. To solve this issue I decided to normalize the B factors using the Z-score within each PDB file, where the mean and the standard deviation are computed from all the backbone atoms of amino acids inside that file. This method was, to my knowledge, first described by Parthasarathy and Murthy (1997) [1], although I arrived at it without reading their paper, the normalization being quite intuitive. Using this measure we can finally compare loops from different structures at different resolutions (Fig. 1(b)), and we see what is expected: loops found in bound structures are less flexible than loops in unbound structures (Fig. 2). We can also answer our original question: does the H3 loop present an increased flexibility? The answer from Fig. 2 is no, if we compare non-redundant sets of loops from antibodies and from general proteins.
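
A minimal sketch of this normalization, assuming Biopython for parsing (the exact atom selection used in the actual analysis may differ slightly):

```python
import numpy as np
from Bio.PDB import PDBParser

BACKBONE = {"N", "CA", "C", "O"}

def normalised_backbone_bfactors(pdb_path):
    """Z-score of each backbone B factor, with the mean and standard deviation
    computed over all amino-acid backbone atoms of the same PDB file."""
    structure = PDBParser(QUIET=True).get_structure("s", pdb_path)
    bfactors = np.array([atom.get_bfactor()
                         for atom in structure.get_atoms()
                         if atom.get_name() in BACKBONE
                         and atom.get_parent().id[0] == " "])   # amino acids only
    return (bfactors - bfactors.mean()) / bfactors.std()
```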

Fig. 2 Flexibility comparison using the normalized B factor between a non-redundant set of non-Ig-like protein loops and different sets of H3 loops: bound to antigen (H3 bound), unbound (H3 unbound), and both (H3). For each comparison, ten samples with the same number of examples and a similar length distribution have been generated and amassed (LMS) to correct for the possibility of length bias, as the H3 loop is known to have a propensity for longer loops than average.

References

[1] Parthasarathy, S.; Murthy, M. R. N. (1997). Analysis of temperature factor distribution in high-resolution protein structures. Protein Science, 6(12), 2561–2567. ISSN 0961-8368.

Journal Club: Spontaneous transmembrane helix insertion thermodynamically mimics translocon-guided insertion

Many methods are available for predicting the topology of transmembrane helices; this is one of the success stories of protein structure prediction, with accuracies over 90%. However, there is still disagreement in some areas about the partitioning between the state dissolved in water and the state positioned across a lipid bilayer. Complications arise because there are so many methods of measuring the thermodynamics of this transition – experimental and theoretical, in vivo and in vitro. It is uncertain what difference the translocon makes to the energetics of insertion – is the topology and conformation of a membrane protein the global thermodynamic minimum, or just a kinetic product?

This paper uses three approaches to measure partitioning, in order to test the agreement between the different methods. The authors aim to reconcile the differences calculated so far for insertion of an arginine residue into the membrane (ranging from +2 to +15 kcal/mol). This is an important question, because many transmembrane helices are only marginally hydrophobic and it is not known how and when they insert during the folding process. Arginine is chosen here because the side chain’s pKa of 12.5 is so high that it will not deprotonate in the centre of a bilayer, so complications of protonation and deprotonation do not need to be considered. The same peptide, of the form LnRLn, is used for each method, and the ratio between the interface and transmembrane states is used to calculate estimates of ΔG. In order to make sure that there were helices with a ΔG close to zero for accurate estimates, they used a range of values of n from 5 to 8.
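
In essence, converting a measured ratio of states into a free energy is just the Boltzmann relation. A minimal sketch (the sign convention here is mine and may differ from the paper's: negative values favour the transmembrane state):

```python
from math import log

R = 1.987e-3  # gas constant in kcal/(mol*K)

def delta_g_insertion(fraction_tm, temperature_k=298.15):
    """dG = -RT ln([TM]/[interface]), from the fraction of helices in the TM state."""
    k_eq = fraction_tm / (1.0 - fraction_tm)
    return -R * temperature_k * log(k_eq)

print(delta_g_insertion(0.5))   # 0.0 kcal/mol: both states equally populated
print(delta_g_insertion(0.9))   # about -1.3 kcal/mol: insertion is favourable
```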

The first method was an insertion assay using reconstituted microsomes, where this helix was inserted into the luminal domain of LepB. A glycosylation site was added at each end of the helix, but glycosylation takes place only on sites inside microsomes. Helices inserted into the membrane are only glycosylated once, whereas secreted helices are glycosylated twice and those which did not go through the translocon are not glycosylated. SDS-PAGE can separate these states by mass, and the ratio between single and double glycosylation gives the partitioning between inserted and interface helices out of those which entered the translocon. As expected, the trend is for longer helices with more leucine to favour the transmembrane state.

Adapted from Figure 4a: The helix, H, either passes through the translocon into the lumen (“S”) resulting in two glycosylations (green pentagons), or is inserted (TM) resulting in one glycosylation.

The second method was also experimental: oriented synchrotron radiation circular dichroism (OSRCD). Here they used just the peptide, with one glycine at each end, as this would be able to equilibrate between the two states quickly. Theoretical spectra can be calculated for a helix in each state, and the ratio in which they must be combined to give the measured spectrum for a given peptide therefore gives the ratio of transmembrane and interface states present.
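
The decomposition itself amounts to a simple linear fit. A hedged sketch with synthetic placeholder spectra (real OSRCD analysis involves considerably more care):

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(180, 260, 81)
tm_spectrum = np.sin(wavelengths / 20.0)           # placeholder theoretical TM spectrum
ip_spectrum = np.cos(wavelengths / 25.0)           # placeholder theoretical IP spectrum
measured = 0.7 * tm_spectrum + 0.3 * ip_spectrum   # synthetic "measured" spectrum

basis = np.column_stack([tm_spectrum, ip_spectrum])
weights, _ = nnls(basis, measured)                 # non-negative least-squares fit
print(weights[0] / weights.sum())                  # TM fraction, ~0.7 in this example
```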

Figure 2b: TM and IP are the theoretical spectra for the transmembrane and interface states, and the peptides fall somewhere in between.

Finally, the authors present 4 μs molecular dynamics simulations of the same peptides at 140°C, so that equilibration between the two states would be fast. The extended peptide at the start of the simulation quickly associates with the membrane and adopts a helical conformation. An important observation to note is that the transmembrane state is in fact at around 30° to the membrane normal, to allow the charged guanidinium group of the arginine to “snorkel” up to interact with charged phosphate groups of the lipids. Therefore this state is defined as transmembrane, in contrast to the OSRCD experiments where the theoretical TM spectrum was calculated for a perpendicular helix. This may be a source of some inaccuracy in the propensities calculated from OSRCD.

Figure 2c: Equilibration in the simulation for the L7RL7 peptide. Transmembrane and interface states are seen in the partitioning and equilibration phases after the helix has formed.

Figure 3c: As the simulations run, the proportion of helices in the transmembrane state (PTM) converges to a different value for each peptide.

Overall, the ΔG values calculated from experiment and from molecular dynamics (MD) simulations agree very well. In fact, they agree better than those from previous studies of a similar format looking at polyleucine helices, where there was a consistent offset of 2 kcal/mol between the experiment-derived and simulation-derived values. The authors are unable to explain why the agreement in this study is better, but they indicate that it is unlikely to be related to any stabilisation by dimerisation in the experimental results, as a 4 μs MD simulation of two helices did not show them forming stable interactions. The calculated difference in insertion energy (ΔΔG) on replacing a leucine with arginine is therefore +2.4-4.3 kcal/mol by experiment and +5.4-6.8 kcal/mol by simulation, depending on the length of the peptide (it is a more costly substitution for longer peptides, as the charge is buried deeper). The difference between the experimental and simulation results is accounted for by their disagreement in the polyleucine study.

We thought this paper was a great example of experimental design, where the system was carefully chosen so that different experimental and theoretical approaches would be directly comparable. The outcome is good agreement between the methods, demonstrating that the vastly different values recorded previously seem to be because very different questions were being asked.

Protein loops – why do we care?

In my DPhil research, I work on the development of new methods for predicting protein loop structures. But what exactly are loops, and why should we care about their structures?

Many residues in a given protein will form regions of regular structure, in α-helices and β-sheets. The segments of the protein that join these secondary structure elements together, and that do not have easily observable regular patterns in their structure, are referred to as loops. This does not mean, though, that loops are only a minor component of a protein structure – on average, half of the residues in a protein are found in loops [1], and they are typically found on the surface of the protein, which is largely responsible for its shape, dynamics and physicochemical properties [2].

Connecting different secondary structure elements together is often not the only purpose of loops – they are often vitally important to a protein’s function. For example, they are known to play a role in protein-protein interactions, recognition sites, signalling cascades, ligand binding, DNA binding, and enzyme catalysis [3].

As regular readers of the blog are probably aware by now, one of the main areas of research for our group is antibodies. Loops are vital for an antibody’s function, since its ability to bind to an antigen is mainly determined by six hypervariable loops (the complementarity determining regions). The huge diversity in structure displayed by these loops is the key to how antibodies can bind to such different substances. Knowledge of loop structures is therefore extremely useful, enabling predictions to be made about the protein.

Loops involved in protein function: a methyltransferase binding to DNA (top left, PDB 1MHT); the active site of a triosephosphate isomerase enzyme (bottom left, PDB 1NEY); an antibody binding to its antigen (blue, surface representation) via its complementarity determining regions, shown as the coloured loops (centre, PDB 3NPS); the activation loop of a tyrosine kinase has a different conformation in the active (pink) and inactive (blue) forms (top right, PDBs 1IRK and 1IR3); a zinc finger, where the zinc ion is coordinated by the sidechain atoms of a loop (bottom right, PDB 4YH8).

More insertions, deletions and substitutions occur in loops than in the more conserved α-helices and β-sheets [4]. This means that, for a homologous set of proteins, the loop regions are the parts that vary the most between structures. While this variability often makes the protein’s function possible, as in the case of antibodies, it leads to unaligned regions in a sequence alignment, and standard homology modelling techniques can therefore not be used. This makes prediction of their structure difficult – it is frequently the loop regions that are the least accurate parts of a protein model.

There are two types of loop modelling algorithm: knowledge-based and ab initio. Knowledge-based methods look for appropriate loop structures in a database of previously observed fragments, while ab initio methods generate possible loop structures without prior knowledge. There is some debate about which approach is best. Knowledge-based methods can be very accurate when the target loop is close in structure to one seen before, but perform poorly when this is not the case; ab initio methods are able to access regions of conformational space that have not been seen before, but fail to take advantage of any structural data that is available. For this reason, we are currently working on developing a new method that combines aspects of the two approaches, allowing us to take advantage of the available structural data whilst still being able to predict novel structures.
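
As a toy illustration of the knowledge-based idea (not a description of FREAD or of any other published method), candidate loops from a fragment database could be kept if they have the right length and compatible anchor geometry, and then ranked by sequence similarity:

```python
# Each toy database entry: (sequence, anchor separation in Angstroms, fragment id).
loop_db = [
    ("GSGTDY", 9.8, "frag1"),
    ("ARDGSY", 10.1, "frag2"),
    ("ARDASY", 13.5, "frag3"),
]

def seq_identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def rank_candidates(target_seq, target_anchor_sep, db, sep_tol=1.0):
    """Keep fragments of the right length with compatible anchor separation,
    ranked by crude sequence identity to the target."""
    candidates = [(seq_identity(target_seq, seq), frag)
                  for seq, sep, frag in db
                  if len(seq) == len(target_seq)
                  and abs(sep - target_anchor_sep) <= sep_tol]
    return sorted(candidates, reverse=True)

print(rank_candidates("ARDGAY", 10.0, loop_db))
# frag2 ranks top: same length, compatible anchors, highest sequence identity
```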

[1] L. Regad, J. Martin, G. Nuel and A. Camproux, Mining protein loops using a structural alphabet and statistical exceptionality. BMC Bioinformatics, 2010, 11, 75.

[2] A. Fiser and A. Sali, ModLoop: automated modeling of loops in protein structures. Bioinformatics, 2003, 19, 2500-2501.

[3] J. Espadaler, E. Querol, F. X. Aviles and B. Oliva, Identification of function-associated loop motifs and application to protein function prediction. Bioinformatics, 2006, 22, 2237-2243.

[4] A. R. Panchenko and T. Madej, Structural similarity of loops in protein families: toward the understanding of protein evolution. BMC Evolutionary Biology, 2005, 5, 10.

Antibody binding site re-design

In this blog post I describe three successful studies on structure-based re-design of antibody binding sites, leading to significant improvements in binding affinity.

In their study, Clark et al.[1] re-designed the binding site of antibody AQC2 to improve its binding affinity to the I domain of human integrin VLA1. The authors assessed the effects of the mutations on the binding energy using the CHARMM[2,3] potential, with the electrostatic and desolvation energies calculated using the ICE software[4]. In total, 83 variants were identified for experimental validation, some of which included multiple mutations. The mutated antibodies were expressed in E. coli and their affinity for the antigen was measured. The best mutant included a total of four mutations, which improved the affinity by approximately one order of magnitude, from 7 nM to 850 pM. The crystal structure of the best mutant was solved to further study its interaction with the target.

Figure 1: Comparison of calculated and experimental binding free energies. (Lippow et al., 2007)

Lippow et al.[5] studied the interactions of three antibodies – the anti-epidermal growth factor receptor drug cetuximab[6], the anti-lysozyme antibody D44.1 and the anti-lysozyme antibody D1.3 – with their respective antigens. The energy calculations favoured mutations to large amino acids (such as Phe or Trp), most of which were found to be false positives. More accurate results were obtained using only the electrostatic term of the energy function. The authors improved the binding affinity of D44.1 by one order of magnitude and the affinity of cetuximab by two orders of magnitude. The antibody D1.3 didn’t show many opportunities for electrostatic improvement, and the authors suggest it might be an anomalous antibody.

Computational methods have recently been used to successfully introduce non-canonical amino acids (NCAAs) into antibody binding sites. Xu et al.[7] introduced L-DOPA (L-3,4-dihydroxyphenylalanine) into the CDRs of the anti-protective antigen scFv antibody M18 to crosslink it with its native antigen. The authors used the program Rosetta 3.4 to create models of the antibody-antigen complex with L-DOPA residues. The distance between L-DOPA and a lysine nucleophile was used as a predictor of crosslinking. The crosslinking efficiency was quantified as the fraction of antibodies that underwent a mass change, measured using Western blot assays. The measured average efficiency of the mutants was 10%, with a maximum efficiency of 52%.
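
A minimal sketch of the geometric predictor described above, with placeholder coordinates and a purely illustrative cutoff (the actual reactive atoms and distance criterion used in the paper are not reproduced here):

```python
import numpy as np

dopa_reactive_atom = np.array([12.3, 4.1, -2.8])   # placeholder coordinates from a model
lysine_nz_atom = np.array([14.0, 5.5, -1.1])       # placeholder lysine NZ coordinates

distance = np.linalg.norm(dopa_reactive_atom - lysine_nz_atom)
likely_to_crosslink = distance < 6.0               # illustrative threshold only
print(round(float(distance), 2), likely_to_crosslink)
```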

[1]      Clark LA, Boriack-Sjodin PA, Eldredge J, Fitch C, Friedman B, Hanf KJM, et al. Affinity enhancement of an in vivo matured therapeutic antibody using structure-based computational design. Protein Sci 2006;15:949–60. doi:10.1110/ps.052030506.

[2]      Brooks BR, Bruccoleri RE, Olafson DJ, States DJ, Swaminathan S, Karplus M. CHARMM: A Program for Macromolecular Energy, Minimization, and Dynamics Calculations. J Comput Chem 1983;4:187–217.

[3]      MacKerell Jr. AD, Brooks III CL, Nilsson L, Roux B, Won Y, Karplus M. CHARMM: The Energy Function and Its Parameterization with an Overview of the Program. In: Schleyer P v. R. et al., editors. vol. 1, John Wiley & Sons: Chichester; 1998, p. 271–7.

[4]      Kangas E, Tidor B. Optimizing electrostatic affinity in ligand–receptor binding: Theory, computation, and ligand properties. J Chem Phys 1998;109:7522. doi:10.1063/1.477375.

[5]      Lippow SM, Wittrup KD, Tidor B. Computational design of antibody-affinity improvement beyond in vivo maturation. Nat Biotechnol 2007;25:1171–6. doi:10.1038/nbt1336.

[6]      Sato JD, Kawamoto T, Le AD, Mendelsohn J, Polikoff J, Sato GH. Biological effects in vitro of monoclonal antibodies to human epidermal growth factor receptors. Mol Biol Med 1983;1:511–29.

[7]      Xu J, Tack D, Hughes RA, Ellington AD, Gray JJ. Structure-based non-canonical amino acid design to covalently crosslink an antibody-antigen complex. J Struct Biol 2014;185:215–22. doi:10.1016/j.jsb.2013.05.003.