
End of an era?

The Era of Crystallography ends…

For over 100 years, crystallography has been used to determine the arrangement of atoms within molecules; specifically, it has become the workhorse of routine macromolecular structure solution, being responsible for over 90% of the atomic structures in the PDB. Whilst this achievement is impressive, in some ways it has come about despite the crystallographic method, rather than because of it…

The problem, generally, is this: to perform crystallography, you need crystals. Crystals require the spontaneous assembly of billions of molecules into a regular repeated arrangement. For proteins, which are large, complex, irregularly shaped molecules, this is not generally a natural state for them to exist in, and getting a protein to crystallise can be a difficult process (the notable exception is Lysozyme, which it is difficult NOT to crystallise, and of which there are consequently ~1700 crystal structures in the PDB). Determining the conditions under which proteins will crystallise requires extensive screening: placing the protein into a variety of different solutions, in the hope that in one of these, the protein will spontaneously self-assemble into (robust, homogeneous) crystals. As for membrane proteins, which… exist in membranes, crystallisation solutions are sort of ridiculous (clever, but ridiculous).

But even once a crystal is obtained (and assuming it is a “good” well-diffracting crystal), diffraction experiments alone are generally not enough to determine the atomic structure of the crystal. In a crystallographic experiment, only half of the data required to solve the structure of the crystal is measured — the amplitudes. The other half of the data — the phases — are not measured. This constitutes the “phase problem” of crystallography, and “causes some problems”: developing methods to solve the phase problem is essentially a field of its own.

…and the Era of Cryo-Electron Microscopy begins

Cryo-electron microscopy (cryo-EM; primers here and here) circumvents both of the problems with crystallography described above (although of course it has some of its own). Single particles of the protein (or protein complex) are deposited onto grids and immobilised, removing the need for crystals altogether. Furthermore, the phases can be measured directly, removing the need to overcome the phase problem.

Cryo-EM is also really good for determining the structures of large complexes, which are normally out of reach for crystallography. Although cryo-EM structures used to be determined only at low resolution, this is changing quickly with improved experimental hardware.

Cryo-electron microscopy is getting better every day, and for structural biologists it is going to be difficult to avoid. Crystallographers needn't worry, though: there is hope.

Journal Club: Comments on Three X-ray Crystal Structure Papers

One of the fundamental weaknesses of X-ray crystallography, when used for the solution of macromolecular structures, is that the constructed models are based on the subjective interpretation of the electron density by the crystallographer.

This can lead to poor or simply incorrect models, as discussed by Stanfield et al. in their recent paper “Comment on Three X-ray Crystal Structure Papers” (link below). Here, they assert that the structures underpinning several papers by Dr. Salunke and his coworkers, a series of antibody-peptide complexes, are fundamentally flawed. It is argued that the experimental electron density does not support the presence of the peptide models: there is no significant positive OMIT density for the peptides when they are removed from the model, and the quality of the constructed models is poor, with unreasonably large B-factors.

Link to paper: http://www.jimmunol.org/content/196/2/521.1.


Firstly, a quick recap on crystallographic maps and how they are used. Two map types are principally used in macromolecular crystallography: composite maps and difference maps.

The composite map is used to approximate the electron density of the crystal. It is calculated from twice the observed density minus the modelled density, with correction factors (m, D) included to minimise phase bias. The factor of two on the observed term compensates for the poor phases, which cause un-modelled features to appear only weakly in the density. It is universally represented as a blue mesh:

[Image: example composite map, shown as a blue mesh]

The difference map is the modelled density subtracted from the observed density, and is used to identify un-modelled areas of the electron density. It contains the same correction factors to compensate for phase bias. It is universally represented as a green mesh for positive values, and a red mesh for negative values. The green and red meshes are always contoured at the same absolute values, e.g. ±1 or ±1.4.

[Image: example difference map, shown as green (positive) and red (negative) meshes]
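For reference, here is a compact summary of the two maps in terms of their Fourier coefficients (the standard sigma-A-weighted convention; the notation is mine, with m and D being the correction factors mentioned above):

```latex
% Composite ("2Fo-Fc") and difference ("Fo-Fc") map coefficients
F_{\mathrm{composite}}  = \left( 2m\,|F_{\mathrm{obs}}| - D\,|F_{\mathrm{calc}}| \right) e^{i\varphi_{\mathrm{calc}}}
F_{\mathrm{difference}} = \left( m\,|F_{\mathrm{obs}}| - D\,|F_{\mathrm{calc}}| \right) e^{i\varphi_{\mathrm{calc}}}
```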

The problem of identifying features to model in the electron density is the point where the subjectivity of the crystallographer is most influential. For ligands, this means identifying blobs that are “significant”, and that match the shape of the molecule to be modelled.

When a crystallographer is actively searching for the presence of a binding molecule, in this case a peptide, it is easy to misinterpret density as the molecule being searched for. You have to be disciplined and highly critical to avoid contouring to levels that are too low, and modelling into density that does not really match the model. This is the case in the series of structures criticised by Stanfield et al.


Specific concerns with the structures of Dr Salunke et al.

1: Contouring difference maps at only positive values (and colouring them blue?)

The first questionable thing that Dr Salunke et al do is to present a difference map contoured at only positive values as evidence for the bound peptide. This oddity is compounded by colouring the resulting map blue, which is unusual for a difference map.

[Image: difference density presented by Salunke et al, contoured at positive values only and coloured blue]

2: Contouring difference maps to low values

Salunke et al claim that the image above shows adequate evidence for the binding of the peptide in a difference map contoured at 1.7𝛔.

When contouring difference maps to such low levels, weak features will indeed be detectable, if they are present, but in solvent channels, where the crystal is an ensemble of disordered states, there is no way to interpret the density as an atomic model. Hence, a difference map at 1.7𝛔 will show blobs across all of the solvent channels in the crystal.

This fact, in itself, does not prove that the model is wrong, but makes it highly likely that the model is a result of observation bias. This observation bias occurs because the authors were looking for evidence of the binding peptide, and so inspected the density at the binding site. This has led to the over-interpretation of noisy and meaningless density as the peptide.

The reason that the 3𝛔 limit is used to identify crystallographic features in difference maps is that this identifies only strong un-modelled features that are unlikely to be noise, or a disordered feature.
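A quick back-of-the-envelope illustration (mine, not from the paper): if a difference map contained nothing but Gaussian noise, the fraction of grid points exceeding a given contour level follows directly from the normal distribution, which is why 1.7σ blobs are far less convincing than 3σ ones.

```python
from scipy.stats import norm

# Fraction of pure-noise grid points expected above a given positive contour level
for sigma in (1.0, 1.7, 3.0):
    frac = norm.sf(sigma)  # one-sided tail probability P(Z > sigma)
    print(f"+{sigma:.1f} sigma: ~{100 * frac:.2f}% of noise-only grid points exceed this level")
```

At +1.7σ roughly 4–5% of noise-only grid points lie above the contour, compared with ~0.1% at +3σ.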

More worryingly, the model does not actually fit the density very well.

3: Poor Model Quality

Lastly, the quality of the modelled peptides is very poor. The B-factors of the ligands are much higher than the B-factors of the surrounding protein side-chains. This is symptomatic of the modelled feature not being present in the data: the refinement program tries to “erase” the presence of the model by inflating the B-factors. This, again, does not prove that the model is wrong, but highlights the poor quality of the model.

Furthermore, the Ramachandran outliers in the peptides are extreme, with values in the 0th percentile of empirical values. This means that the conformer of the peptide is highly strained, and therefore highly unlikely.


Combining all of the evidence above, as presented in the article written by Stanfield et al, there is little doubt that the models presented by Salunke et al are incorrect. Individual failings in the models in one area could be explained, but such a range of errors across such a range of quality metrics cannot.

Journal Club: Accessing Protein Conformational Ensembles using RT X-ray Crystallography

This week I presented a paper that investigates the differences between crystallographic datasets collected from crystals at RT (room-temperature) and crystals at CT (cryogenic temperatures). Full paper here.

The cooling of protein crystals to cryogenic temperatures is widely used as a method of reducing radiation damage and enabling collection of whole datasets from a single crystal. In fact, this approach has been so successful that approximately 95% of structures in the PDB have been collected at CT.

However, the main assumption of cryo-cooling is that the “freezing”/cooling process happens quickly enough that it does not disturb the conformational distributions of the protein, and that the RT ensemble is “trapped” when cooled to CT.

Although it is well established that cryo-cooling of the crystal does not distort the overall structure or fold of the protein, this paper investigates some of the more subtle changes that cryo-cooling can introduce, such as the distortion of sidechain conformations or the quenching of dynamic CONTACT networks. These features of proteins could be important for the understanding of phenomena such as binding or allosteric modulation, and so accurate information about the protein is essential. If this information is regularly lost in the cryo-cooling process, it could be a strong argument for a return to collection at RT where feasible.

By using the RINGER method, the authors find that the sidechain conformations are commonly affected by the cryo-cooling process: the conformers present at CT are sometimes completely different to the conformers observed at RT. In total, they find that cryo-cooling affects a significant number of residues (predominantly those on the surface of the protein, but also those that are buried). 18.9% of residues have rotamer distributions that change between RT and CT, and 37.7% of residues have a conformer that changes occupancy by 20% or more.

Overall, the authors conclude that, where possible, datasets should be collected at RT, as the derived models offer a more realistic description of the biologically-relevant conformational ensemble of the protein.

5 Thoughts For… Comparing Crystallographic Datasets

Most of the work I do involves comparing diffraction datasets from protein crystals. We often have two or more different crystals of the same crystal system, and want to spot differences between them. The crystals are nearly isomorphous, so that the structure of the protein (and crystal) is almost identical between the two datasets. However, it’s not just a case of overlaying the electron density maps, subtracting them and looking at the difference. Nor do we necessarily want to calculate Fo-Fo maps, where we calculate the difference by directly subtracting the diffraction data before calculating maps. By the nature of the crystallographic experiment, no two crystals are the same, and two (nearly identical) crystals can lead to two quite different datasets.

So, here’s a list of things I keep in mind when comparing crystallographic datasets…

Control the Resolution Limits

1) Ensure that the resolution limits in the datasets are the same, both at the high AND the low resolution limits.

The High resolution limit. The best known, and (usually) the most important statistic of a dataset. This is a measure of the amount of information that has been collected about the crystal. Higher resolution data gives more detail in the electron density. Therefore, if you compare a 3A map to a 1A map, you’re comparing fundamentally different objects, and the differences between them will be predominantly from the different amount of information in each dataset. It’s then very difficult to ascertain what’s interesting, and what is an artefact of this difference. As a first step, truncate all datasets at the resolution you wish to compare them at.

The Low Resolution Limit. At the other end of the dataset, there can be differences in the low resolution data collected. Low resolution reflections correspond to much larger-scale features in the electron density. Therefore, it’s just as important to have the same low-resolution limit for both datasets, otherwise you get large “waves” of electron density (low-frequency fourier terms) in one dataset that are not present in the other. Because low-resolution terms are much stronger than high resolution reflections, these features stand out very strongly, and can also obscure “real” differences between the datasets you’re trying to compare. Truncate all datasets at the same low resolution limit as well.
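A minimal sketch of the idea (not using a real crystallographic library): represent each dataset as a dict of Miller indices to amplitudes and apply the same high- and low-resolution cutoffs to both. The orthorhombic d-spacing formula and the `truncate` helper are simplifying assumptions for illustration.

```python
import numpy as np

def d_spacing_orthorhombic(hkl, a, b, c):
    """d-spacing of reflection (h, k, l) for an orthorhombic cell:
       1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2 (a simplification; the general
       triclinic formula also involves the cell angles)."""
    h, k, l = hkl
    return 1.0 / np.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

def truncate(dataset, cell, d_min, d_max):
    """Keep only reflections with d_min <= d <= d_max.
       `dataset` is a dict {(h, k, l): amplitude}; `cell` is (a, b, c) in Angstroms."""
    return {hkl: f for hkl, f in dataset.items()
            if d_min <= d_spacing_orthorhombic(hkl, *cell) <= d_max}

# Apply the SAME limits to every dataset before calculating maps, e.g.:
# data_1 = truncate(data_1, cell_1, d_min=2.0, d_max=30.0)
# data_2 = truncate(data_2, cell_2, d_min=2.0, d_max=30.0)
```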

Consider the Unit Cell

2) Even if the resolution limits are the same, the number of reflections in maps can be different.

The Unit Cell size and shape. Even if the crystals you’re using are the same crystal form, no two crystals are the same. The unit cell (the building block of the crystal) can be slightly different sizes and shapes between crystals, varying in size by a few percent. This can occur for a variety of reasons, from the unpredictable process of cooling the crystal to cryogenic temperatures to entirely stochastic differences in the process of crystallisation. Since the “resolution” of a reflection depends on the size of the unit cell, two reflections with the same Miller index can have different “resolutions” when it comes to selecting reflections for map calculation. Therefore, if you’re calculating maps from nearly-isomorphous but non-identical crystals, consider calculating maps based on a high and a low Miller index cutoff, rather than a resolution cutoff. This ensures the same amount of information in each map (number of free parameters).
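To make the point concrete, here is a small illustration with hypothetical, nearly-isomorphous cell dimensions: the same Miller index sits at a slightly different d-spacing in each crystal, so a hard resolution cutoff near that value can keep the reflection in one dataset and drop it from the other.

```python
import numpy as np

def d_spacing(hkl, a, b, c):
    # Orthorhombic approximation, as in the sketch above
    h, k, l = hkl
    return 1.0 / np.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

hkl = (10, 3, 7)
print(d_spacing(hkl, 50.0, 60.0, 70.0))  # crystal 1: ~4.36 A
print(d_spacing(hkl, 50.7, 60.8, 70.9))  # crystal 2 (cell ~1.4% larger): ~4.42 A
# An hkl-based cutoff keeps this reflection in both datasets (or in neither).
```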

Watch for Missing Reflections

3) Remove any missing reflections from both datasets.

Reflections can be missing from datasets for a number of reasons, such as falling into gaps/dead pixels on the detector. However, this isn’t going to happen systematically with all crystals, as different crystals will be mounted in different orientations. When a reflection is missed in one dataset, it’s best to remove it from the dataset you’re comparing it to as well. This can have an important effect when the completeness of low- or high-resolution shells is low, whatever the reason.
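A minimal sketch of this housekeeping step (using the same assumed dict-of-reflections representation as above): keep only the reflections that are present in both datasets, so a reflection lost to a detector gap in one is also dropped from the other.

```python
def common_reflections(data_1, data_2):
    """Return both datasets restricted to their shared Miller indices.
       Each dataset is assumed to be a dict {(h, k, l): amplitude}."""
    shared = set(data_1) & set(data_2)
    return ({hkl: data_1[hkl] for hkl in shared},
            {hkl: data_2[hkl] for hkl in shared})

# data_1, data_2 = common_reflections(data_1, data_2)
```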

Not All Crystal Errors are Created Equal…

4) Different Crystals have different measurement errors.

Observation uncertainties of reflections will vary from crystal to crystal. This may be due to a poor-quality crystal, or a crystal that has suffered more radiation damage than another. These errors lead to uncertainty and error in the electron density maps. Therefore, if you’re looking for a reference crystal, you probably want to choose one whose reflections have uncertainties, σ(F), that are as small as possible.

Proteins are Flexible

5) Even though the crystals are similar, the protein may adopt slightly different conformations.

In real-space, the protein structure varies from crystal to crystal. For the same crystal form, there will be the same number of protein copies in the unit cell, and they will be largely in the same conformation. However, the structures are not identical, and the inherent flexibility of the protein can mean that the conformation seen in the crystal can change slightly from crystal to crystal. This effect is largest in the most flexible regions of the protein, such as unconstrained C- and N- termini, as well as flexible loops and crystal contacts.

Research Talk: Ligand Fitting in X-ray Crystallography

In the last group meeting, I reported on the success of ligand-fitting programs for the automated solution of ligand structures.

In Fragment Screens by X-ray Crystallography, a library of small compounds (fragments) is soaked into protein crystals, and the resulting structures are determined by diffraction experiments. Some of the fragments will bind to the protein (~5% of the library), and these are detected by their appearance in the derived electron density.

The models of binding fragments can be used to guide structure-based drug-design efforts, but first they must be built. Due to the large number of datasets (200-1000), the automated identification of the fragments that bind, and the automated building of atomic models is required for efficient processing of the data.

Density Blobs

Anecdotally, available ligand-fitting programs are unreliable when modelling fragments. We tested three ligand-fitting programs in refitting a series of ligand structures. We found that they fail more frequently when the electron density for the ligand is weak. Many fragments that are seen to bind in screens do so only weakly, due to their size. So the weaker the fragment binds, the harder it will be for the automated programs to model it.

Success Rates Identifying the Correct Model

Models are usually ranked by the Real-Space Correlation Coefficient (RSCC) between the model and the experimental electron density. This metric is good at identifying ‘correct’ models, and an RSCC > 0.7 normally indicates a correct, or at least mostly correct, model.
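A minimal sketch of an RSCC-style score (my simplification): the Pearson correlation between the model-derived and observed density over the grid points around the ligand. Production implementations handle map scaling and grid sampling more carefully.

```python
import numpy as np

def rscc(rho_calc, rho_obs, mask):
    """Real-space correlation between calculated and observed density.
       rho_calc, rho_obs: numpy arrays on the same grid; mask: boolean array
       selecting the grid points around the modelled ligand."""
    x = rho_calc[mask].astype(float).ravel()
    y = rho_obs[mask].astype(float).ravel()
    x -= x.mean()
    y -= y.mean()
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
```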

Typically, the binding locations of ligands are found by searching for un-modelled peaks in the electron density map. Models are then generated in these locations, and are then scored and ranked. Good models can be identified and presented to the user. However, if a ‘good’ model is never generated to be scored and ranked, the RSCCs of the ‘bad’ models will not tell you that there is something to be modelled at a particular site, and binding may be missed…

This is especially true for weak-binding ligands, which will not give a large electron density peak to give evidence that there is something there to be modelled.

Currently, all of the datasets must be inspected manually, to check that a weak-binding fragment has not been missed…

Journal Club: Statistical Quality Indicators for Electron Density Maps

This week I presented Ian Tickle’s 2012 paper “Statistical quality indicators for electron-density maps”. This paper presented new, statistically robust metrics for describing the agreement between an atomic model and the experimentally derived electron density.

Previous metrics such as the Real-Space R (RSR) and Real-Space Correlation Coefficient (RSCC) (Brandon & Jones, 1991, and others) are popular electron density metrics, and can inform on the quality of an atomic model. However, as Tickle argues, they cannot tell you in what way a model is good or bad, as they give no indication of either the accuracy or the precision of the electron density.

Accuracy:

Ian Tickle describes accuracy as – “How close are the results on average to the truth (regardless of precision)?” This is more often referred to as ‘error’. The most accurate model is the one that best agrees with the electron density.

Precision:

Precision is described as – “If you were to repeat the experiment, how much would you expect the results to vary (regardless of accuracy)?” This is more often described as ‘uncertainty’. Precision is a property of the crystal and the experiment. It is independent of the model.

A pictographic representation is shown below –

Pictographic representation of accuracy and precision. Taken from Tickle, 2012.

Before the discussion of the new metrics proposed, there are several assumptions that must be made and several influencing factors to be considered.

Assumptions:

  • The electron density, and the phases used to generate it, are accurate. This assumption is reasonable because density-based validation is generally done near to the end of refinement when the model is mostly correct.

Metric usefulness depends critically on:

  • Accurate calculation and representation of the electron density from our atomic model.
  • Accurate scaling of the observed and model density (neither the calculated nor the observed density is on an absolute scale).
  • Accurate determination of the area to be considered for the calculation of the metric. If too large an area is considered, noise and density from other features will influence the metric. Too small an area will not encompass the whole model and its environment.

Calculating the Model Density:

Accurate calculation of the model’s electron density is essential, as the profile of the atoms will of course affect the comparison of the model to the experimental density. Often (as in Jones 1991, and others) a fixed profile is assumed for all atoms. Of course, in reality the profile will depend on atom type, B-factors, data completeness, and resolution limits.

Due to the resolution limits, the electron density from an atom is the convolution of a 3D Gaussian and a sphere of constant scattering power (Blundell & Johnson, 1976). The truncated density function for an atom then becomes:

[Equation image: truncated density function for an atom]

Scaling the calculated density:

This, fortunately, is already available and calculated by refinement programs (when calculating 2mFo – DFc maps), and the correct scaling factor is the resolution-dependent D.

Visualising the quality of a model:

To demonstrate how the (global) quality of a model can easily be seen, Tickle calculates and normalises difference density maps for a good, and a bad, model. If the model is ‘correct’, then the difference density should be gaussian noise, but if the model is ‘incorrect’, it will be skewed. This can easily be seen in Figure 8 from the paper.

[Figure 8 from Tickle (2012): difference density distributions for a good and a bad model]

A difference density map is calculated, sorted by value and normalised to give a density distribution. For a good model, this should look like (a), where the density function is a gaussian ~ N(0,1). For a bad model, (b), the distribution is skewed.

The main feature that appears for a ‘bad’ model is the increased weight in the tails of the distribution. Extra weight on the left-hand side indicates modelled features that are not supported by the evidence, and extra weight on the right-hand side indicates under-modelled density.
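A rough sketch of this check (mine, not Tickle’s code): normalise the difference density and summarise how Gaussian it looks; noticeable skew or heavy tails point to a problem with the model.

```python
import numpy as np
from scipy import stats

def difference_density_summary(diff_map):
    """diff_map: numpy array of difference density values on the map grid."""
    z = (diff_map - diff_map.mean()) / diff_map.std()
    z = z.ravel()
    return {
        "skewness": float(stats.skew(z)),                    # ~0 for a well-explained map
        "excess_kurtosis": float(stats.kurtosis(z)),         # ~0 for N(0,1); >0 means heavy tails
        "frac_beyond_3sigma": float(np.mean(np.abs(z) > 3)), # ~0.27% for pure Gaussian noise
    }
```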

The New Accuracy Metric

Using the ideas from above (that the difference density between a model and the experimental density should be distributed as a gaussian) Tickle goes on to develop metrics for determining the likelihood that a model density and an experimental density differ only by the addition of random noise.

The metric he describes tests a hypothesis – Does the distribution of the difference density reflect that obtained from the propagation of random errors in the experimental data (and phases)?

To do this, statistical tests are developed. First we define the difference density Z-score (ZD)

ZD = Δρ / σ(Δρ)

This quantity is the difference between the calculated electron density and the experimental density (delta rho), divided by the error in the difference density, giving the normal definition of a normalised Z-score.

The difference density (the numerator) has been discussed above, so we now discuss the error in the difference density. Assuming that the experimental data and the phases are ‘correct’, any non-random errors arise only from the model.

That is, errors arising in the experimental data will appear only as random noise, whereas errors in the model will manifest as the signal that we are trying to detect.

To calculate the strength of the noise (that of the experimental data and the phases), we look at the bulk-solvent regions. Here, the atoms are unordered, and so should give uniform density. Any deviations from uniform should be approximately the random noise from the experimental data and the phases.
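A minimal sketch of this procedure (an interpretation, not the paper’s implementation): estimate the noise level from the difference density in the bulk-solvent region, then divide the whole difference map by it to obtain ZD values.

```python
import numpy as np

def zd_map(diff_map, solvent_mask):
    """Difference-density Z-scores.
       diff_map: numpy array of difference density values.
       solvent_mask: boolean array marking bulk-solvent grid points, where any
       deviation from flat density is taken as experimental/phase noise."""
    sigma_noise = diff_map[solvent_mask].std()
    return diff_map / sigma_noise
```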

Maximum Z-score analysis

Tickle considers using the maximum ZD of a sample as a test for model accuracy, and discusses merits and failings. In brief, if we were to sample from a difference density distribution, and take only the most significant ZD score, “focusing only on the maximum value inevitably overstates the significance of the results”.

A Chi-Squared test for ZD scores

The solution that Tickle proposes is to allow that all sample values may be significant (rather than just the largest values). He creates a joint probability density function of the absolute sample values (assumed half-normal and iid). This probability density function then becomes a chi-squared distribution.

[Equation image]

By calculating the CDF of the chi-squared distribution (a lower regularised gamma function), Tickle is able to obtain p-values for a set of observations.

[Equation image]

These can then be converted back to Z-scores, which crystallographers are more comfortable using. As Tickle states, just because the metric is in terms of Z-scores does not mean that the distribution is normal (here it is clearly a chi-squared).
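A sketch of the basic version of this test (without the ‘largest k values’ refinement discussed below): under the null hypothesis each ZD is standard normal, so the sum of squares is chi-squared distributed, and the resulting p-value can be re-expressed as an equivalent normal Z-score.

```python
import numpy as np
from scipy import stats

def zd_significance(zd_values):
    """Chi-squared test on a sample of difference-density Z-scores."""
    zd = np.asarray(zd_values, dtype=float)
    n = zd.size
    chi2_stat = float(np.sum(zd ** 2))                 # sum of squared Z-scores
    p_value = float(stats.chi2.sf(chi2_stat, df=n))    # P(chi2_n >= observed)
    equivalent_z = float(stats.norm.isf(p_value))      # equivalent one-sided normal Z-score
    return chi2_stat, p_value, equivalent_z
```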

Problem: Large Samples

The problem with the above is that for large samples, a small number of significant values will be drowned out by noise and the signal may be missed. The failure of the above test in this situation is put down to the choice of null hypothesis. Multiple null hypotheses are needed in order to cover all possibilities.

When distinguishing between multiple hypotheses, we must aim to avoid type II errors wherever possible, whilst attempting to minimise type I errors. We must select the hypothesis “that maximises the probability of obtaining a result less extreme than the one actually observed (…) or equivalently the one that minimises the probability of obtaining a result more extreme than that observed”.

Solution: A new JPDF and CDF

To solve this, Tickle takes a subset of the highest values of the original sample of n, say from i=k to n (the n-k highest scores), and calculates the chi-squared and its associated cumulative probability. The value of k is then chosen to give the highest probability:

[Equation image]

However, the cumulative probability of the chi-squared is no longer the regularised gamma function, due to the bias introduced by selecting the largest values. Recalculating the JPDF and integrating analytically and numerically to obtain the CDF, we could arrive at a result. This, however, has the problem of a large dimensionality, which requires the use of very accurate Monte Carlo integration (an accuracy of much better than 0.27% is required, since we are interested in p-values between 0.9973 and 1 – greater than 3 sigma).

Fortunately, an approximation can be made to bring about a practical solution.

Examples of Significant Distributions:

Tickle generates a table which gives several scenarios that will give a significant result, for different extremes of Z-value, and different sample sizes. One particularly key point is

…small samples are statistically less reliable so require a higher proportion of significant data points to achieve the same overall level of significance. Large samples require relatively fewer data points but they must have higher values to overcome the ‘multiple comparisons’ effect, where large values are more likely to occur purely as a result of random error.

Summary

B-factors:

Tickle shows early in the paper that the RSR and RSCC are correlated with the B-factor of the model. RSZD shows no correlation with the B-factor, as is desired.

RSZD+ & RSZD−:

More useful scores can be generated by scoring the negative and positive values of delta-rho separately. This gives two scores, RSZD+ and RSZD-. RSZD+ gives the significance/prevalence of unexplained density (missing atoms) and RSZD- gives the significance/prevalence of unjustified model/misplaced atoms.

Precision & Reliability:

Although not discussed in as much depth in the paper, Tickle also proposes a metric to account for the precision of the electron density

[Equation image: definition of RSZO]

This is clearly independent of the model, and is the signal-to-noise ratio of the average observed density in a specified region. Weak density (or large noise) will lead to a small RSZO, implying that any model placed here should be considered unreliable.

Structural Biology Module @ the DTC

As part of the DTC Structural Biology module (Feb 2014), first-year PhD students were given 3 days to answer one of several questions from fields within structural biology. The format had to be an automated presentation, and it had to be ENTERTAINING.

Video 1: Is Your Ligand Really There?

The pilot episode of the award-winning series “Protein Hour”…

Video 2: Protein-Protein Docking

Do not attempt to spoof “The Matrix” – That is impossible…

Video 3: Are Membrane Proteins Special?

An appeal from “Protein Relief 2014”…

Video 4: Structure-based and fragment-based drug design – do they really work?

Is stop-motion animation the next blockbuster in drug design?

Journal Club: Ligand placement based on prior structures: the guided ligand-replacement method

Last week I presented a paper by Klei et al. on a new module in the Phenix software suite. This module, entitled Guided Ligand-Replacement (GLR), aims to make it easier to place ligands during the crystallographic model-building process by using homologous models of the ligand-protein complex for the initial placement of the ligand.

In the situation where ligands are being added to a crystallographic protein model, a crystallographer must first build the protein model, identify the difference electron density, and then build the ligand into this density.

The GLR approach is particularly helpful in several cases:

  • In the case of large complex ligands, which have many degrees of freedom, it can take a long time to fit the ligand into the electron density. There may be many different conformations of the ligand that fit the difference electron density to a reasonable degree, and it is the job of the crystallographer to explore these different conformations. They must then identify the true model, or perhaps an ensemble of models in the case where the ligand is mobile or present in different, distinct, binding modes. GLR makes this process easier by using a template from a similar, previously-solved structure. The ligand position and orientation is then transplanted to the new structure to give a starting point for the crystallographer, reducing the tedium of initially placing the ligand.
  • In the case of a series of related crystal structures, where the same protein structure is determined a number of times, bound to different (but similar) ligands. This is common in the case of structure based drug-design (SBDD), where a compound is developed and elaborated upon to improve binding affinity and specificity to a particular protein. This process generates a series of crystal structures of the protein, bound to a series of ligands, where the binding modes of the ligands are similar in all of the structures. Therefore, using the position and orientation of the ligand from a structure is a good starting point for the placement of further elaborations of that ligand in subsequent structures.
  • In the case of several copies of the protein in the asymmetric unit of the crystal. After one copy of the ligand has been built, it can be quickly copied to the other protein copies, removing the need for the crystallographer to undertake this menial and tedious task.

Program Description:

The required inputs for GLR are standard, as required by any ligand-fitting algorithm, namely:

  • The APO structure of the protein (the structure of the protein without the ligand)
  • A description of the ligand (whether as a SMILES string, or as a cif file etc)
  • An mtz file containing the experimental diffraction data

Overview of the program:


Fig 1. Program Overview.

> Identification of the reference structure

Firstly, the program must determine the reference structure to be used as a template. This can be specified by the user, or GLR can search a variety of sources to find the best template. The template selection process is outlined below. Reference structures are filtered by the protein sequence identity, similarity of the molecular weights of the ligands, and finally by the similarity of the binary chemical fingerprints of the ligands (as calculated by the Tanimoto coefficient).


Fig 2. Reference Structure selection flow diagram.

Little justification is given for these cutoffs, although it is generally accepted that proteins with above 70% sequence identity are highly structurally similar. The Tanimoto coefficient cutoff of 0.7 presumably only serves to remove the possibility of very low-scoring matches, as, if multiple potential reference structures are available, the highest Tanimoto-scored ligand match is used. They do not, however, say how they balance the choice in the final stage, where they take the ligand with the highest Tanimoto score and resolution.

The method for assigning the binary chemical fingerprints can be found here (small error in link in paper).
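For reference, a minimal sketch of the Tanimoto (Jaccard) coefficient on binary fingerprints, the ligand-similarity measure used in the filtering step above. Representing a fingerprint as a Python set of ‘on’ bit positions is my simplification; the paper uses its own fingerprinting scheme (linked above).

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints,
       each given as a set of 'on' bit positions."""
    common = len(fp_a & fp_b)
    return common / (len(fp_a) + len(fp_b) - common)

# e.g. tanimoto({1, 4, 9, 16}, {1, 4, 9, 25}) == 3 / 5 == 0.6
```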

> Superposition of Reference and Target structures

Once a reference structure has been selected, GLR uses graph-matching techniques from eLBOW to find the correspondences between atoms in the reference and target ligands. These atomic mappings are used to orient and map the target ligand onto the reference ligand.

Once the reference protein-ligand structure is superposed onto the target protein, these atomic mappings are used to place the target ligand.

The target complex then undergoes a real-space refinement to adjust the newly-placed ligand to the electron density. This allows the parts of the target ligand that differ from the reference ligand to adopt the correct orientation (as they will have been orientated arbitrarily by the graph-matching and superposition algorithms).

> Summary, Problems & Limitations

GLR allows the rapid placement of ligands when a homologous complex is available. This reduces the need for computationally intensive ligand-fitting programs, or for tedious manual building.

For complexes where a homologous complex is available, GLR will be able to quickly provide the crystallographer with a potential placement of the ligand. However, at the moment, GLR does not perform any checks on the validity of the placement. There is no culling of the placed ligands based on their agreement with the electron density, and the decision as to whether to accept the placement is left to the crystallographer.

As the authors recognise in the paper, there is the problem that GLR currently removes any overlapping ligands that are placed by the program. This means that GLR is unable to generate multiple conformations of the target ligand, as all but one will be removed (that which agrees best with the electron density). As such, the crystallographer will still need to check whether the proposed orientation of the ligand is the only conformation present, or whether they must build additional models of the ligand.

As it is, GLR seems to be a useful time-saving tool for crystallographic structure solution. Although it is possible to incorporate the tool into automated pipelines, I feel that it will be mainly used in manual model-building, due to the problems above that require regular checking by the crystallographer.

There are several additions that could be made to overcome the current limits of the program, as identified in the paper. These mainly centre around generating multiple conformations and validating the placed ligands. If implemented, GLR will become a highly useful module for the solution of protein-ligand complexes, especially as the number of structures with ligands in the PDB continues to grow.

A Colourblind Guide to Colourful Presentations…

Like many people, I am colourblind.

Fortunately I am only ‘mildly’ red-green colourblind and it doesn’t have a huge detrimental effect on my life.

Firstly, to dispel a couple of misconceptions:

  1. I can still see colour. ‘blindness’ here would be better called ‘deficiency’ or ‘desensitivity’. I am simply less sensitive to reds/greens than the ‘normal’ eye. Whilst I can discriminate between blues/yellows, it is harder to distinguish some reds from some greens.
  2. Colour blindness is not the swapping of colours. I don’t accidentally call red and green the wrong things – I just can’t tell what a colour is in some cases.
  3. I have no problem with traffic lights.
  4. Colour blindness does not mean poor eyesight. My cornea, lens, etc work fine, thank-you-very-much.

Approximately 8% of men and 0.5% of women are colourblind to various extents. There is a wide range of types, and severities, of colourblindness. For more information, there are a number of websites with helpful descriptions – This, for example…

There’s even a nature paper about colour blindness awareness…

The standard tests for colour-blindness are the well-recognised Ishihara colour tests. Let’s do a few (just for fun)…


An example Ishihara Colour Test. Most people can see the ’12’ in the image. Image: Wikipedia


Another Colour Test. You might be able to see a ‘6’ in the image (I can’t…). Image: Wikipedia


Another Colour Test. You should see nothing in this image. I see a ‘2’. Image: Wikipedia


The last one. You should see a ’42’. I can just about see a nondescript blur that might be a ‘4’ on the left hand side. Image: Wikipedia

To give an idea of what it’s like, this page gives a very good example. For a theatre booking system, they indicate the seats that offer a restricted view of the stage –


Restricted view seats are clearly indicated – or are they? Image: www.digitalartsonline.co.uk

Whilst most people will be able to tell where the best seats are, for those with colour blindness it might not be so easy. The image below shows the same image from the point of view of someone with colour blindness – can you still be sure of which seat is which?


Still clear? Image: www.digitalartsonline.co.uk

Mostly, being colourblind doesn’t affect my life (when I’m not at the theatre). However, there is one area of my life where being colourblind is *really* annoying: presentations (and picking ties to match shirts, but I’ve got that figured out now).

So here’s the Nick-approved guide to making colour-blind friendly presentations.

  1. Choose a colour scheme that is colour-blind friendly – these are readily available online. This is mainly for graphs. Just generally avoid pale green-pale red mixtures. Purples and pinks can also be pretty confusing.
  2. Along with the above, high contrast colour schemes can be very hard to see. For instance, a presentation with a white background can make it difficult to see coloured things on the slide, as everything is drowned out by the white background – especially yellow/green text. It is also very tiring to the eye. Try dark-coloured fonts on a light-coloured background.
  3. In graphs, don’t just use colours to match lines to the legend – matching colours from lines to the colours on the legend is hard – use shapes as well, or label the lines. An example.
  4. If 3. is impossible, make the lines on graphs a decent thickness – small areas of colour are harder to determine.
  5. When referring to a slide, try not to refer to ‘the red box’. Refer instead to ‘the rounded red box in the top-right of the screen’.
  6. Please don’t use red laser pointers – these are evil [citation needed]. The red light is not easily distinguishable on bright screens (or if it’s zipping around the screen). Use a green laser pointer instead. Not only are green laser pointers generally more powerful, and therefore brighter, but they are also easier to see. Why?

For a fairly comprehensive guide of how to make colour-friendly presentations, look at this page. And for checking how things might look, there are many colour-blind simulators for both images and webpages.

I hope this helps to create colour-friendly presentations.

Research Talk: High Resolution Antibody Modelling

In keeping with the other posts in recent weeks, and providing a certain continuity, this post also focusses on antibodies. For those of you that have read the last few weeks’ posts, you may wish to skip the first few paragraphs, otherwise things may get repetitive…

Antibodies are key components of the immune system, with almost limitless potential variability. This means that the immune system is capable of producing antibodies with the ability to bind to almost any target. Antibodies exhibit very high specificity and very high affinity towards their targets, and this makes them excellent at their job – of marking their targets (antigens) to identify them to the rest of the immune system, either for modification or destruction.

Immunoglobulin G (IgG) Structure

(left) The Immunoglobulin (IgG) fold, the most common fold for antibodies. It is formed of four chains, two heavy and two light. The binding regions of the antibody are at the ends of the variable domains VH and VL, located at the ends of the heavy and light chains respectively. (right) The VH domain. At the end of both the VH and the VL domains are three hypervariable loops (CDRs) that account for most of the structural variability of the binding site. The CDRs are highlighted in red. The rest of the domain (coloured in cyan), that is not the CDRs, is known as the framework.

Over the past few years, the use of antibodies as therapeutic agents has increased. It is now at the point where we are beginning to computationally design antibodies to bind to specific targets. Whether they are designed to target cancer cells or viruses, the task of designing the CDRs to complement the antigen perfectly is a very difficult one. Computationally, the best way of predicting the affinity of an antibody for an antigen is through the use of docking programs.

For best results, high-resolution and very accurate models of both the antibody and the antigen are needed. This is because small changes in the antibody’s sequence can be seen, experimentally, to produce large changes in the affinity.

Many antibody modelling protocols currently exist, including WAM, PIGS, and RosettaAntibody. These use a variety of approaches. WAM and PIGS use homology modelling approaches to model the framework, augmented with expert knowledge-based rules to model the CDRs. RosettaAntibody also uses homology modelling to model the framework of the antibody, but then uses the Rosetta protocol to perform an exploration of the conformational space to find the lowest energy conformation.

However, there are several problems that remain. The orientation between the VH domain and the VL domain is shown to be instrumental in the high binding affinity of the antibody. Mutations to framework residues that change the orientation of the VH and VL domains have been shown to cause significant changes to the binding affinity.

Because of the multi-chain modelling problem, which currently has no general solution, the current approach is often to copy the orientation across from the template antibody to create the orientation of the target antibody. (The three examples above do perform some extent of orientation optimisation using conserved residues at the VH-VL interface.)

However, before we begin to consider how to effect the modelling of the VH-VL interface, we must first build the VH and the VL separately. All of the domain folds in the IgG structure are very similar, consisting of two anti-parallel beta sheets sandwiched together. These beta sheets are very well conserved. The VH domain is harder to model because it contains the CDR H3 – which is the longest and most structurally variable of the 6 CDRs – so we may as well start there…


Framework structural alignment of 605 non-redundant VHs (made non-redundant @95% sequence identity). The beta sheet cores are very well conserved, but the loops exhibit more structural variability (although not that much by general protein standards…). The stumps where the CDRs have been removed are labelled.

But even before we start modelling the VH, how hard is the homology modelling problem likely to be for the average VH sequence that we come across? Extracting all of the VH sequences from the IMGT database (72,482 sequences) we find the structure in SAbDab (Structural Antibody Database) that exhibits the highest sequence identity to each of the sequences. This is the structure that would generally be used as the template for modelling. Results below…

[Figure: distribution of best-template sequence identities for the IMGT VH sequences]

 

Most of the sequences have a best template with over 70% sequence identity, so modelling them with low RMSDs (< 1 Angstrom) should be possible. However, there are still those that have lower sequence identity. These could be problematic…
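A minimal sketch of the template search described above (my simplification): for each query VH sequence, pick the database entry with the highest sequence identity. The naive identity function assumes pre-aligned, comparable sequences; a real pipeline would use a proper alignment.

```python
def sequence_identity(seq_a, seq_b):
    """Fraction of identical positions between two aligned sequences."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / max(len(seq_a), len(seq_b))

def best_template(query_seq, template_db):
    """template_db: dict mapping structure identifiers to aligned VH sequences.
       Returns the (identifier, sequence) pair with the highest identity to the query."""
    return max(template_db.items(),
               key=lambda item: sequence_identity(query_seq, item[1]))
```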

When we are analysing the accuracy of our models, we often generate models for which we have experimentally derived crystal structures, and then compare them. But a crystal structure is not necessarily the native conformation of the protein, and some of the solvents added to aid the crystallisation could well distort the structure in some small (or possibly large) way. Or perhaps the protein is just flexible, and so we wouldn’t expect it to adopt just one conformation.

Again using SAbDab to help generate our datasets, we found the maximum variation (backbone RMSD) between sequence-identical VH domains, for the framework region only. How different can 100% identical sequences get? Again, results are below…

[Figure: maximum framework backbone RMSDs between sequence-identical VH domains]

We see that even for 100% identical domains, the conformations can be different enough to give a significant RMSD. The 1.4A RMSD between PDB entries 4fqc and 4fq1 is due to a completely different conformation of one of the framework loops.
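For completeness, a sketch of the comparison metric itself: backbone RMSD after optimal superposition (the Kabsch algorithm), assuming the two structures have already been matched atom-for-atom over the framework region.

```python
import numpy as np

def backbone_rmsd(coords_a, coords_b):
    """RMSD between two (N, 3) coordinate arrays after optimal superposition."""
    a = coords_a - coords_a.mean(axis=0)    # centre both coordinate sets
    b = coords_b - coords_b.mean(axis=0)
    u, s, vt = np.linalg.svd(a.T @ b)       # SVD of the covariance matrix
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against an improper rotation (reflection)
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    a_rotated = a @ rotation.T
    return float(np.sqrt(np.mean(np.sum((a_rotated - b) ** 2, axis=1))))
```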

So, although antibody modelling is easy in some respects – high conservation, large number of available structures for templates – it is not just a matter of getting it ‘close’, or even ‘good’. It’s about getting it as near to perfect as possible… (even though perfect may be ~ 0.4 A RMSD over the framework…)

Watch this space…

“Perfection is not attainable, but if we chase perfection we can catch excellence.”

(Vince Lombardi )