Category Archives: Group Meetings

What we discuss during cake at our Tuesday afternoon group meetings

Novelty in Drug Discovery

The primary aim of drug discovery is to find novel molecules that are active against a target of therapeutic relevance and that are not covered by any existing patents (1). Due to the increasing cost of research and development in the later stages of drug discovery, and the increasing number of drug candidates failing at these stages, there is a desire to select the most diverse set of active molecules at the earliest stage of drug discovery, to maximise the chance of finding a molecule that can be optimised into a successful drug (2,3). Computational methods that are both accurate and efficient are one approach to this problem and can augment experimental approaches in deciding which molecules to take forward.

But what do we mean by a “novel” compound? When prioritising molecules for synthesis, which characteristics do we want to be different? It was once common to select subsets of hits to maximise chemical diversity in order to cover as much chemical space as possible (4). These novel lead molecules could subsequently be optimised, the idea being that maximising the coverage of chemical space would maximise the chance of finding a molecule that could be optimised successfully. More recently, however, the focus has shifted to “biodiversity”: diversity in terms of how the molecule interacts with the protein (1). Activity cliffs, pairs of molecules that are structurally and chemically similar but have a large difference in potency, indicate that chemical diversity may not be the best descriptor for identifying molecules that interact with the target in sufficiently diverse ways. The molecules taken forward should be both active against the target and diverse in terms of how they interact with it, and which parts of the binding site they contact.
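As a flavour of the classical chemical-diversity approach, here is a minimal sketch (not from any of the cited papers; it assumes RDKit is installed and uses hypothetical hits) of diverse subset selection with RDKit’s MaxMin picker:

# Sketch: pick a maximally diverse subset of hits by Morgan fingerprint.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.SimDivFilters.rdSimDivPickers import MaxMinPicker

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC", "c1ccncc1"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

# Greedily pick 3 of the 5 hits so that the picked fingerprints are
# maximally dissimilar to each other.
picks = MaxMinPicker().LazyBitVectorPick(fps, len(fps), 3)
print([smiles[i] for i in picks])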

This raises two interesting ideas. The first is prioritising molecules that form the same interactions as molecules known to bind but are chemically different: scaffold hopping (5). The second is prioritising molecules that potentially form different interactions to known binders. I hope to explore this in the coming months as part of my research.

References

(1) J. K. Medina-Franco et al., Expert Opin. Drug Discov., 2014, 9, 151-156.

(2) A. S. A. Roy, Project FDA Report, 2012, 5.

(3) J. Avorn, N. Engl. J. Med., 2015, 372, 1877-1879.

(4) P. Willett, J. Comput. Biol., 1999, 6, 447-457.

(5) H. Zhao, Drug Discov. Today, 2007, 12, 149-155.

New toys for OPIG

OPIG recently acquired 55 additional computers, all of the same make and model; they are of a decent specification (for 2015), each with a quad-core i5 processor and 8GB of RAM. But what to do with them? Cluster computing time!

Along with a couple of support servers, this provides us with 228 computation cores, 440GB of RAM and >40TB of storage. Whilst this would be a tremendous specification for a single computer, parallel computing on a cluster is a significantly different beast.

This kind of architecture and parallelism really lends itself to certain classes of problems, especially those that have:

  • Independent data
  • Parameter sweeps
  • Multiple runs with different random seeds
  • Dirty great data sets
  • Data that can be split into chunks requiring little inter-processor communication

With a single processor and a single core, a computer looks like this:
[Figure: a single processor with a single core]

These days, when multiple processor cores are integrated onto a single die, the cores are normally independent but share a last-level cache and both can access the same memory. This gives a layout similar to the following:
[Figure: two cores sharing a last-level cache and memory]

Add more cores or more processors to a single computer and you start to tessellate the above. Each pair of cores has access to its own shared cache and its own memory, and can also access the memory attached to any other pair, although accessing memory physically attached to other cores comes at the cost of increased latency.
[Figure: four cores in pairs, each pair with its own cache and memory]

Cluster computing, on the other hand, rarely exhibits this flat memory architecture, as no node can directly access another node’s memory. Instead we use a Message Passing Interface (MPI) to pass messages between nodes. Though it takes a little time to wrap your head around working this way, effectively every processor simultaneously runs the exact same piece of code, the sole difference being the “rank” of the execution core. A simple example of MPI is getting every core to greet us with the traditional “Hello World” and tell us its rank. A single execution with mpirun simultaneously executes the code on multiple cores:

$ mpirun -n 4 ./helloworld_mpi
Hello, world, from processor 3 of 4
Hello, world, from processor 1 of 4
Hello, world, from processor 2 of 4
Hello, world, from processor 0 of 4

Note that the responses aren’t in order; some cores may have been busy (for example handling the operating system) so couldn’t run their code immediately. Another simple example of this is a sort. We could, for example, tell every processor to take several million values, find the smallest, and pass that value in a message to whichever core has rank 0. The rank 0 core then only has to sort a much smaller set of values. Below is the kind of speedup that was achieved by simply splitting the same problem over 4 physically independent computers of the cluster.
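For those curious what such code looks like, here is a minimal sketch of both examples in Python with mpi4py (the original helloworld_mpi binary was presumably compiled C; this is an assumed equivalent):

# Run with: mpirun -n 4 python min_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

print("Hello, world, from processor %d of %d" % (rank, size))

# Each rank finds the minimum of its own chunk of values...
local_values = np.random.rand(2_000_000)
local_min = local_values.min()

# ...and rank 0 gathers the per-rank minima, leaving it a far
# smaller set of values to sort.
minima = comm.gather(local_min, root=0)
if rank == 0:
    print(sorted(minima))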

[Figure: speedup from splitting the problem across 4 cluster nodes]

As not everyone in the group will have the time or inclination to MPI-ify their code, there is also HTCondor. HTCondor is a workload management system for compute-intensive jobs, which allows jobs to be queued, scheduled, assigned priorities and distributed from a single head node to processing nodes, with the results copied back on demand. The OPIG server provides the job distribution system, whilst SkyOctopus provides shared storage on every computation node. Should a required package not be available on all of the computation nodes, SkyOctopus can reach down and remotely modify the software installations on all of the lesser computation nodes.
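For a flavour of how jobs are described to HTCondor, here is a minimal sketch of a submit file (the executable and file names are hypothetical); it would be submitted with condor_submit:

# Queue ten copies of a script, one per process ID; HTCondor
# copies the output files back when each job finishes.
executable = run_analysis.sh
arguments  = $(Process)
output     = results/job.$(Process).out
error      = results/job.$(Process).err
log        = analysis.log
queue 10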

Loop Model Selection

As I have talked about in previous blog posts (here and here, if interested!), the majority of my research so far has focussed on improving our ability to generate loop decoys, with a particular focus on the H3 loop of antibodies. The loop modelling software that I have been developing, Sphinx, is a hybrid of two other methods – FREAD, a knowledge-based method, and our own ab initio method. By using this hybrid approach we are able to produce a decoy set that is enriched with near-native structures. However, while the ability to produce accurate loop conformations is a major advantage, it is by no means the full story – how do we know which of our candidate loop models to choose?

[Figure: a set of candidate loop decoys to be ranked]

In order to choose which model is the best, a method is required that scores each decoy, thereby producing a ranked list with the conformation predicted to be best at the top. There are two main approaches to this problem – physics-based force fields and statistical potentials.

Force fields are functions used to calculate the potential energy of a structure. They normally include terms for bonded interactions, such as bond lengths, bond angles and dihedral angles, and for non-bonded interactions, such as electrostatics and van der Waals forces. In principle they can be very accurate; however, they have certain drawbacks. Since some terms have a very steep dependency on interatomic distance (in particular the non-bonded terms), very slight conformational differences can have a huge effect on the score. A loop conformation that is very close to the native could therefore be ranked poorly. In addition, solvation terms have to be used; this is especially important in loop modelling applications, since loop regions are generally found on the surface of proteins, where they are exposed to solvent molecules.

The alternatives to physics-based force fields are statistical potentials. In this case, a score is obtained by comparing the model structure (i.e. its interatomic distances and contacts) to experimentally derived structures. As a very simple example, if the distance between the backbone N and Cα of a residue in a loop model is 2Å, but this distance has not been observed in known structures, we can assume that a distance of 2Å is energetically unfavourable, and hence that this model is unlikely to be close to the native structure. Advantages of statistical potentials over force fields are their relative ‘smoothness’ (i.e. small variations in structure do not affect the score as much), and the fact that not all interactions need to be completely understood; if examples of these interactions have been observed before, they will automatically be taken into account.
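As a toy illustration of how such a potential can be derived (this is the generic inverse Boltzmann idea, not any of the published potentials tested below), observed distance frequencies are converted to pseudo-energies via E(d) = -kT ln(p_obs(d)/p_ref(d)):

# Toy distance-dependent statistical potential (hypothetical data).
import numpy as np

def pseudo_energies(observed, reference, bins, kT=0.593):
    # Per-bin pseudo-energy: E = -kT * ln(p_obs / p_ref).
    p_obs, _ = np.histogram(observed, bins=bins, density=True)
    p_ref, _ = np.histogram(reference, bins=bins, density=True)
    eps = 1e-9  # guard against log(0) for unobserved bins
    return -kT * np.log((p_obs + eps) / (p_ref + eps))

def score(model_distances, energies, bins):
    # Sum the bin energies for the model's interatomic distances;
    # lower scores indicate more native-like geometry.
    idx = np.clip(np.digitize(model_distances, bins) - 1, 0, len(energies) - 1)
    return energies[idx].sum()

bins = np.linspace(0.0, 15.0, 31)
E = pseudo_energies(np.random.normal(5, 1, 10000),
                    np.random.uniform(0, 15, 10000), bins)
print(score(np.array([2.0, 4.8, 5.1]), E, bins))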

I have tested several statistical potentials (including calRW, DFIRE, DOPE and SoapLoop) by using them to rank the loop decoys generated by our hybrid method, Sphinx. Unfortunately, none of them were consistently able to choose the best decoy out of the set. The average RMSD (across 70 general loop targets) of the top-ranked decoy ranged between 2.7Å and 4.74Å for the different methods – the average RMSD of the actual best decoy was much lower at 1.32Å. Other researchers have also found loop ranking challenging – for example, in the latest Antibody Modelling Assessment (AMA-II), ranking was seen as an area for significant improvement. In fact, model selection is seen as such an issue that protein structure prediction competitions like AMA-II and CASP allow the participants to submit more than one model. Loop model selection is therefore an unsolved problem, which must be investigated further to enable reliable predictions to be made.

Network Pharmacology

The dominant paradigm in drug discovery has been one of finding small molecules (or more recently, biologics) that bind selectively to one target of therapeutic interest. This reductionist approach conveniently ignores the fact that many drugs do, in fact, bind to multiple targets. Indeed, systems biology is uncovering an unsettling picture for comfortable reductionists: the so-called ‘magic bullet’ of Paul Ehrlich, a single compound that binds to a single target, may be less effective than a compound with multiple targets. This new approach—network pharmacology—offers new ways to improve drug efficacy, rescue orphan drugs, re-purpose existing drugs, predict targets, and predict side-effects.

Building on work Stuart Armstrong and I did at InhibOx, a spinout from the University of Oxford’s Chemistry Department, and inspired by the work of Shoichet et al. (2007), Álvaro Cortes-Cabrera and I took our ElectroShape method, designed for ultra-fast ligand-based virtual screening (Armstrong et al., 2010 & 2011), and built a new way of exploring the relationships between drug targets (Cortes-Cabrera et al., 2013). Ligand-based virtual screening is predicated on the molecular similarity principle: similar chemical compounds have similar properties (see, e.g., Johnson & Maggiora, 1990). ElectroShape built on the earlier pioneering USR (Ultra-fast Shape Recognition) work of Pedro Ballester and Prof. W. Graham Richards at Oxford (Ballester & Richards, 2007).

Our new approach addressed two inherent limitations of the network pharmacology approaches available at the time:

  • Chemical similarity is calculated on the basis of the chemical topology of the small molecule; and
  • Structural information about the macromolecular target is neglected.

Our method addressed these issues by taking into account 3D information from both the ligand and the target.

The approach involved comparing the set of ligands known to bind to a given protein with the equivalent sets of ligands of all other known drug targets in DrugBank (Wishart et al., 2006). DrugBank is a tremendous “bioinformatics and cheminformatics resource that combines detailed drug (i.e. chemical, pharmacological and pharmaceutical) data with comprehensive drug target (i.e. sequence, structure, and pathway) information.” This analysis generated a network of related proteins, connected by the similarity of the sets of ligands known to bind to them.

We looked at two different kinds of ligand similarity metric: the inverse Manhattan distance of our ElectroShape descriptors, and 2D Morgan fingerprints, calculated using the wonderful open-source cheminformatics toolkit RDKit, from Greg Landrum. Morgan fingerprints use connectivity information similar to that used for the well-known ECFP family of fingerprints, which had been used in the SEA method of Keiser et al. (2007). We also looked at the problem from the receptor side, comparing the active sites of the proteins. These complementary approaches produced networks that shared a minimal fraction (0.36% to 6.80%) of nodes: while the direct comparison of target ligand-binding sites can give valuable information for achieving some kind of target specificity, ligand-based networks may contribute information about unexpected interactions for side-effect prediction and polypharmacological profile optimization.
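To give a flavour of the two metrics, here is a minimal sketch (hypothetical molecules and made-up descriptor values; assumes RDKit is installed) contrasting 2D Morgan/Tanimoto similarity with the inverse Manhattan similarity used for USR/ElectroShape descriptor vectors:

# 2D similarity: Morgan fingerprints compared with the Tanimoto coefficient.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

m1 = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
m2 = Chem.MolFromSmiles("OC(=O)c1ccccc1O")        # salicylic acid
fp1 = AllChem.GetMorganFingerprintAsBitVect(m1, 2, nBits=2048)
fp2 = AllChem.GetMorganFingerprintAsBitVect(m2, 2, nBits=2048)
print("2D Tanimoto:", DataStructs.TanimotoSimilarity(fp1, fp2))

# 3D similarity: USR-style inverse Manhattan distance between descriptor
# vectors (these particular numbers are invented for illustration).
def inverse_manhattan(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return 1.0 / (1.0 + np.abs(a - b).mean())

print("3D similarity:", inverse_manhattan([4.1, 1.2, 0.3], [3.9, 1.0, 0.5]))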

Our new target-fishing approach was able to predict drug adverse effects, build polypharmacology profiles, and relate targets from two complementary viewpoints: ligand-based and target-based networks. We used the DUD and WOMBAT benchmark sets for on-target validation, and the results were directly comparable to those obtained using other state-of-the-art target-fishing approaches. Off-target validation was performed using a limited set of non-annotated secondary targets for already known drugs. Comparison of the predicted adverse effects with data contained in the SIDER 2 database showed good specificity and reasonable selectivity. All of these features were implemented in a user-friendly web interface that: (i) can be queried for both polypharmacology profiles and adverse effects, (ii) links to related targets in ChEMBLdb in the three networks (2D ligand, 4D ligand and 3D receptor), and (iii) displays the 2D structure of already annotated drugs.

[Figure: screenshot of the ElectroShape polypharmacology web server]

References

Armstrong, M. S., G. M. Morris, P. W. Finn, R. Sharma, L. Moretti, R. I. Cooper and W. G. Richards (2010). “ElectroShape: fast molecular similarity calculations incorporating shape, chirality and electrostatics.” J Comput Aided Mol Des, 24(9): 789-801. 10.1007/s10822-010-9374-0.

Armstrong, M. S., P. W. Finn, G. M. Morris and W. G. Richards (2011). “Improving the accuracy of ultrafast ligand-based screening: incorporating lipophilicity into ElectroShape as an extra dimension.” J Comput Aided Mol Des, 25(8): 785-790. 10.1007/s10822-011-9463-8.

Ballester, P. J. and W. G. Richards (2007). “Ultrafast shape recognition to search compound databases for similar molecular shapes.” J Comput Chem, 28(10): 1711-1723. 10.1002/jcc.20681.

Cortes-Cabrera, A., G. M. Morris, P. W. Finn, A. Morreale and F. Gago (2013). “Comparison of ultra-fast 2D and 3D ligand and target descriptors for side effect prediction and network analysis in polypharmacology.” Br J Pharmacol, 170(3): 557-567. 10.1111/bph.12294.

Johnson, M. A., & G. M. Maggiora (1990). “Concepts and Applications of Molecular Similarity.” New York: John Wiley & Sons.

Landrum, G. (2011). “RDKit: Open-source cheminformatics.” from http://www.rdkit.org.

Keiser, M. J., B. L. Roth, B. N. Armbruster, P. Ernsberger, J. J. Irwin and B. K. Shoichet (2007). “Relating protein pharmacology by ligand chemistry.” Nat Biotechnol, 25(2): 197-206. 10.1038/nbt1284.

Wishart, D. S., C. Knox, A. C. Guo, S. Shrivastava, M. Hassanali, P. Stothard, Z. Chang and J. Woolsey (2006). “DrugBank: a comprehensive resource for in silico drug discovery and exploration.” Nucleic Acids Res, 34(Database issue): D668-672. 10.1093/nar/gkj067.

Co-translational insertion and folding of membrane proteins

The alpha-helical bundle is the most common type of fold for membrane proteins. Their diverse functions include transport, signalling, and catalysis. While structure determination is much more difficult for membrane proteins than it is for soluble proteins, it is accelerating and there are now 586 unique proteins in the database of Membrane Proteins of Known 3D Structure. However, we still have quite a poor understanding of how membrane proteins fold. There is increasing evidence that it is more complicated than the two-stage model proposed in 1990 by Popot and Engelman.

The machinery that inserts most alpha-helical membrane proteins is the Sec apparatus. In prokaryotes it is located in the plasma membrane, while eukaryotic Sec is found in the ER. Sec itself is an alpha-helical bundle in the shape of a pore, and its structure is able both to allow peptides to pass fully across the membrane and to open laterally to insert transmembrane helices into the membrane. In both cases this occurs co-translationally, with translation halted by the signal recognition particle until the ribosome is associated with the Sec complex.

If helices are inserted during the process of translation, does folding only begin after translation is finished? On what timescale do these folding processes occur? There is evidence that a hairpin of two transmembrane helices forms on a timescale of milliseconds in vitro. Are helices already interacting during translation to form components of the native structure? It has also been suggested that helices may insert into the membrane in pairs, via the Sec apparatus.

There are still many aspects of the insertion process which are not fully understood, and even the topology of an alpha-helical membrane protein can be affected by the last part of the protein to be translated. I am starting to investigate some of these questions by using computational tools to learn more about the membrane proteins whose structures have already been solved.

Next generation sequencing of paired heavy and light chain sequences

At the last meeting before Christmas I covered the article by DeKosky et al. describing a new methodology, developed by the authors, for sequencing the paired VH-VL repertoire.

In recent years there has been an exponential growth in the number of available antibody sequences, caused mainly by the development of cheap and high-throughput Next Generation Sequencing (NGS) technologies. This trend has led to the creation of several publicly available antibody sequence databases, such as the DIGIT database and the abYsis database, containing hundreds of thousands of unpaired light chain and heavy chain sequences from over 100 species. Nevertheless, sequencing of the paired VH-VL repertoire remained a challenge, with the available techniques suffering from low throughput (<700 cells) and high cost. In contrast, the method developed by DeKosky et al. allows for relatively cheap paired sequencing of most of the 10^6 B cells contained within a typical 10-ml blood draw.

The workflow is as follows. First, the isolated cells, dissolved in water, together with magnetic poly(dT) beads mixed with cell lysis buffer, are pushed through a narrow opening into a rapidly moving annular oil phase, resulting in a thin jet that coalesces into droplets, in such a way that each droplet has a very low chance of having a cell inside it. This ensures that the vast majority of droplets that do contain cells contain only one cell each. Next, cell lysis occurs within the droplets and the mRNA fragments coding for the antibody chains attach to the poly(dT) beads. Following that, the mRNA fragments are recovered and linkage PCR is used to generate 850 bp cDNA fragments for NGS.

To analyse the accuracy of their methodology, the authors sequenced paired CDR-H3 – CDR-L3 sequences from blood samples obtained from three different human donors, filtering the results by 96% clustering and read quality, and removing sequences with fewer than two reads. Overall, this resulted in ~200,000 paired CDR-H3 – CDR-L3 sequences. The authors found that the pairing accuracy of their methodology was ~98%.

The article also contained some bioinformatics analysis of the data. The authors first analysed CDR-L3 sequences that tend to pair up with many diverse CDR-H3 sequences, asking whether such “promiscuous” CDR-L3s are also “public”, i.e. promiscuous and common in all three donors. Their results show that of the 50 most common promiscuous CDR-L3s, 49 are also public. The results also show that the promiscuous CDR-L3s carry little to no modification, being very close to the germline sequence.

[Figure: illustration of the sequencing pipeline]

The sequencing data also contained examples of allelic inclusion, where one B cell expresses two B cell receptors (almost always one VH gene and two distinct VL genes). It was found that ~0.5% of all analysed B cells showed allelic inclusion.

Finally, the authors looked at the occurrence of traits commonly associated with broadly Neutralizing Antibodies (bNAbs), which are produced to fight rapidly mutating pathogens (such as the influenza virus). These traits are a short (<6 aa) CDR-L3 and a long (11-18 aa) CDR-H3. In total, the authors found 31 sequences with these features, suggesting that bNAbs can be found in the repertoire of healthy donors.
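As an illustration of how simple that trait filter is, here is a toy sketch (the field names and sequences are hypothetical, not from the paper):

# Keep only pairs with a short CDR-L3 (<6 aa) and a long CDR-H3 (11-18 aa).
pairs = [
    {"cdrh3": "ARDGGYYYDSSGYYYFDY", "cdrl3": "QQYNS"},
    {"cdrh3": "ARDFDY",             "cdrl3": "QQSYSTPLT"},
]
bnab_like = [p for p in pairs
             if len(p["cdrl3"]) < 6 and 11 <= len(p["cdrh3"]) <= 18]
print(len(bnab_like), "bNAb-like candidate(s)")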

Overall, this article presents a very interesting and promising method that should allow for large-scale sequencing of paired VH-VL sequences.

Racing along transcripts: Correlating ribosome profiling and protein structure.

A long long time ago, in a galaxy far away, I gave a presentation about the state of my research to the group (can you tell I’m excited for the new Star Wars!). Since then, little has changed due to my absenteeism from Oxford, which means ((un)luckily) the state of my work is by and large the same. My work focusses on the effect that the translation speed of a given mRNA sequence can have on the eventual protein product, specifically through the phenomenon of cotranslational folding. I’ve discussed the evidence behind this in prior posts (see here and here), though I find the video below a good reminder of why we can’t always just go as fast as we like.

So given that translation speed is important, how do we in fact measure it? Traditional measures, such as tAI and CAI, infer it from the codon bias within the genome or by comparing the counts of tRNA genes in a genome. However, while these have been shown to relate somewhat to speed, they are still solely theoretical in their construction. An alternative is ribosome profiling, which I’ve discussed in depth before (see here), and which provides an actual experimental measure of the time taken to translate each codon in an mRNA sequence. In my latest work, I have compiled ribosome profiling data from 7 different experiments, covering 6 diverse organisms, and processed them all in the same fashion from their respective raw data. Combined, the dataset gives ribosome profiling “speed” values for approximately 25 thousand genes across the various organisms.

[Figure: correlations between ribosome profiling data and traditional speed measures]

Our first task with this dataset was to see how well the traditional measures compare to the ribosome profiling data. For this, we calculated the correlation against CAI, MinMax, nTE and tAI, with the results presented in the figure above. We found that basically no measure adequately captures the entirety of the translation speed; some measures fail completely, others obviously capture some part of the behaviour, and some others even predict the reverse! Given that no measure captured the behaviour adequately, we realised that existing results relating translation speed to protein structure may, in fact, be wrong. Thus, we decided to recreate those analyses using our dataset to either validate or correct the original observations. To do this we combined our ribosome profiling dataset with matching PDB structures, such that we had the sequence, the structure, and the translation speed for approximately 4500 genes over the 6 species. While I won’t go into details here (see upcoming paper – touch wood), we analysed the relationship between the speed and the solvent accessibility, the secondary structure, and linker regions. We found striking differences from the observations in the literature, which I’ll be excited to share in the near future.
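For concreteness, the core of such a comparison looks something like this minimal sketch (the arrays are made-up stand-ins for the real per-codon data):

# Correlate experimental per-codon dwell times with a theoretical speed
# measure such as tAI; a measure that captures speed should correlate
# strongly (and negatively, since high tAI implies fast translation).
import numpy as np
from scipy.stats import spearmanr

profiling_time = np.array([1.2, 0.8, 2.1, 1.0, 3.4, 0.9])  # ribosome dwell
tai_score      = np.array([0.6, 0.7, 0.3, 0.5, 0.2, 0.8])  # per-codon tAI

rho, p = spearmanr(profiling_time, tai_score)
print("Spearman rho = %.2f (p = %.2g)" % (rho, p))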

Journal Club: Mechanical force releases nascent chain-mediated ribosome arrest in vitro and in vivo

For this week’s journal club, I presented the paper by Goldman et al., “Mechanical force releases nascent chain-mediated ribosome arrest in vitro and in vivo”. The reason for choosing this paper is that it discusses an influence on protein folding/creation/translation that is not considered in any of today’s modelling efforts, and I think it is massively important that every so often we, as a community, step back and appreciate the complexity of the system we attempt to understand. This work focuses on the SecM protein, which is known to regulate SecA (part of the translocon), which in turn regulates SecM. The biomechanical manner in which this regulation takes place is not fully understood. However, SecM contains within its sequence a peptide motif that binds so strongly to the ribosome tunnel wall that translation is stopped. It is hypothesised that SecA regulates SecM by applying a force to the nascent chain to pull it past this stalling point and, hence, allow translation to continue.

To begin their study, Goldman et al. wanted to confirm that one could advance past the stall point merely by the application of force, which they could do by attaching the nascent chain and the ribosome to optical tweezers and a micropipette, respectively. However, to confirm that the system was stalled before applying a (larger) force, they created a sequence which included CaM, a protein which periodically hops between a folded and an unfolded state when pulled at 7 pN, followed by the section of SecM which causes the stalling. The tweezers were able to sense the slight variations in length at 7 pN from the unfolding and refolding of CaM, though no continuing extension, which would indicate translation, was found. This indicated the system had truly stalled due to the SecM sequence. Once at this point, Goldman et al. increased the applied force, whereupon the distance between the pipette and the optical tweezers slowly increased until detachment when the stop codon was reached. As well as confirming that force on the nascent chain could make the SecM system proceed past the stalling point, they also noted a force dependence in the speed with which it would overcome this barrier.

[Figure: protein folding near the ribosome tunnel exit can rescue SecM-mediated stalling]

With this force dependence established, they pondered whether a domain folding upstream of the stall point could generate enough force to cause translation to continue. To investigate, Goldman et al. created a protein that contained Top7, followed by a linker of variable length, followed by the SecM stalling motif, which was in turn followed by GFP. As shown in the figure above, altering the length of the linker region defined the location of Top7 while it attempts to fold. A long linker allows Top7 to fold completely clear of the ribosome tunnel. A short linker means that it can’t fold, because many of its residues are still inside the ribosome tunnel. Between these extremes, however, the protein may have only a few residues within the tunnel, and by stretching the nascent chain it may access them and so be able to fold. In addition, Top7 was chosen specifically because it was known to fold even under light pressure. Hence, by Newton’s third law, as Top7 folds while its C-terminus is still under strain into the ribosome, it generates an equal and opposite force on the stalling peptide sequence within the heart of the ribosome tunnel, which should allow translation to proceed past the stall. Crucially, if Top7 folded too far away from the ribosome, this interaction would not occur and translation would not continue.

Goldman’s experiments showed that this is in fact the case; they found that only linkers of 15 to 22 amino acids would successfully complete translation. This confirms that a protein folding at the mouth of the ribosome tunnel can generate sizeable force (they calculate roughly 12 pN in this instance). Now, I find this whole system especially interesting, as I wonder how it may generalise to all translation, both in terms of interactions of the nascent chain with the tunnel wall and of domains folding at the ribosome tunnel mouth. Should I consider these when I calculate translation speeds, for example? Oh well, we need a reasonable model for translation that ignores these special cases before I really need to worry!

Short project: “Network Approach to Identifying the Mode of Action of Environmental Changes in Yeast”

[Figure: distributions of expression correlation for interacting vs non-interacting gene pairs]

I recently had the pleasure of working for 11 weeks with the wonderful people in OPIG. I studied protein interaction networks and how we might discern the parts of the network that are important for disease (and otherwise). In the past, people have looked at differential gene expression or used community detection to this end, but both of these approaches have drawbacks. The former misses the fact that biological systems are rarely just binary systems or interactions. Community detection addresses this, but it in turn does not take into account the dynamic nature of proteins in the cell – how do their interactions change over time? What about interactions or proteins that are only present in some cells? Community detection tries to look at all proteins and ignores important context like this.

My aim was to develop approaches that combined these elements. We used Pearson’s correlation coefficient on gene expression data and community detection on an interaction network. We showed that the distribution of the correlation of pairs of genes is weighted towards 1.0 for those that interact compared to those that do not, and for those in the same community compared to those that are not – see the figure above. We went on to assign a “score” to communities based on their correlation in each set of expression data. For example, one community might have a high score in expression data from cells undergoing amino acid starvation. We ended up with a list of communities which seemed to be important in certain environmental conditions. We made use of functional enrichment – drawing on the lovely Malte’s work – to try and verify these scores.
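The core of that comparison can be sketched in a few lines (toy data below; the real analysis used yeast expression datasets and a protein interaction network):

# Compare expression correlation for interacting vs non-interacting pairs.
import itertools
import numpy as np
import networkx as nx
from scipy.stats import pearsonr

expression = {g: np.random.rand(20) for g in "ABCDE"}  # 20 conditions each
G = nx.Graph([("A", "B"), ("B", "C"), ("D", "E")])     # toy interactome

interacting, non_interacting = [], []
for g1, g2 in itertools.combinations(expression, 2):
    r, _ = pearsonr(expression[g1], expression[g2])
    (interacting if G.has_edge(g1, g2) else non_interacting).append(r)

print("mean r (interacting):    ", np.mean(interacting))
print("mean r (non-interacting):", np.mean(non_interacting))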

I had a great time with some lovely people and produced something that I thought was very interesting. I really hope I see this work pop up again and get taken to interesting places! So long, and thanks for all the cookies!

Click here for some more pretty plots and a code repository (by request only).

Journal Club: Accessing Protein Conformational Ensembles using RT X-ray Crystallography

This week I presented a paper that investigates the differences between crystallographic datasets collected from crystals at RT (room temperature) and crystals at CT (cryogenic temperatures). Full paper here.

The cooling of protein crystals to cryogenic temperatures is widely used as a method of reducing radiation damage and enabling collection of whole datasets from a single crystal. In fact, this approach has been so successful that approximately 95% of structures in the PDB have been collected at CT.

However, the main assumption of cryo-cooling is that the “freezing”/cooling process happens quickly enough that it does not disturb the conformational distributions of the protein, and that the RT ensemble is “trapped” when cooled to CT.

Although it is well established that cryo-cooling of the crystal does not distort the overall structure or fold of the protein, this paper investigates some of the more subtle changes that cryo-cooling can introduce, such as the distortion of sidechain conformations or the quenching of dynamic CONTACT networks. These features of proteins could be important for understanding phenomena such as binding or allosteric modulation, and so accurate information about the protein is essential. If this information is regularly lost in the cryo-cooling process, it could be a strong argument for a return to collection at RT where feasible.

By using the RINGER method, the authors find that sidechain conformations are commonly affected by the cryo-cooling process: the conformers present at CT are sometimes completely different to the conformers observed at RT. In total, they find that cryo-cooling affects a significant number of residues (predominantly those on the surface of the protein, but also some that are buried): 18.9% of residues have rotamer distributions that change between RT and CT, and 37.7% of residues have a conformer that changes occupancy by 20% or more.

Overall, the authors conclude that, where possible, datasets should be collected at RT, as the derived models offer a more realistic description of the biologically-relevant conformational ensemble of the protein.