Category Archives: Talks

A short account of the talks given by the OPIG group members and their highly esteemed guests.

ISMB 2018: Collaborative Structural Biology using Machine Learning and Jupyter Notebook

This post is a summary of the talk, Collaborative Structural Biology using Machine Learning and Jupyter Notebook, given by Fergus Imrie and Fergus Boyles at ISMB 2018. Materials for the experiments can be found here and here.

Four other members of the Oxford Protein Informatics Group (a.k.a. OPIGlets) and I recently had the pleasure of attending the Intelligent Systems for Molecular Biology (ISMB) conference in Chicago. Organised by the International Society for Computational Biology (ISCB), ISMB is the largest computational biology conference in the world, with several thousand attendees.

Spread over four action-packed days in July (not including workshops/tutorial sessions), it was an eye-opening experience, showcasing the depth and breadth of computational biology research; particularly striking was the range of problems tackled, techniques applied, and data sources used.

I was fortunate enough to have the opportunity to present alongside my colleague, Fergus Boyles, as part of the 3DSIG Community of Special Interest (COSI). We led the first hands-on practical demonstration at 3DSIG, entitled “Collaborative Structural Biology using Machine Learning and Jupyter Notebook”. This was a new format for the conference, and our presentation was somewhat of an experiment, but I understand the organising committee is keen to repeat it next year.

In what follows, I’ll briefly outline the key themes and outcomes from our presentation. Full materials to reproduce all of the results presented can be found here and here.

Reproducibility crisis?

In a survey of 1,500 scientists by Nature in 2016 (link), more than 70% of participants had tried and failed to reproduce another scientist’s experiments, while 90% said there was a reproducibility crisis to some extent. Most striking, perhaps, was the revelation that “more than half have failed to reproduce their own experiments”!

Nature, 2016, M. Baker, 1,500 scientists lift the lid on reproducibility

While the focus of the survey was, admittedly, on traditional, lab-based, experimental research, this is certainly also an issue in computational approaches, with the machine learning community under the heaviest scrutiny.

This is clearly unsustainable and many efforts are being taken to address this across the scientific world. As one example, Nature has introduced a code and submission checklist that requires authors to submit custom algorithms or software that are central to the paper for peer review and editorial assessment. While only directly affecting a small portion of research, this is a big step in the right direction and I think we’re only going to see more of this in the future.

Software to the rescue?

With the rise of cloud computing, the open-source community, and much more, there is a plethora of software available that can be used to improve the accessibility of methods and the reproducibility of computational experiments. Below, I touch on a couple of general areas that are increasingly used in computational pipelines and setups.

  • Cloud computing (such as Amazon Web Services, Google Cloud, and Microsoft Azure) provides widely accessible, standardised compute environments, and allows the use of anything from a single core to near-HPC-level resources for a short period of time at relatively low cost.
  • Container solutions (such as Docker and Kubernetes) allow developers to package an application, with all required libraries and dependencies, into a single image that the end user can run with no further dependencies (see the sketch below).
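
We didn’t use containers in the talk itself, but as a hedged sketch (the image name and its command-line flags below are hypothetical), running a containerised analysis tool looks something like this:

# Pull a pre-built image that bundles the tool and all of its dependencies (hypothetical image name)
docker pull example/protein-analysis:latest

# Run it, mounting the current directory into the container so it can read and write your data
docker run --rm -v "$(pwd)":/data example/protein-analysis:latest --input /data/pairs.csv --output /data/predictions.csv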

Our approach

We didn’t use any of the above tools for the purposes of our talk, but instead constructed our pipeline from three other widely-used solutions: Conda, Project Jupyter, and Git/GitHub. For those unfamiliar, here is a brief overview of each, followed by a short sketch of how they fit together.

  • Conda is an open-source package and environment management system. It works by creating distinct virtual environments and installing a standalone interpreter or compiler within each one. You can then install additional packages into that environment, completely isolated from your system’s default packages and from other virtual environments.

  • For those of you who are familiar with the IPython notebook, Jupyter is an extension of this format to multiple languages. Jupyter provides an interactive, browser-based coding environment in the form of a notebook, which can be thought of as a lightweight IDE. The power of Jupyter notebooks comes from a combination of (1) the ability to intersperse code with markdown, which is far more readable and easier on the eye than traditional comments; (2) the cell-based format, in which small pieces of code are contained in cells that can be run, and re-run, individually without re-running the remainder of your code; and (3) the ability to display figures and tables (among other things) inline, rendered in HTML.

  • Git is an open-source version control system. Version control is an essential bedrock of good programming that we don’t have time to cover in detail here; long story short, Git takes much of the headache out of version control.

  • GitHub is a code-hosting platform built for collaboration, with Git at its core. Beyond a simple code repository, GitHub enables collaboration and development through two key features. “Forking” allows you to clone other projects, and either develop them yourself or keep a record of a fixed version for integration within another project. “Pull requests” make large-scale community collaboration possible, with users proposing specific modifications to the original project, which the owners/admins of the original project can choose to merge or reject.
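
As a rough sketch of how these pieces fit together (the repository URL, environment name and package list below are placeholders rather than the ones we actually used):

# Grab the materials with Git (placeholder URL)
git clone https://github.com/example/ismb-3dsig-demo.git
cd ismb-3dsig-demo

# Create and activate an isolated Conda environment containing everything the notebooks need
conda create --name ismb-demo python=3.6 jupyter scikit-learn pandas matplotlib
conda activate ismb-demo

# Launch the notebook server and work through the notebooks in your browser
jupyter notebook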

Experiments

As a toy problem to showcase this approach to building a reproducible pipeline, we address the problem of protein classification according to the SCOP classification scheme. While the dataset we have shared contains examples of protein pairs that are in the same fold, superfamily, and family (as well as none of these), we focussed on the most straightforward task of determining whether a pair of proteins belong to the same family or not.

Our dataset is based on the Astral data set (06.02.2016 build), and consists of 8 pairwise features computed from the sequences of the two proteins. We won’t go into the details of the exact features here.

Using a simple random forest on these 8 pairwise features between the target and template protein, we achieved an accuracy of 88.0% and an area under the receiver operating characteristic (ROC) curve of 0.95. A confusion matrix and ROC curve summarising our results can be found below.

Instructions to reproduce these results, together with all materials needed, can be found here and here.

Conclusions

Reproducibility in science is facing a challenging time. All stakeholders, from researchers to funders and publishers, are placing more emphasis on work being reproducible, and are taking measures to ensure this. In computational research, particularly with stochastic algorithms such as those prevalent throughout machine learning, the problem is no less serious, yet on the face of it should be readily solvable.

In our demonstration, we have illustrated one approach to tackling this in a simple, efficient way. In addition, we only looked to tackle one possible problem or question, and only used a subset of the overall dataset. Please feel free to explore the dataset and pose your own questions. We’d love to hear from you if you do!

Acknowledgements

I’d like to thank all of OPIG for providing feedback on an early version of the talk. Crucially, I’d like to thank Dr Saulo de Oliveira, who provided us with the dataset used in our exploratory analysis. Finally, I’d like to thank my co-presenter Fergus Boyles, without whom I couldn’t have done this.

ISMB 2018 (Chicago): Summary of Interesting Talks/Posters

Catherine’s Selection

Network approach integrates 3D structural and sequence data to improve protein structural comparison

Why: Current graph mapping in protein structural comparison ignores sequence order of residues. Residues distant in sequence but close in 3D space are more important.
How: Introduce sequence order of residues, set a sequence-distance cutoff to consider structurally important residues, count the graphlet frequency and embed into PCA space.
Results: the new method is predictive of SCOP and CATH ‘groups’. Certain graphlets are enriched in alpha and beta folds.
Link: https://www.nature.com/articles/s41598-017-14411-y

Investigating the molecular determinants of Ebola virus pathogenicity

Why: Reston virus is the only Ebola virus that is not pathogenic to humans.
What they do: multiple sequence alignment to look for specificity determining positions (SDPs) using s3det, then predict the effect of each individual SDP on the stability of the protein with mCSM.
Results: VP40 SDPs alter octamer formation and the structure of the hydrophobic core. VP24 SDPs impair binding to human KPNA5, which inhibits interferon signalling.
Impact: only a few SDPs distinguish Reston VP24 from the VP24 of the other ebolaviruses. Human-pathogenic Reston viruses may emerge.
Link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5558184/#__ffn_sectitle

Computational Analysis Highlights Key Molecular Interactions and Conformational Flexibility of a New Epitope on the Malaria Circumsporozoite Protein and Paves the Way for Vaccine Design

Why: An antibody with a strong binding affinity was found in a group of subjects. This antibody prevents cleavage of the surface protein.
What they do: They identified the linear epitope, crystallised the strong and medium binders, and ran molecular dynamics simulations to assess the flexibility of the structures.
Results: The strong binder is less flexible. Moreover, the strong binder is similar to the germline sequence which may mean that this antibody could have been readily formed.
Link: https://www.nature.com/articles/nm.4512



Matt’s Selection

“Analysis of sequence and structure data to understand nanobody architectures and antigen interactions”
Laura S. Mitchell (Colwell Group)
University of Cambridge, UK

This poster detailed the work from Laura’s two most recent publications, which can be found here: https://doi.org/10.1002/prot.25497, https://doi.org/10.1093/protein/gzy017

They describe a comprehensive analysis of the binding properties of the 156 non-redundant nanobody-antigen (Nb-Ag) complexes in the PDB/SAbDab (October 2017). Their analyses include Nb sequence variability (both global and across the binding regions), contact maps of nanobody-antigen interactions by region, and the typical chemical properties of each paratope. Nb-Ag complexes are compared to a reference set of monoclonal antibody-antigen (mAb-Ag) complexes. This work is a key first step in advancing our understanding of Nb paratopes, and will aid the development of new diagnostics and therapeutics.

“OSPREY 3.0: Open-Source Protein Redesign for You, with Powerful New Features”
Jeffrey W. Martin (Donald Group)
Duke University, USA

OSPREY 3.0 (https://www.biorxiv.org/content/early/2018/04/23/306324) represents a large advance towards time-efficient continuous flexibility modelling of protein-protein interfaces.

Its new algorithms, LUTE and BBK*, allow continuous rotamer flexibility searching and entropy-aware binding constant approximation in a much more efficient manner. The CATS algorithm also introduces local backbone flexibility as a long-awaited feature. The software now has an easy-to-use Python interface and is fully open-source, making it an extremely attractive alternative to proprietary protein design tools.

“Functional annotation of chemical libraries across diverse biological processes”
Scott Simpkins
University of Minnesota-Twin Cities, USA

This interesting talk detailed the work published in Nature Chemical Biology in September 2017 (https://doi.org/10.1038/nchembio.2436).

310 yeast gene-deletion mutants were isolated to perform chemical-genetic profiling across six diverse small-molecule high-throughput screening libraries. By studying which gene-deletion mutants were hypersensitive or resistant to each compound, the researchers could assign most members of each chemical library a probable functional annotation. Mapping back to gene-interaction profile data also allowed them to infer likely targets for some compounds. The GO annotations associated with these genes could then be used to assess whether a given starting library is likely to contain promising starting points that affect a given biological function. For example, the authors highlighted a deficiency across all libraries against the cellular processes of cytokinesis and ribosome biogenesis. Conversely, they found a large enrichment across all libraries for compounds likely to affect glycosylation or cell wall biogenesis. Compounds that target transcription and chromatin organisation were found to be enriched in certain datasets and depleted in others. This genre of profiling provides researchers with a way of judging a priori whether a given screening library is likely to contain promising lead compounds, given the functional role of the target of interest.

Storing your stuff with clever filesystems: ZFS and tmpfs

The filesystem is a critical component of just about any operating system; however, it’s often overlooked. When setting up a new server, the default filesystem options are often ticked and never thought about again. However, there are a couple of filesystems which can provide some extraordinary features and speed. I’m talking about ZFS and tmpfs.

ZFS was originally developed by Sun Microsystems for the Solaris operating system, but has since been open-sourced and is freely available on Linux. Tmpfs is a temporary filesystem which uses system memory to provide fast temporary storage for files. Together, they can provide outstanding reliability and speed for not very much effort.

Hard disk capacity has increased exponentially over the last 50 years. In the 1960s, you could rent a 5MB hard disk from IBM for the equivalent of $130,000 per month. Today you can buy a 12TB disk for less than $600 – a 2,400,000-fold increase in capacity.

As storage technology has moved on, the filesystems which sit on top of it ideally need to be able to access the full capacity of those ever-increasing disks. Many relatively new, or at least still in-use, filesystems have serious limitations. Akin to “640K ought to be enough for anybody”, the likes of the FAT32 filesystem support files which are at most 4GB, on a chunk of disk (a partition) which can be at most 16TB. Bear in mind that arrays of disks can provide a working capacity of many times that of a single disk – you can buy the likes of a Supermicro SC946ED shelf, which will add 90 disks to your server. In an ideal world, as you buy bigger disks you should be able to pile them into your computer and tell your existing filesystem to make use of them; your filesystem should simply grow, and you shouldn’t have to remember a different drive letter or path depending on the hardware you’re using.

ZFS is a 128-bit filesystem, which means a single installation maxes out at 256 quadrillion zettabytes. All metadata is allocated dynamically, so there is no need to pre-allocate inodes, and directories can have up to 2^48 (256 trillion) entries. ZFS provides the concept of “vdevs” (virtual devices), which can be a single disk or a redundant/striped collection of multiple disks. These can be dynamically added to a pool of vdevs of the same type, and your storage will grow onto the fresh hardware (see the example below).
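
As a hedged example (the pool name and device names are made up for illustration), creating a pool from a mirrored pair of disks and later growing it with a second vdev looks like this:

# Create a pool called "tank" from a mirrored pair of disks
zpool create tank mirror /dev/sdb /dev/sdc

# Later, grow the pool by adding a second mirrored vdev; the extra capacity appears immediately
zpool add tank mirror /dev/sdd /dev/sde

# Create a filesystem within the pool; it is mounted at /tank/data with no drive letters or partitioning to manage
zfs create tank/data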

A further consideration is that both disks of the “spinning rust” variety and SSDs are subject to silent data corruption, i.e. “bit rot”. This can be caused by a number of factors, even including cosmic rays, and the consequence is read errors when it comes time to retrieve your data. Manufacturers are aware of this, and buried in the small print for your hard disk will be values for “unrecoverable read errors”, i.e. data loss. ZFS works around this by providing several mechanisms:

  • Checksums for each block of data written.
  • Checksums for each pointer to data.
  • Scrub – walks the entire pool and validates every checksum, typically scheduled to run periodically or when the system is otherwise idle.
  • Multiple copies – even if you only have a single disk, it’s possible to provide redundancy by setting copies=n when the filesystem is created (see the sketch after this list).
  • Self-healing – when a bad data block is detected, ZFS fetches the correct data from a redundant copy and repairs the bad block.
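
For illustration (using the made-up pool and filesystem names from above), requesting extra copies and starting a scrub look like this:

# Keep two copies of every block in this filesystem, even on a single-disk pool
zfs set copies=2 tank/data

# Walk the whole pool, validating every checksum and repairing anything recoverable from redundancy
zpool scrub tank

# Check scrub progress and any errors that were found
zpool status tank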

An additional bonus of ZFS is its ability to de-duplicate data. Should you be working with a number of very similar files, on a normal filesystem each file will take up space proportional to the amount of data it contains. As ZFS keeps checksums of each block of data, it is able to determine when two blocks contain identical data, so it can keep multiple references to a single stored copy and only store the blocks that differ.

ZFS also provides the ability to take a point-in-time snapshot of the entire filesystem and roll it back to a previous state. Are you a software developer with a package that has 101 dependencies which needs upgrading? Afraid to upgrade it in case it breaks things horribly? Working on code and want to be able to roll back to a previous version? ZFS snapshots can be taken with cron or manually, and provide a frozen version of the filesystem which can be used to extract previous versions of overwritten or deleted files, or to roll everything back to a point in time when it worked.

Similar to deduplication, a snapshot won’t take up any extra disk space until the data starts to change.
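
A minimal sketch of the snapshot workflow, again with made-up names:

# Take a point-in-time snapshot before a risky upgrade
zfs snapshot tank/data@before-upgrade

# List snapshots; a fresh snapshot consumes essentially no space
zfs list -t snapshot

# Pull an individual file back out of the hidden .zfs directory...
cp /tank/data/.zfs/snapshot/before-upgrade/notes.txt /tank/data/

# ...or roll the whole filesystem back to the snapshot
zfs rollback tank/data@before-upgrade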

The other filesystem worth mentioning is tmpfs. Tmpfs takes part of the system memory and turns it into a usable filesystem. This is incredibly useful for systems which create huge numbers of temporary files and then attempt to re-read them, and it is just about as fast as a filesystem can be. Compared to a single SSD or a RAID array of six disks, tmpfs blows them out of the water speed-wise.

Creating a tmpfs filesystem is simple:
First create your mountpoint for the disk:

mkdir /mnt/ramdisk

Then mount it. The options say: make it 1GB in size, it’s of type tmpfs, and mount it at the previously created mount point:

mount -t tmpfs -o size=1024m tmpfs /mnt/ramdisk

At this point, you can use it like any other filesystem:

df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1  218G 128G   80G  62% /
/dev/sdb1  6.3T 2.4T  3.6T  40% /spinnyrust
tank       946G 3.5G  942G   1% /tank
tmpfs      1.0G 391M  634M  39% /mnt/ramdisk
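
Bear in mind that a tmpfs mount (and everything in it) disappears on reboot. If you want it recreated automatically at boot, a line along the following lines in /etc/fstab should do it (a sketch – adjust the size and mount point to taste):

# device  mountpoint    type   options      dump  pass
tmpfs     /mnt/ramdisk  tmpfs  size=1024m   0     0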

Slowing the progress of prion diseases

At present, the jury is still out on how prion diseases affect the body, let alone how to cure them. We don’t know if amyloid plaques cause neurodegeneration or if they’re the result of it. Due to highly variable glycophosphatidylinositol (GPI) anchors, we don’t know the structure of prions. Due to their incredible resistance to proteolysis, we don’t know a simple way to destroy prions, even using an autoclave. The current recommendation[0] by the World Health Organisation includes the not-so-subtle: “Immerse in a pan containing 1N sodium hydroxide and heat in a gravity displacement autoclave at 121°C”.

There are several species, including water buffalo, horses and dogs, which are immune to prion diseases. Until relatively recently it was thought that rabbits were immune too: “Despite rabbits no longer being able to be classified as resistant to TSEs, an outbreak of ‘mad rabbit disease’ is unlikely”.[1] That being said, other than the addition of some salt bridges and extra H-bonds, we don’t know whether that is why some animals are immune.

We do know that at least two species of lichen (P. sulcata and L. pulmonaria) have not only discovered a way to naturally break down prions, but have evolved two completely independent pathways to do so. How do they accomplish this? We’re still not sure; in fact, it was only last year that it was discovered that lichens may be composed of three symbiotic partners and not two, as previously thought.[3]

With all this uncertainty, one thing is known: PrPSc, the pathogenic form of the prion protein, converts PrPC, the cellular form. Simply preventing the production of PrPC may not be a good idea, mainly because we don’t know what it’s there for in the first place. Previous studies using PrP-knockout models have hinted that:

  • Hematopoietic stem cells express PrP on their cell membrane. PrP-null stem cells exhibit increased sensitivity to cell depletion. [4]
  • In mice, cleavage of PrP proteins in peripheral nerves causes the activation of myelin repair in Schwann cells. Lack of PrP proteins caused demyelination in those cells. [5]
  • Mice lacking genes for PrP show altered long-term potentiation in the hippocampus. [6]
  • Prions have been indicated to play an important role in cell-cell adhesion and intracellular signalling.[7]

However, an alternative approach bypasses most of the unknowns above: if it were possible to make off with the substrate which PrPSc uses, the progress of the disease might be slowed. A study by R. Diaz-Espinoza et al. showed that by infecting animals with a self-replicating but non-pathogenic prion it was possible to slow the fatal 263K scrapie agent. From their paper [8]: “results show that a prophylactic inoculation of prion-infected animals with an anti-prion delays the onset of the disease and in some animals completely prevents the development of clinical symptoms and brain damage.”

[0] https://www.cdc.gov/prions/cjd/infection-control.html
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3323982/
[2] https://blogs.scientificamerican.com/artful-amoeba/httpblogsscientificamericancomartful-amoeba20110725lichens-vs-the-almighty-prion/
[3] http://science.sciencemag.org/content/353/6298/488
[4] “Prion protein is expressed on long-term repopulating hematopoietic stem cells and is important for their self-renewal”. PNAS. 103 (7): 2184–9. doi:10.1073/pnas.0510577103
[5] Abbott A (2010-01-24). “Healthy prions protect nerves”. Nature. doi:10.1038/news.2010.29
[6] Maglio LE, Perez MF, Martins VR, Brentani RR, Ramirez OA (Nov 2004). “Hippocampal synaptic plasticity in mice devoid of cellular prion protein”. Brain Research. Molecular Brain Research. 131 (1-2): 58–64. doi:10.1016/j.molbrainres.2004.08.004
[7] Málaga-Trillo E, Solis GP, et al. (Mar 2009). Weissmann C, ed. “Regulation of embryonic cell adhesion by the prion protein”. PLoS Biology. 7 (3): e55. doi:10.1371/journal.pbio.1000055
[8] http://www.nature.com/mp/journal/vaop/ncurrent/full/mp201784a.html

Strachey Lecture – “Artificial Intelligence and the Future” by Dr. Demis Hassabis

For this week’s group meeting, some of us had the pleasure of attending a very interesting lecture by Dr. Demis Hassabis, founder of DeepMind. Personally, I found the lecture quite thought-provoking and left the venue with a plethora of ideas sizzling in my brain. Since one of the best ways to end mental sizzlingness is by writing things down, I volunteered to write this week’s blog post in order to say my piece about yesterday’s Strachey Lecture.

Dr. Hassabis began by listing some very audacious goals: “To solve intelligence” and “To use it to make a better world”. At the end of his talk, someone in the audience asked him if he thought it was possible to achieve these goals (“to fully replicate the brain”), to which he responded with a simple there is nothing that tells us that we can’t.

After his bold introductory statement, Dr. Hassabis pressed on. For the first part of his lecture, he engaged the audience with videos and concepts of a reinforcement learning agent trained to learn and play several Atari games. I was particularly impressed with the notion that the same agent could be used to achieve a professional level of gaming across 49 different games. Some of the videos are quite impressive and can be seen here or here. Suffice to say that their algorithm is much better at playing Atari than I’ll ever be. It was also rather impressive to know that all the algorithm received as input was the game’s score and the pixels on the screen.

Dr. Hassabis mentioned in his lecture that games provide the ideal training ground for any form of AI. He presented several reasons for this, but the one that stuck with me was the notion that games quite often present a very simplistic and clear score. Your goal in a game is usually very well defined. You help the frog cross the road or you defeat some aliens for points. However, what I perceive to be the greatest challenge for AI is the fact that real world problems do not come with such a clear-cut, incremental score.

For instance, let us relate back to my particular scientific question: protein structure prediction. It has been suggested that much simpler algorithms such as Simulated Annealing are able to model protein structures as long as we have a perfect scoring system [Yang and Zhou, 2015]. The issue is, currently, the only way we have to define a perfect score is to use the very structure we are trying to predict (which kinda takes the whole prediction part out of the story).

Real-world problems are hard. I am sure this is no news to anyone, including the scientists at DeepMind.

During the second part of his talk, Dr. Hassabis focused on AlphaGo. AlphaGo is DeepMind’s effort at mastering the ancient game of Go. What appealed to me in this part of the talk is the fact that Go has such a large number of possible configurations that devising an incremental score is no simple task (sounds familiar?). Yet, somehow, DeepMind scientists were able to train their algorithm to the point where it defeated a professional Go player.

Their next challenge? In two weeks, AlphaGo will face the professional Go player with the highest number of titles in the last decade (the best player in the world?). This reminds me of when Garry Kasparov faced Deep Blue. After the talk, my fellow OPIG colleagues also seemed to be pretty excited about the outcome of the match (man vs. food computer).

Dr. Hassabis finished by saying that his career goal would be to develop AI that is capable of helping scientists tackle the big problems. From what I gather (and from my extremely biased point of view; protein structure prediction mindset), AI will only be able to achieve this goal once it is capable of coming up with its own scores for the games we present it to play with (hence developing some form of impetus). Regardless of how far we are from achieving this, at least we have a reason to cheer for AlphaGo in a couple of weeks (because hey, if you are trying to make our lives easier with clever AI, I am all up for it).

A program to aid primary protein structure determination – 1962 style.

This year, OPIG have been running a series of weekly lectures on papers we consider to be seminal in the field of protein informatics. I initially started looking at “Comprotein: A computer program to aid primary protein structure determination” as it was one of the earliest (1960s) papers discussing a computational method of discovering the primary structure of proteins. Many bioinformaticians use these well-formed, tidy, sterile arrays of amino acids as the input to their work, for example:

MGLSDGEWQL VLNVWGKVEA DIPGHGQEVL IRLFKGHPET LEKFDKFKHL KSEDEMKASE DLKKHGATVL TALGGILKKK GHHEAEIKPL AQSHATKHKI PVKYLEFISE CIIQVLQSKH PGDFGADAQG AMNKALELFR KDMASNYKEL GFQG
(For those of you playing at home, that’s myoglobin.)

As the OPIG crew come from diverse backgrounds and frequently ask questions well beyond my area of expertise, if for nothing other than posterior-covering, I needed to do some background reading. Though I’m not a researcher by trade any more, I began to realise that, despite the lectures/classes/papers/seminars I’d been exposed to regarding all the clever things you can do with a sequence once you have it, I didn’t know how you would actually go from a bunch of cells expressing the protein you were interested in (amongst a myriad of other molecules) to the neat array of characters shown above. So without further ado:

The first stage in obtaining your protein is cell lysis, and there’s not much in it for the cell.
Mangle your cells using chemicals, enzymes, sonication or a French press (not your coffee one).

The second stage is producing a crude extract by centrifuging the above cell-mangle. This, terrifyingly, appears to be done between 10,000G and 100,000G and removes the cellular debris leaving it as a pellet in the bottom of the container, with the supernatant containing little but a mix of the proteins which were present in the cytoplasm along with some additional macromolecules.

Stage three is to purify the crude extract. Depending on the properties of the protein you’re interested in, one or more of the following stages are required:

  • Reverse-phase chromatography to separate based on hydrophobicity
  • Ion-exchange to separate based on the charge of the proteins
  • Gel-filtration to separate based on the size of the proteins

If all of the above are performed, whilst the sequence of these variously charged/size-sorted/polar proteins will still be unknown, they will now be sorted into various fractions based upon their properties. This is where the third stage departs from science and lands squarely in the realm of art. The detergents/protocols/chemicals/enzymes/temperatures/pressures of the above techniques all differ depending on the hydrophobicity/charge/animal source of the type of protein one is aiming to extract.

Since at this point we still don’t know their sequence, working out the concentrations of the various constituent amino acids will be useful. One of the simplest methods of determining the amino acid composition of a protein is to follow a procedure similar to:

Heat the sample in 6M HCl at a temperature of 110°C for 18-24h (or more) to fully hydrolyse all the peptide bonds. This may require an extended period (over 72h) to hydrolyse peptide bonds which are known to be more stable, such as those involving valine, isoleucine and leucine. This, however, can degrade Ser/Thr/Tyr/Trp/Gln and Cys, which will subsequently skew the results. An alternative is to raise the pressure in the vessel to allow temperatures of 145-155°C for 20-240 minutes.

TL;DR: Take the glassware that’s been lying about your lab since before you were born, put 6M hydrochloric acid in it and bring it to the boil. Take one laboriously refined and still totally unknown protein and put it in your boiling hydrochloric acid. Seal the above glassware in order to use it as a pressure vessel. Retreat swiftly whilst the apparatus builds up the appropriate pressure and cleaves the protein as required. What could go wrong?

At this point I wondered if the almost exponential growth in PDB entries was due to humanity’s herd of biochemists now having been thinned to those which remained simply being several generations worth of lucky.

Once you have an idea of how many of each type of amino acid comprise your protein, we can potentially rebuild it. However, at this point it’s as if we’ve got a jigsaw puzzle where, though we’ve got all the pieces and each piece can only be one of a limited selection of colours (thus making it a combinatorial problem), we’ve no idea what the pattern on the box should be. To further complicate matters, since this isn’t being done on just a single copy of the protein at a time, it’s as if someone has put multiple copies of the same jigsaw into the box.

Once we have all the pieces, to determine the actual sequence a second technique needs to be used. Though invented in 1950, Edman degradation appears not to have been a particularly widespread protocol, or at least it wasn’t at the National Biomedical Research Foundation from which the above paper emerged. This means of degradation tags the N-terminal amino acid and cleaves it from the rest of the protein. The released residue can then be identified and the protocol repeated. Whilst this would otherwise be ideal, it suffers from a few issues: it takes about an hour per cycle, only works reliably on sequences of about 30 amino acids, and doesn’t work at all for proteins which have their N-terminus bonded or buried.

Instead, the refined protein is cleaved into a number of fragments at known points using a single enzyme. For example, Trypsin will cleave on the carboxyl side of arginine and lysine residues. A second copy of the protein is then cleaved using a different enzyme at a different point. These individual fragments are then sorted as above and their individual (non-sequential) components determined.

For example, if we have a protein which has an initial sequence ABCDE
Which then gets cleaved by two different enzymes to give:
Enzyme 1 : (A, B, C) and (D, E)
Enzyme 2 : (A, B) and (C, D)

We can see that the (C, D) fragment produced by Enzyme 2 overlaps with the (A, B, C) and (D, E) fragments produced by Enzyme 1. However, as we don’t know the order in which the amino acids appear within each fragment, there are a number of different sequences which are consistent with these data:

Possibility 1 : A B C D E
Possibility 2 : B A C D E
Possibility 3 : E D C A B
Possibility 4 : E D C B A

At this point the paper comments that such a result highlights to the biochemist that the molecule requires further work for refinement. Sadly the above example whilst relatively simple doesn’t include the whole host of other issues which plague the biochemist in their search for an exact sequence.

Introduction to the protein folding problem

Recently (read: this week), I had to give a presentation on my research to my college. We were informed that the audience would be non-specialist, which in fact turned out to be an understatement. For example, my presentation followed on from a discussion of the differences in the education systems of North and South Korea for the period 1949-1960. Luckily, I had tailored my entire talk to be understandable by all and devoid of all jargon. I chose to deviate from the prescribed topic and, instead of talking about my research specifically, to discuss the protein folding problem in general. Below you’ll find the script I wrote, which I feel gives a good introduction to the core problem of this field.

———————-

The protein folding problem is one of the great projects within the life sciences. It has been studied by vast numbers of great scientists over the last half century, with backgrounds including chemistry, physics, maths and biology, and all have been beaten by the sheer complexity of the problem. As a community we have still only scraped the surface with regards to solving it. While I could, like many of you here, go into my own research in great detail and bore you wholeheartedly for the next 10 minutes with technical details and cryptic terminology, I will instead try to give an overview of the problem and why thousands of scientists around the globe are still working on cracking it.

First of all, I guess that a few of you are trying to remind yourself what a protein is; the horror that is high school biology crawling back from that area in your brain you keep for traumatic experiences like family gatherings. Luckily, I’m fairly new to the topic myself, my background being in physics and chemistry, so hopefully my explanation will be still in the naive terms that I use to explain the core concepts to myself. Proteins are the micro-machines of your body, the cogs that keep the wheels turning, the screws that hold the pieces together and the pieces themselves. Proteins run nearly all aspects of your body and biochemistry, your immune system, your digestion and your heart beating. There are approximately between 20 to 30 thousand different proteins in your body, depending on who you ask, and trillions overall. In fact, if we take every protein in our body and scale it up to the size of a penny, the proteins in a single human, albeit a rather dead human, would be enough to fill the entire pacific ocean. Basically, there is a hell of a lot of proteins, with a vast range of different types, each of which is very individual, both in its compositions and function, and, crucially, they are nearly all essential. The loss of any protein can lead to dramatic consequences including heart disease, cancer, and even death.

So now that you know that they are important and there are lots of them, what exactly is a protein? The easiest analogy I have is that of a pearl necklace, a long string of beads in a chain. Now consider your significant other has gone slightly insane and instead of purchasing jewelry for you that consists of a single bead type, or even two if you have slightly exotic tastes, they have been shopping at one of the jewellery stores found in the part of town that smells rather “herby”. You receive a necklace which has different beads across the entire length of the necklace. We have blues, yellows, pinks, and so on and so forth. In fact, we have 20 different types of beads, each with its own colour. This is basically a protein chain: each of the beads represents one of the twenty essential amino acids, each of which has its own chemical and physical properties. Now suppose you can string those pearls together in any order: red, green, blue, blue, pink etc. It turns out that the specific order that these beads are arranged along the length of the protein chain define exactly how this chain “crumples” into a 3D shape. If you think that adding an extra dimension is impossible, just consider crumpling a piece of paper; that is 2D –> 3D transition (mathematicians please bite your tongue). Now one string of colours, blue, blue, pink for example, will crunch into one shape, and that shape may become your muscle, while a different sequence, say, green, blue, orange, will crumple down into something different, for example an antibody to patrol your blood stream.

So essentially we have this “genetic code”, the sequence of amino acids (or beads), which in turn defines the shape that the protein will take. We in fact know that it is this shape that is the most important aspect of any protein, as it has been found to define the protein’s actual function. This is because, returning to the bead analogy, we can change up to 80% of the beads to a different colour while still retaining the same shape and function. This is amazing when you consider how many other objects can have their baseline composition changed to the same extent while still retaining the same function. The humble sausage is one of those objects (actually, below 40% meat content they are referred to as “bangers”), but even then, would you want 80% of your sausage to be filler? There is a reason Tesco value sausages taste so different to the nice ones you buy at the butchers. Returning to proteins, we are not trying to say that the sequence isn’t important; sometimes changing just a single bead can lead to a completely different shape. Rather, the shape is the critical aspect which defines the function. To summarise: sequence leads to shape, which in turn leads to function.

This is unfortunate because, while it is getting increasingly simple to experimentally determine the sequence of a protein, that is, the exact order of coloured beads, the cost and time of getting the corresponding structure (shape) is still extremely prohibitive. In fact, we can look at two of the major respective databases, the PDB, which contains all known protein structures, and GenBank, which contains all known protein sequences, and compare the respective number of entries. The disparity between the two is huge, we are talking orders of magnitude huge, 10^15 huge, i.e. the number of humans on the planet squared huge. AND this gap is growing larger every year. Basically, people in the last few years have suddenly gained access to cheap and fast tools to get a protein’s sequence, to the extent that people are widely taking scoops of water across the world and sequencing everything, not even bothering to separate the cells and microorganisms beforehand. Nothing analogous exists to get the structure of a protein. The process takes months to years, depending on many factors, each of which may be “something” for one protein and then a completely different “something” for a similar protein. This has led to a scenario where we know the sequence of every protein in the human genome, yet we know the structure of only about 10% of them. This is utterly preposterous in my opinion given how important this information is to us! We basically don’t know what 90% of our DNA does!

Basically, until an analogous method for structure determination is produced, we have no choice but to turn to predictive methods to suggest the function of proteins that we do not have the structure for. This is important as it allows us, to some degree, to target proteins that we “think” may have an important effect. If we didn’t do this, we would simply be searching for needles in haystacks. This is where my research, and that of my group, kicks in. We attempt to take these sequences, these strings of beads, and predict the shape that they produce. Unfortunately, the scientific community as a whole still relatively sucks at this. Currently, we are only successful in predicting structures for very small proteins, and when anything more complex is attempted we, in general, fail utterly miserably. In my opinion, this is because the human body is by far the most complex system on the planet and so far we have tried to simply superimpose physics on top of the problem. This has failed miserably due to the sheer complexity and multitude of factors involved. Physics has mostly nice vacuums and pleasant equations; however, ask a physicist about a many-body system and they will cry. So many factors are involved that we must integrate them all together, which is why there are so many people working on this, and will be for many years to come. Well, I guess that’s good news for my future academic career.

Anyway, I hope this talk has given you some degree of insight into the work I do and you have learned something about how your body works. For those extremely interested, please feel free to approach me later and I will happily regale you with the exact aspect of protein folding I work on. But for now I would love to try and answer any questions you all have on the content contained in this talk.

Journal club: Half a century of Ramachandran plots

In last week’s journal club we delved into the history of Ramachandran plots (Half a century of Ramachandran plots; Carugo & Djinovic-Carugo, 2013).

Polypeptide backbone dihedral angles. Source: Wikimedia Commons, Bensaccount

50 years ago, Gopalasamudram Narayana Ramachandran et al. predicted the theoretically possible conformations of a polypeptide backbone. Backbone conformations can be described using three dihedral angles: ω, φ and ψ (shown in the figure above).

The first angle, ω, is restrained to either about 0° (cis) or about 180° (trans) due to the partial double bond character of the C-N bond. The φ and ψ angles are more interesting, and the Ramachandran plot of a protein is obtained by plotting φ/ψ angles of all residues in a scatter plot.

The original Ramachandran plot showed the allowed conformations of the model compound N-acetyl-L-alanine-methylamide, using a hard-sphere atomic model to keep the calculations simple. By using two different van der Waals radii for each element, positions on the Ramachandran plot could be classified into allowed regions, regions with moderate clashes, and disallowed regions (see Figure 3 (a) in the paper).

The model compound does not take side chains into account, but it does assume that there is a side chain. The resulting Ramachandran plot therefore does not describe the possible φ/ψ angles for Glycine residues, where many more conformations are plausible. On the other end of the spectrum are Proline residues. These have a much more restricted range of possible φ/ψ angles. The φ/ψ distributions of GLY and PRO residues are therefore best described in their own Ramachandran plots (Figure 4 in the paper).

Over time the Ramachandran plot has been improved in a number of ways. Instead of relying on theoretical calculations using a model compound, we can now rely on experimental observations, using high-quality, hand-picked data from the PDB. The way the Ramachandran plot is calculated has also changed: it can now be seen as a two-dimensional, continuous probability distribution, and can be estimated using a full range of smoothing functions, kernel functions, Fourier series and other models.
The modern Ramachandran plot is much more finely resolved than the original, and we now distinguish a number of well-defined regions which correlate with secondary protein structure motifs.

Ramachandran plots are routinely used for structure validation. The inherent circular argument (a good structure does not violate the Ramachandran plot; the plot is obtained by looking at the dihedral angles of good structures) sounds more daring than it actually is. The plot has changed over time, so it is not as self-reinforcing as one might fear. The Ramachandran plot is also not the ultimate guideline. If a new structure appears to violate the Ramachandran plot (which is based on a huge body of cumulative evidence), then this claim needs to be backed up by very good evidence, although a low number of violations can usually be justified. The Ramachandran plot is a local measure, so it does not take into account that the domains of a protein can exert a force on a few residues and just ‘crunch’ them into an unusual conformation.

The paper closes with a discussion of possible future applications and extensions, such as the distribution of a protein average φ/ψ and an appreciation of modern web-based software and databases that make use of or provide insightful analyses of Ramachandran plots.

Viewing ligands in twilight electron density

In this week’s journal club we discussed an excellent review paper by E. Pozharski, C. X. Weichenberger and B. Rupp investigating crystallographic approaches to protein-ligand complex elucidation. The paper assessed and highlighted the shortcomings of deposited PDB structures containing ligand-protein complexes. It then made suggestions for the community as a whole and for researchers making use of ligand-protein complexes in their work.

The paper discussed:

  • The difficulties in protein ligand complex elucidation
  • The tools, both qualitative and quantitative, for assessing the quality of protein-ligand structures
  • The methods used in their analysis of certain PDB structures
  • Some case studies visually demonstrating these issues
  • Some practical conclusions for the crystallographic community
  • Some practical conclusions for non-crystallographer users of protein-ligand complex structures from the PDB

The basic difficulties of ligand-protein complex elucidation

  • Ligands have less than 100% occupancy – sometimes significantly less – and thus will inherently show up less clearly in the overall electron density.
  • Ligands make a small contribution to the overall structure, and thus global quality measures, such as R-factors, will be affected only minutely by the ligand portion of the structure being wrong.
  • The original basis model needs to be used appropriately; the R-free data from the original apo model should be used to avoid model bias.

The following are the tools available to inspect the quality of agreement between protein structures and their associated data.

  • Visual inspection of the Fo-Fc and 2Fo-Fc maps, using software such as COOT, is essential to assess qualitatively whether a structure is justified by the evidence.
  • Use of local measures of quality, for example real-space correlation coefficients (RSCC).
  • Their own tool, which makes use of the above as well as resolution as a global quality measure.

Methods and results

In a separate publication they had analysed all of the PDB entries containing both ligands and published structure factors. In this sample they showed that 7.6% had RSCC values of less than 0.6, the arbitrary cut-off they use to determine whether the experimental evidence supports the model coordinates.

Figure: an incorrectly oriented ligand (a) and its correction (b). In all of these figures, blue is the 2mFo-DFc map contoured at 1σ, and green and red are positive and negative contouring of the mFo-DFc map at 3σ.

In this publication they visually inspected a subset of structures to assess in more detail how effective that arbitrary cutoff is and ascertain the reason for poor correlation. They showed the following:

(i) Ligands incorrectly identified as questionable, i.e. false positives (7.4%)
(ii) Incorrectly modelled ligands (5.2%)
(iii) Ligands with partially missing density (29.2%)
(iv) Glycosylation sites (31.3%)
(v) Ligands placed into electron density that is likely to originate from mother-liquor components
(vi) Incorrect ligands (4.7%)
(vii) Ligands that are entirely unjustified by the electron density (11.9%)

The first point to note from the above data is that the false-positive rate using an RSCC cut-off of 0.6 is 7.4%, which demonstrates that this value alone is not sufficient to accurately identify incorrect ligand coordinates. Within the other categories, all errors can be attributed to one of, or a combination of, the following two factors:

  • The inexperience of the crystallographer being unable to understand the data in front of them
  • The wilful denial of the data in front of the crystallographer in order that they present the data they wanted to see
Figure: a ligand incorrectly placed in density for a sulphate ion from the mother liquor (a) and its correction (b).

The paper observed that a disproportionate number of the poorly supported ligands were found at glycosylation sites. In some instances these observations were used to inform the biochemistry of the protein in question. Interestingly, this follows observations from almost a decade ago; however, many of the examples in the Twilight paper were taken from 2008 or later. This indicates that the community as a whole is not reacting to this problem and needs further prodding.

Figure: an incomplete glycosylation site inaccurately modelled.

Conclusions and suggestions

For inexperienced users looking at ligand-protein complexes from the PDB:

  • Inspect the electron density map using COOT, if it is available, to determine qualitatively whether there is evidence for the ligand being there
  • If using large numbers of ligand-protein complexes, use a script such as Twilight to find the RSCC value for each ligand, to give some confidence that the ligand is actually present as stated

For the crystallographic community:

  • Improved training of crystallographers to ensure errors due to genuine misinterpretation of the underlying data are minimised
  • More submission of electron-density maps; even if not made publicly available, they should form part of initial structure validation
  • Software is easy to use, but its output can be difficult to analyse critically

GPGPUs for bioinformatics

As the clock speed in computer Central Processing Units (CPUs) began to plateau, their data and task parallelism was expanded to compensate. These days (2013) it is not uncommon to find upwards of a dozen processing cores on a single CPU, each core capable of performing 8 calculations as a single operation. Graphics Processing Units were originally intended to assist CPUs by providing hardware optimised to speed up the rendering of highly parallel graphical data into a frame buffer. As graphical models became more complex, it became difficult to provide a single piece of hardware which implemented an optimised design for every model and every calculation the end user may desire. Instead, GPU designs evolved to be more readily programmable and to exhibit greater parallelism. Top-end GPUs are now equipped with over 2,500 simple cores and have their own CUDA or OpenCL programming languages. This new-found programmability allowed users the freedom to take non-graphics tasks which would otherwise have saturated a CPU for days and to run them on the highly parallel hardware of the GPU. This technique proved so effective for certain tasks that GPU manufacturers have since begun to tweak their architectures to be suitable not just for graphics processing but also for more general purpose tasks, thus beginning the evolution of the General Purpose Graphics Processing Unit (GPGPU).

Improvements in data capture and model generation have caused an explosion in the amount of bioinformatic data which is now available. Data which is increasing in volume faster than CPUs are increasing in either speed or parallelism. An example of this can be found here, which displays a graph of the number of proteins stored in the Protein Data Bank per year. To process this vast volume of data, many of the common tools for structure prediction, sequence analysis, molecular dynamics and so forth have now been ported to the GPGPU. The following tools are now GPGPU enabled and offer significant speed-up compared to their CPU-based counterparts:

Application | Description | Expected Speed-Up | Multi-GPU Support
Abalone | Models molecular dynamics of biopolymers for simulations of proteins, DNA and ligands | 4-29x | No
ACEMD | GPU simulation of molecular mechanics force fields, implicit and explicit solvent | 160 ns/day (GPU version only) | Yes
AMBER | Suite of programs to simulate molecular dynamics on biomolecules | 89.44 ns/day (JAC NVE) | Yes
BarraCUDA | Sequence mapping software | 6-10x | Yes
CUDASW++ | Open-source software for Smith-Waterman protein database searches on GPUs | 10-50x | Yes
CUDA-BLASTP | Accelerates NCBI BLAST for scanning protein sequence databases | 10x | Yes
CUSHAW | Parallelised short-read aligner | 10x | Yes
DL-POLY | Simulates macromolecules, polymers, ionic systems, etc. on a distributed-memory parallel computer | 4x | Yes
GPU-BLAST | Local search with fast k-tuple heuristic | 3-4x | No
GROMACS | Simulation of biochemical molecules with complicated bond interactions | 165 ns/day (DHFR) | No
GPU-HMMER | Parallelised local and global search with profile hidden Markov models | 60-100x | Yes
HOOMD-Blue | Particle dynamics package written from the ground up for GPUs | 2x | Yes
LAMMPS | Classical molecular dynamics package | 3-18x | Yes
mCUDA-MEME | Ultrafast scalable motif discovery algorithm based on MEME | 4-10x | Yes
MUMmerGPU | An open-source high-throughput parallel pairwise local sequence alignment program | 13x | No
NAMD | Designed for high-performance simulation of large molecular systems | 6.44 ns/day (STMV, 585x 2050s) | Yes
OpenMM | Library and application for molecular dynamics for HPC with GPUs | Implicit: 127-213 ns/day; explicit: 18-55 ns/day (DHFR) | Yes
SeqNFind | A commercial GPU-accelerated sequence analysis toolset | 400x | Yes
TeraChem | A general-purpose quantum chemistry package | 7-50x | Yes
UGENE | Open-source Smith-Waterman for SSE/CUDA, suffix-array-based repeats finder and dotplot | 6-8x | Yes
WideLM | Fits numerous linear models to a fixed design and response | 150x | Yes

It is important to note, however, that due to differences in how GPGPUs and CPUs handle floating-point arithmetic, results can and will differ between architectures, making a direct bit-for-bit comparison impossible. Instead, interval arithmetic may be useful to sanity-check that the results generated on the GPU are consistent with those from a CPU-based system.
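
As a hedged illustration of that kind of sanity check (assuming a GPU-enabled GROMACS 4.6-era build and a hypothetical input file run.tpr), the same simulation can be run with and without GPU offload and the resulting energies compared within a tolerance:

# Run the non-bonded kernels on the CPU only
mdrun -s run.tpr -deffnm cpu_run -nb cpu

# Run the same input with the non-bonded kernels offloaded to the GPU
mdrun -s run.tpr -deffnm gpu_run -nb gpu

# The trajectories won't match bit-for-bit; compare observables such as energies instead
gmxcheck -e cpu_run.edr -e2 gpu_run.edr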