Monthly Archives: June 2017

Le Tour de Farce v5.0

Every summer the OPIGlets go on a cycle ride across the scorched earth of Oxford in search of life-giving beer. Now in its fifth iteration, the annual Tour de Farce took place on Tuesday the 13th of June.

Establishments frequented included The Victoria, The Plough, Jacobs Inn (where we had dinner and didn’t get licked by their goats, certainly not), The Perch and finally The Punter. Whilst there were plans to go to The One for their inimitable “lucky 13s”, by 11PM we were alas too late, so we doubled down in The Punter.

Highlights of this year’s trip included certain members of the group almost immediately giving up when trying to ride a fixie, and subsequently being shown up by our unicycling brethren.

Computational immunogenicity reduction

In my last presentation, I talked about the article by King et al. describing a method for computationally removing T-cell receptor epitopes from proteins. The work could have a significant impact on the field of protein therapeutic design, where immunogenicity is a serious obstacle.

One of the major challenges when developing a protein therapeutic is the activation of the immune system by the drug and the subsequent production of antibodies against it, rendering the therapeutic ineffective. This process is known as immunogenicity. Immunogenicity is triggered by T-cell recognition of peptide epitopes displayed on the MHC (major histocompatibility complex). This recognition can be impeded by designing the protein therapeutic so that potential T-cell epitopes are removed from its surface. There has been some success in experimental T-cell epitope removal, but the process remains resource- and time-consuming.

In this work, King et al. created a function that assigns each residue a score measuring its propensity to be part of a T-cell epitope. The score consists of three parts. The first part is an SVM (Support Vector Machine) score calculated over each 15-residue window, which attempts to predict how likely the corresponding peptide is to bind the MHC; the SVM was trained on immunological data from the Immune Epitope Database (IEDB). The second part is calculated over each 9-residue window and compares the frequency of the 9-mer in host genomic data with its frequency in known epitope data (sequences occurring frequently in the human genome are rewarded, while sequences occurring in the known epitope data are penalised). The third part penalises any deviation from the original charge of the protein. These three parts are combined with a standard Rosetta score that measures the stability of the protein, with the weights assigned to each term calibrated on existing protein structures. The combined score is then used to rank candidate mutations in the sequence of the protein of interest according to their propensity to reduce immunogenicity, and the top-scoring mutations are combined in a greedy fashion.
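To make the moving parts concrete, here is a minimal Python sketch of how such a composite score and greedy mutation selection could be wired together. This is not the authors’ implementation: the window scorers, weights and crude charge model below are placeholders standing in for the IEDB-trained SVM, the 9-mer frequency term, the charge penalty and the Rosetta stability term.

def mhc_svm_score(window15):
    # Placeholder for the IEDB-trained SVM; returns a predicted MHC-binding propensity.
    return 0.0

def ninemer_frequency_score(window9):
    # Placeholder for the host-genome vs. known-epitope frequency comparison.
    return 0.0

def charge_penalty(seq, original_charge):
    # Crude net-charge model; the real term penalises deviation from the wild-type charge.
    charge = sum({'D': -1, 'E': -1, 'K': 1, 'R': 1}.get(aa, 0) for aa in seq)
    return abs(charge - original_charge)

def rosetta_stability(seq):
    # Placeholder for the Rosetta energy of the modelled mutant structure.
    return 0.0

# Assumed weights; in the paper these were calibrated on existing protein structures.
W_MHC, W_FREQ, W_CHARGE, W_STAB = 1.0, 1.0, 0.5, 0.25

def immunogenicity_score(seq, original_charge):
    mhc = sum(mhc_svm_score(seq[i:i + 15]) for i in range(len(seq) - 14))
    freq = sum(ninemer_frequency_score(seq[i:i + 9]) for i in range(len(seq) - 8))
    return (W_MHC * mhc + W_FREQ * freq
            + W_CHARGE * charge_penalty(seq, original_charge)
            + W_STAB * rosetta_stability(seq))

def greedy_design(seq, candidate_mutations, original_charge, n_rounds=4):
    # Greedily apply the single mutation that minimises the combined score
    # (lower = less immunogenic in this sketch) in each round.
    current = seq
    for _ in range(n_rounds):
        current = min(
            (current[:pos] + aa + current[pos + 1:] for pos, aa in candidate_mutations),
            key=lambda mutant: immunogenicity_score(mutant, original_charge),
        )
    return current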

The authors tested their method on the fluorescent reporter protein superfolder GFP (sfGFP) and on the toxin domain of the cancer therapeutic HA22. In the case of sfGFP the authors targeted the four top-scoring T-cell epitopes. They created eight different protein designs, all of which preserved the function of the original protein (fluorescence). The top-scoring design was selected for experimental immunogenicity testing, which showed that it had significantly reduced immunogenicity compared to the original protein. In the case of HA22 the authors created five designs, three of which displayed cytotoxicity at the same level as or higher than the original protein. The two most cytotoxic designs were further characterised experimentally for their propensity to induce an immune response, and both elicited a significantly reduced T-cell response.

Figure 1: Reduction of immunogenicity without loss of function. A) Three of the five designs show cytotoxicity at the same level as or higher than the original protein. B) Two of the three cytotoxicity-preserving designs show reduced immunogenicity.

Overall, this very interesting study showed that computational methods can be used successfully to reduce the immunogenicity of protein therapeutics, opening new avenues for computational protein design.


Computationally designing antibodies using a known binding motif

This blog post is about “Computational design of an epitope-specific Keap1 binding antibody using hotspot residues grafting and CDR loop swapping” by Liu et al., which I presented at group meeting in May.

Antibody design is a subject that I am closely interested in, especially methods with an important computational step. So far the go-to methods used by industry for designing an antibody are animal-model immunisation and/or phage display, with little or no use of computational methods. In the past few years, however, a few computational methods for the rational design of antibodies have been appearing. Firstly, there are those where a structure of the docked antibody-antigen complex already exists, and the antibody is further refined computationally to increase binding affinity. Then there are those where the paratope of the antibody is proposed by the designer against a specific target. The paper by Liu et al. that I am summarising here follows the latter idea in a neat way.

Liu et al. show that if a specific motif is important for binding a certain target, i.e. there is a crystal structure showing that the motif is buried in the target and/or its residues are predicted to be important for binding, it is worthwhile trying to graft that motif into the CDR region of an antibody (the region responsible for antibody specificity and affinity). Grafting of entire CDR loops has long been used for antibody humanisation, with many examples of CDR loops maintaining their conformation and binding specificity when transferred from a non-human scaffold to a human scaffold. This is somewhat aided by the fact that the start and end points of the grafted region are stable (i.e. the anchors are conformationally the same in all the antibody structures that we observe), which is not the case in Liu et al., where they graft a four-residue motif. The clever step they take, which makes it more likely that the motif maintains its conformation, is to identify an antibody that already has, in one of its CDR loops, a fragment with the same backbone conformation as the motif they are trying to graft. They then simply change the residue types to the ones known to bind the target. For the Nrf2 motif (which binds Keap1) they created five potential designs, which were expanded to ten using rational point mutations on the rest of the antibody to increase the chance of binding. Of the ten, two showed binding.
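As a toy illustration of that matching step, the sketch below scans CDR loop fragments for a stretch whose backbone superposes onto the motif backbone within some RMSD cutoff. The data layout, helper names and 0.5 Å cutoff are invented for illustration and are not the authors’ protocol.

import numpy as np

def kabsch_rmsd(P, Q):
    # Optimal-superposition RMSD between two (N x 3) backbone coordinate arrays.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

def find_graftable_sites(motif_coords, cdr_loops, cutoff=0.5):
    # cdr_loops: {antibody_id: {loop_name: (L x 3) backbone coordinates}}
    # Returns (antibody, loop, start) positions whose fragment backbone matches
    # the motif backbone within `cutoff` angstroms.
    n = len(motif_coords)
    hits = []
    for antibody, loops in cdr_loops.items():
        for loop_name, coords in loops.items():
            for start in range(len(coords) - n + 1):
                if kabsch_rmsd(motif_coords, coords[start:start + n]) < cutoff:
                    hits.append((antibody, loop_name, start))
    return hits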

One potential issue in a real scenario, however, is that not the entire binding site is copied onto the antibody, the motif being only a subset of the whole, which raises the possibility of low affinity and/or little chance of competing with the original protein (i.e. Nrf2) from which the motif was copied. This indeed turned out to be the case, with the initial designs showing low mM affinity. Liu et al. therefore worked on improving the initial designs by computationally swapping the H3 CDR of the initial designs for a set of other H3 structures seen in other solved antibodies, using the Rosetta design protocol. They retained the designs that had a predicted buried SASA of > 2000 Å², a change in energy of more than 20 REU and a shape complementarity greater than 0.6. These were then tested experimentally, with a few of them showing nM affinities, a result which, at this point in time, should make you very happy if your entire design phase was done computationally.
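A minimal sketch of that final filtering step, assuming each H3-swapped design has been summarised by the three metrics mentioned above (the field names and example values are mine, not from the paper):

def passes_filters(design):
    # Thresholds as described above: buried SASA > 2000 Å², energy change > 20 REU,
    # shape complementarity > 0.6.
    return (design["buried_sasa"] > 2000.0
            and design["delta_energy"] > 20.0
            and design["shape_complementarity"] > 0.6)

candidates = [
    {"name": "H3_swap_01", "buried_sasa": 2150.0, "delta_energy": 24.5, "shape_complementarity": 0.65},
    {"name": "H3_swap_02", "buried_sasa": 1800.0, "delta_energy": 31.0, "shape_complementarity": 0.71},
]
retained = [d for d in candidates if passes_filters(d)]
print([d["name"] for d in retained])  # -> ['H3_swap_01']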

Using bare git repos

Git is a fantastic method of doing version control of your code. Whether it’s to share with collaborators or just for your own reference, it almost acts as an absolute point of reference for a wide variety of applications and needs. The basic concept of git is that you have your own folder (in which you edit your code, etc.) and you commit/push those changes to a git repository. Note that Git is a version control SYSTEM, while GitHub, BitBucket etc. are services that host repositories using Git as their backend!

The basic procedure of git can be summarised as:

1. Change/add/delete files in your current working directory as necessary. This is followed by a git add or git rm command.
2. “Commit” those changes; we usually put a message reflecting the change from step 1, e.g. git commit -m "I changed this file because it had a bug before."
3. You “push” those changes with git push to a git repository (e.g. hosted by BitBucket, GitHub, etc.); this is sort of like saying “save” that change.

Typically we use services like GitHub to HOST a repository. We then push our changes to that repository (or git pull from it) and all is good. However, a powerful concept to bear in mind is the ‘bare’ git repository. This is especially useful if you have code that’s private and should be strictly kept within your company/institution’s server, yet you don’t want people messing about too much with the master version of the code. The diagram below makes the bare git repository concept quite clear:

The bare repo acts as a “master” version of sorts, and every other “working” (non-bare) repo pushes changes to it and pulls changes from it.

Let’s start with the easy stuff first. Every git repository (e.g. the one you’re working on in your machine) is a WORKING/NON-BARE git repository. This shows the files of your code as you expect them, e.g. *.py or *.c files, etc. A BARE repository is a folder hosted by a server which only holds git OBJECTS. In it, you’ll never see a single .py or .c file, but a bunch of folders and text files that look nothing like your code. By the magic of git, these are easily translated back into .py or .c files (basically a version of the working repo) when you git clone it. Since the bare repo doesn’t contain any of the actual code as plain files, you can safely assume that no one can really mess with the master version without having gone through the process of git add/commit/push, making everything documented. To start a bare repo…

# Start up a bare repository in a server
user@server:$~  git init --bare name_to_repo.git

# Go back to your machine then clone it
user@machine:$~ git clone user@server:/path/to/repo/name_to_repo.git

# This will clone an empty git repo onto your machine
cd name_to_repo
ls
# Nothing should come up.

touch README
echo "Hello world" >> README
git add README
git commit -m "Adding a README to initialise the bare repo."
git push origin master # This pushes to your origin, which is user@server:/path/to/repo/name_to_repo.git

If we check our folders, we will see the following:

user@machine:$~ ls name_to_repo/
README # only the README exists in the working copy

user@server:$~ ls /path/to/repo/name_to_repo.git/
branches/ config description HEAD hooks/ info/ objects/ refs/

Magic! README doesn’t show up as a plain file on the server. Again, this is because that repo is BARE, so the file we pushed is stored only as git objects. But when we clone it on a different machine…

user@machine2:$~ git clone user@server:/path/to/repo/name_to_repo.git
user@machine2:$~ ls name_to_repo/
README
user@machine2:$~ cat name_to_repo/README
Hello world # magic!

This was a bit of a lightning tour, but hopefully you can see that the purpose of a bare repo is to let you host code as a “master version” without having to worry about people touching its contents directly; they only get the code once they do a git clone. Once they clone and push changes, everything is documented via git, so you’ll know exactly what’s going on!

Experimental Binding Modes of Small Molecules in Protein-Ligand Docking

Protein-ligand docking tends to be very good at generating binding modes that resemble experimental binding modes from X-ray crystallography and other methods (assuming we have a high-quality structure…), but it is also very good at generating plausible models for ligands that don’t bind. These so-called “false positives” lead to reduced accuracy in structure-based virtual screening campaigns.

Structure-based methods are not the only way of approaching virtual screening: when all we know is the chemical structure of an active molecule, but nothing about its target (or targets), we can use ligand-based virtual screening methods, which operate on the principle of molecular similarity (Maggiora et al., 2014).
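As a small illustration of the similarity principle, the snippet below ranks a couple of library molecules against a known active by Tanimoto similarity on Morgan fingerprints using RDKit; the SMILES strings are arbitrary placeholders, not data from any screening campaign.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    # Morgan (ECFP4-like) bit fingerprint of a molecule given as SMILES.
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

active = fingerprint("CC(=O)Oc1ccccc1C(=O)O")   # known active (aspirin, as a placeholder)
library = {"mol_a": "CC(=O)Nc1ccc(O)cc1",        # paracetamol
           "mol_b": "c1ccccc1"}                  # benzene

scores = {name: DataStructs.TanimotoSimilarity(active, fingerprint(smi))
          for name, smi in library.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))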

But what if we combine both methods?
