Viewing 3D molecules interactively in Jupyter iPython notebooks

Greg Landrum, curator of the invaluable open-source cheminformatics API RDKit, recently blogged about viewing molecules in a 3D window within a Jupyter-hosted IPython notebook (as long as your browser supports WebGL, that is).

The trick is to use py3Dmol. It’s easy to install:

pip install py3Dmol

py3Dmol is built on 3Dmol.js (Rego & Koes, 2015), an object-oriented, WebGL-based JavaScript library for online molecular visualization; here's a nice summary of the capabilities of 3Dmol.js. Its features include:

  • support for pdb, sdf, mol2, xyz, and cube formats
  • parallelized molecular surface computation
  • sphere, stick, line, cross, cartoon, and surface styles
  • atom property based selection and styling
  • labels
  • clickable interactivity with molecular data
  • geometric shapes including spheres and arrows

I tried a simple example and it worked beautifully:

import py3Dmol
view = py3Dmol.view(query='pdb:1hvr')
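
You can also pick the representation and colouring yourself before the view is displayed. A minimal sketch using py3Dmol's setStyle and show methods (the cartoon/spectrum choice here is arbitrary):

import py3Dmol
view = py3Dmol.view(query='pdb:1hvr')
view.setStyle({'cartoon': {'color': 'spectrum'}})  # cartoon representation, coloured along the chain
view.show()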


The 3Dmol.js website summarizes how to view molecules, along with how to choose representations, how to embed it, and even how to develop with it.


Nicholas Rego & David Koes (2015). “3Dmol.js: molecular visualization with WebGL”.
Bioinformatics, 31 (8): 1322-1324. doi:10.1093/bioinformatics/btu829

SMEs in Research

Scientific research relies on collaboration: between academics across the world, between big and small groups of people, and between companies and universities. Many people's first thought of companies involved in academia will be large multinational corporations, such as pharmaceutical or aerospace companies. However, mutually beneficial relationships also exist between smaller companies and academia. Scientific spin-out companies, such as Oxford Nanopore Technologies, Theta Technologies or Reinnervate, are one way an idea held by researchers at a university is expanded into a commercial product or service. However, there are many other ways and reasons for scientists to collaborate with small companies.

Small and medium-sized enterprises (SMEs) are independent businesses with up to 249 employees; they represent over 99% of all UK and EU businesses, and 45-50% of turnover and employment. But why would researchers be interested in involvement with SMEs? Two reasons stand out: access to unique intellectual property and innovation within the SMEs, and access to funding targeted at fostering research-led projects in SMEs. Innovate UK supports growth by enabling and funding innovative opportunities. SMEs can access this support through Knowledge Transfer Partnerships (KTPs), a three-way partnership between a graduate, an academic institution and a business. These KTPs can last between 12 months and 3 years, and provide financial support for academics to monitor the graduate's work. KTP projects lead to an average of 2 papers per project, and are attractive to SMEs as they fund only 33% of the project cost.

Impact of research outside academia is becoming increasingly important to identify, as impact case studies form part of the Research Excellence Framework (REF), making many sources of funding contingent on providing a strong case for the wider impact of the research. Work with innovation-focused SMEs can provide a focus for research impact in engagement and economic terms. One example is a REF case study highlighting the impact of spintronics research on the development of non-contact sensors through a spin-out company, Salunda.

Plotting and storing a 3D network in R

A simple toy example of a three layered network:

Note 1: In order to view the 3D plots, Mac users will need XQuartz installed.

#The igraph and rgl packages are used below; another package that might be needed is "rglwidget". The function writeWebGL will show an error stating if rglwidget is required.
library(igraph)
library(rgl)

######The basics######
#1) Create a "food" network (three layers), one small-world graph per layer
g1 <- sample_smallworld(dim = 1, size = 5, nei = 2, p = .5, loops = FALSE, multiple = FALSE)
g2 <- sample_smallworld(dim = 1, size = 10, nei = 2, p = .2, loops = FALSE, multiple = FALSE)
g3 <- sample_smallworld(dim = 1, size = 30, nei = 1, p = .5, loops = FALSE, multiple = FALSE)
g123 <- g1 + g2 + g3 #disjoint union of the three layers

#Create more edges btw layers
g123 <- rewire(g123, each_edge(prob = .4, loops = FALSE, multiple = FALSE))
ne <- 15; g123 <- add_edges(g123, edges = as.vector(t(cbind(sample(1:vcount(g1), size = ne, replace = TRUE), sample((vcount(g1)+1):vcount(g123), size = ne, replace = TRUE))))) #top layer
ne <- 30; g123 <- add_edges(g123, edges = as.vector(t(cbind(sample((vcount(g1)+1):(vcount(g1)+vcount(g2)), size = ne, replace = TRUE), sample((vcount(g1)+vcount(g2)+1):vcount(g123), size = ne, replace = TRUE))))) #second layer

#A quick plot of the graph
plot(g123, vertex.size = 5, vertex.label = NA)

#Create 3d coordinates of the network layout
circpos = function(n, r = 1){ #Coordinates on a circle of radius r
 theta = seq(0, 2*pi, length.out = n + 1)[-1]
 cbind(r*cos(theta), r*sin(theta))
}
lay = rbind(cbind(circpos(vcount(g1), r = 1), runif(n = vcount(g1), -1, 1)),
 cbind(circpos(vcount(g2), r = 2), runif(n = vcount(g2), 6, 7)),
 cbind(circpos(vcount(g3), r = 4), runif(n = vcount(g3), 13, 17)))

#2d plot using the previous layout (only the first two coordinates)
plot(g123, layout = lay[,1:2], vertex.size = 5, vertex.label = NA)


#3D graph plot
#Add some colour to nodes and edges
nodecols = c(rep("orange", vcount(g1)), rep("green", vcount(g2)), rep("pink", vcount(g3))) #one colour per layer
#Colour each edge according to the lowest-numbered layer it touches
edgecols = function(elist, cols, grouplist){
 whatcol = rep(length(cols), nrow(elist))
 for(i in 1:nrow(elist)){
  for(k in length(cols):1){
   if( k * (length( intersect(elist[i,], grouplist[[k]]) ) > 0) ){
    whatcol[i] = min(whatcol[i], k)
   }
  }
 }
 cols[whatcol]
}

#Open 3d viewer
rglplot(g123, layout = lay, vertex.size = 5, vertex.label = NA, vertex.color = nodecols,
 edge.color = edgecols(elist = get.edgelist(g123, names = FALSE), cols = c("orange","green","pink"),
 grouplist = list(1:vcount(g1), (vcount(g1)+1):(vcount(g1)+vcount(g2)), (vcount(g1)+vcount(g2)+1):vcount(g123))))


###Storing the plot in an html file###

dirfolder = "..." #your directory of choice
open3d() #use open3d, in order to save the plot
rglplot(g123, layout = lay, vertex.size = 5, vertex.label = NA, vertex.color = nodecols,
 edge.color = edgecols(elist = get.edgelist(g123, names = FALSE), cols = c("orange","green","pink"),
 grouplist = list(1:vcount(g1), (vcount(g1)+1):(vcount(g1)+vcount(g2)), (vcount(g1)+vcount(g2)+1):vcount(g123))))
#Fix the view
rgl.viewpoint(theta=90, phi=0)

#Save a static 2d image:
rgl.snapshot(paste(dirfolder,"a_png_pic_0.png",sep=""), fmt="png", top=TRUE)

#Save the plot in a .html file:
rglfolder=writeWebGL(dir = paste(dirfolder,"first_net3d",sep=""), width=700)

#The previous function should create a file called index.html inside the folder "first_net3d". By opening this file in a browser (with JavaScript enabled) the 3d plot will be displayed again.
#Also the following command will open the plot in the browser:
browseURL(paste(dirfolder, "first_net3d/index.html", sep = "")) #open the saved page

Note 2: In order to view the .html file, JavaScript should be enabled in the browser. (Here is an example of how to do this for Safari.)

Although not covered in the previous script, further options are available, such as edge/vertex size and the ability to control each of the nodes and edges in the graph independently. Here is an example that makes more use of these options:


3d network representing a T cell receptor. Edges are coloured according to a relevant path found between the bottom green node and the upper red node cluster.

T cell receptor (in blue), binding to a peptide (in red).


Is contacts-based protein-protein affinity prediction the way forward?

The binding affinity of protein interactions is useful information for a range of protein engineering and protein-protein interaction (PPI) network challenges. Obvious applications include the development of therapeutic antibodies to given drug targets or the engineering of novel interfaces for synthetic protein complexes. An accurate model would furthermore allow us to predict a large proportion of affinities in existing PPI networks, and enable the identification of new PPIs, which is critical for our ability to model protein network dynamics effectively.


“The design of an ideal scoring function for protein−protein docking that would also predict the binding affinity of a complex is one of the challenges in structural proteomics.” Adapted from Kastritis, Panagiotis L., and Alexandre MJJ Bonvin. Journal of proteome research 9.5 (2010): 2216-2225.

In last week’s paper, a new binding-affinity prediction method based on interfacial contact information was described. Contacts have long been used in docking methods, but surprisingly this was the first time that binding affinity was predicted with them. Largely, this was due to the lack of a suitable benchmark data set containing structural as well as affinity data. In 2011, however, Kastritis et al. presented a curated database of 144 non-redundant protein–protein complexes with experimentally determined Kd (ΔG) as well as X-ray structures.
Using this data set they trained and validated their method, compared it against others, and concluded that interfacial contacts `can be considered the best structural property to describe binding strength`. This claim may be true, but as we discussed in the meeting there is still some work to do before we take this model and run with it. A number of flags were raised:

  • Classification of experimental methods into reliable and non-reliable is based on what gives the best results with their method. Given that different types of protein complexes are often measured with different methods, some protein classes for which contact-based predictions are less effective may be excluded.
  • The number of parameters for model 6 is problematic without exact AIC information. As Lyuba rightly pointed out, the intercept in model 6 `explodes`. It is no surprise that the correlation improves with more parameters. Despite their AIC analysis, overfitting is still a worry due to the lack of details presented in the paper.


  • Comparison against other methods is biased in their favour: their method was trained on the same data set, the others were not. To ensure a fair comparison, all methods should be trained on the same data set. Of course this is hard to do in practice, but the fact remains that a comparison of methods trained on different data sets will be flawed.

Paper: Vangone, A. & Bonvin, A. M. J. J. (2015). Contacts-based prediction of binding affinity in protein-protein complexes. eLife, 4, e07454.

Physical-chemical property predictors as command line tools

The Instant JChem Suite, from ChemAxon, is a fantastic set of software designed for chemists. It allows easy and simple database management to store both chemical and non-chemical data. It also contains a plethora of physical-chemical prediction and visualisation tools that can be utilised by chemists and computational scientists alike.

I personally believe that the hidden gem within the suite is the availability of these predictive tools on the command line, found within the ChemAxon Calculator (cxcalc). In addition, calculator plug-ins have also been developed by external developers. This allows you to incorporate the powerful predictive tools of ChemAxon into your larger workflows, with a little scripting.

For example, it can be used to predict the dominant protonation state of a ligand before use in MD or docking studies, with the majormicrospecies tool. You can input and output all major file types including SDF, PDB and MOL2 using commands such as:

cxcalc [Input_File].sdf -o [Output_File].sdf majormicrospecies -H [pH] -f [Output_File_Type]:H
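
Since cxcalc is an ordinary command-line program, it is straightforward to call from a script. Below is a minimal Python sketch using subprocess; the function name, file names and pH value are hypothetical, and the argument layout simply mirrors the example command above:

import subprocess

def protonate_sdf(in_sdf, out_sdf, ph):
    # Call cxcalc's majormicrospecies plugin, writing the dominant protonation
    # state at the given pH to out_sdf (arguments mirror the command shown above).
    cmd = ["cxcalc", in_sdf,
           "-o", out_sdf,
           "majormicrospecies",
           "-H", str(ph),
           "-f", "sdf:H"]
    subprocess.check_call(cmd)

protonate_sdf("ligand.sdf", "ligand_protonated.sdf", 7.4)  # hypothetical file names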

You can easily find the available calculator plugins and how to construct input commands using the cxcalc -h command, or the available online documentation. I would thoroughly recommend looking at the tools available and thinking about how you could incorporate them into your workflows.

Counting Threads

When someone talks about “counting threads” the first thing that you think of is probably shopping for bed sheets. But this post is not about the happy feeling of drifting off to sleep on smooth, comfortable Egyptian cotton. This post is about that much less happy feeling when you want to quickly run a bit of code on a couple of data sets to finish the results section of a thesis chapter, and you see this:

Luis blocking the server again....


Obviously someone (*cough*Luis*cough*) is having some fun on the server without nice-ing their code to allow people who are much less organized than they should be (*cough*me*cough*) to do a quick last-minute data analysis run.

The solution: confront the culprit with their excessive server usage (Note: alternatives include manual server restart with a power cord to make the world your enemy – not recommended).

So… now we just need to find out how much of the server “ospina” is using. Screenshots won’t convince him… and we can’t take enough screenshots to show the extent of the server-hogging with his 1000s of processes anyway. We need to count…

Luckily there is a handy function called pgrep for finding out information about processes. This is basically a ‘ps | grep’ function which has a bunch of options to reflect the many ways it can be used. We see ospina is running R, so here goes:

pgrep -c R

The -c flag counts processes and the pattern matches the command name that was run. But yeah, it turns out this wasn't the best idea ever. A lot of people are running R (as might be expected in the Statistics Department), and you get a number that is far too high to come from one person. We need to be more specific in our query, so let's go back to the ps command. Second attempt:

ps -Af | grep ospina | wc -l

What we’re doing now is first showing all processes that are run on the server (ps -A) also showing details of the command run and who ran it (-f flag). Then we’re finding the ones that are labelled with our server culprit (grep ospina) and counting the lines we find. There are annoyingly still a few problems with this approach.

  1. We just ran this command on the server and thus will also count a command like grep --color=auto ospina,
  2. User “ospina” is probably running a few more things than just his R command (like ssh-ing into the server and maybe a couple of screens)
  3. We get a number that looks far lower than what we expected just by visual inspection.

So… what happened? We can fix problems 1 and 2 by just piping to a further grep command. But problem 3 is different. As it turns out, our culprit is running multiple threads from the same process (which is also why you find so many chrome instances on htop for example). We just counted processes, when really the server is being occupied by his multi-threading exploits. So… if you want to back up your complaint with a nice number, here’s your baby:

ps -ALf | grep ospina | grep R-3.3 | wc -l

The -L flag displays all threads instead of only the processes. I further grep for R-3.3, as it turns out he is using a specific version of R, which narrows the count to his jobs. Otherwise it also helps to grep against the input arguments given to the command. If your fingers get too tired to press the shift-key that often, ps -ALf is equivalent to ps -eLf.

For now: moan away, folks!


Disclaimer: Any scenarios alluded to in the above text are fictitious and do not represent the behaviour of the individuals mentioned. Read: obviously I do not do my analysis last-minute.

The Protein World

This week’s issue of Nature has a wonderful “Insight” supplement titled, “The Protein World” (Vol. 537 No. 7620, pp 319-355). It begins with an editorial from Joshua Finkelstein, Alex Eccleston & Sadaf Shadan (Nature, 537: 319, doi:10.1038/537319a), and introduces four reviews, covering:

  • the computational de novo design of proteins that spontaneously fold and assemble into desired shapes (“The coming of age of de novo protein design“, by Po-Ssu Huang, Scott E. Boyken & David Baker, Nature, 537: 320–327, doi:10.1038/nature19946). Baker et al. point out that much of protein engineering until now has involved modifying naturally-occurring proteins, but assert, “it should now be possible to design new functional proteins from the ground up to tackle current challenges in biomedicine and nanotechnology”;
  • the cellular proteome is a dynamic structural and regulatory network that constantly adapts to the needs of the cell—and through genetic alterations, ranging from chromosome imbalance to oncogene activation, can become imbalanced due to changes in speed, fidelity and capacity of protein biogenesis and degradation systems. Understanding these complex systems can help us to develop better ways to treat diseases such as cancer (“Proteome complexity and the forces that drive proteome imbalance“, by J. Wade Harper & Eric J. Bennett, Nature, 537: 328–338, doi:10.1038/nature19947);
  • the new challenger to X-ray crystallography, the workhorse of structural biology: cryo-EM. Cryo-electron microscopy has undergone a renaissance in the last 5 years thanks to new detector technologies, and is starting to give us high-resolution structures and new insights about processes in the cell that are just not possible using other techniques (“Unravelling biological macromolecules with cryo-electron microscopy“, by Rafael Fernandez-Leiro & Sjors H. W. Scheres, Nature, 537: 339–346, doi:10.1038/nature19948); and
  • the growing role of mass spectrometry in unveiling the higher-order structures and composition, function, and control of the networks of proteins collectively known as the proteome. High resolution mass spectrometry is helping to illuminate and elucidate complex biological processes and phenotypes, to “catalogue the components of proteomes and their sites of post-translational modification, to identify networks of interacting proteins and to uncover alterations in the proteome that are associated with diseases” (“Mass-spectrometric exploration of proteome structure and function“, by Ruedi Aebersold & Matthias Mann, Nature, 537: 347–355, doi:10.1038/nature19949).

Baker and colleagues point out that the majority of de novo designed proteins are built around a single, deep minimum-energy state, and that we have a long way to go to mimic the subtleties of naturally-occurring proteins: things like allostery, signalling, recessed binding pockets for small molecules, functional sites, and hydrophobic binding interfaces all present their own challenges. Only by increasing our understanding and developing better models and computational tools will we be able to accomplish this.

A beginner’s guide to Rosetta

Rosetta is a big software suite, and I mean really big. It includes applications for protein structure prediction, refinement, docking, and design, as well as specific adaptations of these applications (and others) to particular cases, for example protein-protein docking of membrane proteins to form membrane protein complexes. Some applications are available through one of the hassle-free online servers (e.g. ROSIE, Robetta), which might work well if you've got just a few tests you would like to try using standard parameters and protocols. However, it's likely that you will want to download and install a version if you're interested in carrying out a large amount of modelling, or in using an unusual combination of steps or scoring functions. This is not a trivial task, as the source code is a 2.5G download, and then your machine will be busy compiling for some time (around 5 hours on two cores on my old laptop). Alternatively, if the protocols and objects you're interested in are part of PyRosetta, this is available in a pre-compiled package for most common operating systems and is less than 1G.

This brings me to the different ways to use Rosetta. Most applications come as an executable which you can find in Rosetta/main/source/bin/ after completing the build. There is documentation available on how to use most of these, and on the different flags which can be used to input PDB structures and parameters. Some applications can be run using RosettaScripts, which uses an xml file to define the protocol, including scoring functions, movers and other options. In this case, Rosetta/main/source/bin/rosetta_scripts.* is run, which will read the xml and execute the required protocol.


An example RosettaScript, used for the MPrelax protocol

PyRosetta is even more flexible, and relatively easy to use for anyone accustomed to programming in Python. There are Python bindings for the fast C++ objects and movers, so the increased usability is generally not greatly compromised by slower speeds. One of the really handy things about PyRosetta is the link to PyMOL, which can be used to view the trajectory of your protein moving while a simulation is running. Just add the following to your .pymolrc file in your home directory to set up the link every time you open PyMOL:
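
The exact line depends on your installation; a typical example, assuming the PyMOLPyRosettaServer.py script bundled with PyRosetta (its name and location vary between releases), would be:

run /path/to/PyRosetta/PyMOLPyRosettaServer.py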


When it comes to finding your way around the Rosetta package, there are a few things it is very useful to know to start with. The demos directory contains plenty of useful example scripts and instructions for running your first jobs. In demos/tutorials you will find introductions to the main concepts. The demos/protocol_capture subdirectory is particularly helpful, as most papers which report a new Rosetta protocol will deposit here the scripts required to reproduce their results. These may not currently be the best methods to approach a problem, but if you have found a research article describing some results which would be useful to get for your system, they are a good starting point to learn how to make an application work. Then the world is your oyster as you explore the many possible options and inputs to change and add!

What happens to the Human Immune Repertoire over time?

Last week during the group meeting we talked about a pre-print publication from the Ippolito group in Austin, TX (here). The authors monitored the antibody repertoire from bone marrow plasma cells (80% of circulating antibodies) over a period of 6.5 years. For comparison, they monitored another individual over a period of 2.3 years. In a nutshell, the paper is like Picture 1 — just with antibodies.

This is what the paper talks about in a nutshell. How the antibody repertoire looks like taken at different timepoints in an individual's lifetime.



The main question that they aimed to answer was: ‘Is the human antibody repertoire stable over time?’ It is plausible to think that there should be some ‘ground distribution’ of antibodies that are present over time and act as a default safety net. However, we know that the antibody makeup can change radically, especially when challenged by antigen. Therefore, it is interesting to ask: does the immune repertoire maintain a fairly stable distribution or not?

Firstly, it is necessary to define what we consider a stable distribution of the human antibody repertoire. Antibodies undergo VDJ recombination as well as somatic hypermutation, meaning that the >10^10 antibodies a human is estimated to be capable of producing cover a very wide range of possible variation. In this publication the authors mostly addressed this question by looking at how the usage of the possible V, D and J genes and their combinations changes over time.

Seven snapshots of the immune repertoire were taken from the individual monitored over 6.5 years and two from the individual monitored over 2.3 years. Looking at the usage of the V, D and J genes over time, the proportions at each of the seven time points appear quite stable (Pic 2). The authors claim a similar result when looking at the combinations. This would suggest that our antibody repertoire is biased to sample ‘similar’ antibodies over time. These frequencies were also compared to those of the individual sampled over the period of 2.3 years, and it appears that the differences between the two might not be great.

How the frequencies of V, D  and J genes change (not) over 6.5 years in a single individual


It is a very interesting study which hints that we (humans) might be sampling antibodies from a biased distribution — meaning that our bodies might have developed a well-defined safety net which is capable of raising an antibody towards an arbitrary antigen. It is an interesting starting point, and to further check this hypothesis it would be necessary to carry out such a study on multiple individuals (as a minimum, to see if there really are no differences between us — which would at the same time hint that the repertoire does not change over time).


Processing large files using python: part deux

Last week I wrote a post on some of the methods I use in python to efficiently process very large datasets. You can find that here. Roughly, it details how one can break a large file into chunks which can then be passed onto multiple cores to do the number crunching. Below I expand upon this, first creating a parent class which turns a given (large) file into chunks. I construct it in a manner in which child classes can easily be created and tailored for specific file types, and give some examples. Finally, I give some wrapping functions for use in conjunction with any of the chunkers so that the chunks can be processed using multiple cores.

First, and as an aside, I was asked after the previous post at what scale these methods should be considered. A rough answer would be when the size of the data becomes comparable to the available RAM. A better answer would be when the overhead of reading each individual line (or entry) is greater than the operation performed on that entry. Here is an example of this case, though it isn't really that fair a comparison:

>>> import timeit,os.path
>>> os.path.getsize("Saccharomyces_cerevisiae.R64-1-1.cds.all.fa")
>>> timeit.timeit('f = open("Saccharomyces_cerevisiae.R64-1-1.cds.all.fa");sum([l.count(">") for l in f])',number=10)
>>> timeit.timeit('f = open("Saccharomyces_cerevisiae.R64-1-1.cds.all.fa");sum([c.count(">") for c in iter(lambda: f.read(1024*1024),"")])',number=10)

For a small 10MB FASTA file, we count the number of sequences present in a fifth of the time using chunks. I should be honest though, and state that the speedup is mostly due to not having to identify newline characters in the chunking method; but nevertheless, it shows the power one can have using chunks. For a 14GB FASTA file, the times for the chunking (1MB chunks) and non-chunking methods are 55s and 130s respectively.

Getting back on track, let’s turn the chunking method into a parent class from which we can build on:

import os.path

class Chunker(object):

    #Iterator that yields the start position and size of a file chunk (default size 1MB).
    @classmethod
    def chunkify(cls,fname,size=1024*1024):
        fileEnd = os.path.getsize(fname)
        with open(fname,'r') as f:
            chunkEnd = f.tell()
            while True:
                chunkStart = chunkEnd
                f.seek(size,1) #jump ahead by roughly one chunk
                cls._EOC(f) #then move the pointer to a sensible boundary
                chunkEnd = f.tell()
                yield chunkStart, chunkEnd - chunkStart
                if chunkEnd >= fileEnd:
                    break

    #Move file pointer to end of chunk
    @staticmethod
    def _EOC(f):
        f.readline() #finish the current (possibly partial) line

    #read chunk
    @staticmethod
    def read(fname,chunk):
        with open(fname,'r') as f:
            f.seek(chunk[0])
            return f.read(chunk[1])

    #iterator that splits a chunk into units
    @staticmethod
    def parse(chunk):
        for line in chunk.splitlines():
            yield line

In the above, we create the class Chunker, which has the class method chunkify as well as the static methods _EOC, read, and parse. The method chunkify does the actual chunking of a given file, returning an iterator that yields tuples containing a chunk's start and size. It's a class method so that it can make use of the _EOC (end of chunk) static method to move the pointer to a suitable location to split the chunks. For the simplest case, this is just the end/start of a newline. The read and parse methods respectively read a given chunk from a file and split it into units (single lines in the simplest case). We make the non-chunkify methods static so that they can be called without the overhead of creating an instance of the class.
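
As a quick illustration of how the base class is used on its own (the file name below is just a placeholder), one can loop over the chunks, read each one back, and count the units it contains:

# Print each chunk's start offset, size, and number of lines.
for chunkStart, chunkSize in Chunker.chunkify("my_large_file.txt"):
    chunk = Chunker.read("my_large_file.txt", (chunkStart, chunkSize))
    print chunkStart, chunkSize, len(list(Chunker.parse(chunk)))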

Let's now create some children of this class for specific types of files. First, one of the most well-known file types in bioinformatics, FASTA. Below is a segment of a FASTA file. Each entry has a header line, which begins with a '>', followed by a unique identifier for the sequence. After the header line, one or more lines follow giving the sequence. Sequences may be either protein or nucleic acid sequences, and they may contain gaps and/or alignment characters.
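
For illustration, here is a small made-up snippet with this structure (identifiers and sequences are invented):

>example_seq_1 made-up nucleotide sequence
ATGGTACTGACTATCCGTGGCGGTAAGTTAGCG
GCTAAAGGTCTGGAATCCGTTGCAGAAAGCTTG
>example_seq_2 made-up nucleotide sequence
ATGACTGAACAGTTGAGCCTGATCAAAGCT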


And here is the file type specific chunker:

from Bio import SeqIO
from cStringIO import StringIO

class Chunker_FASTA(Chunker):

    #Move file pointer to just before the next FASTA header
    @staticmethod
    def _EOC(f):
        l = f.readline() #incomplete line
        p = f.tell()
        l = f.readline()
        while l and l[0] != '>': #find the start of the next sequence
            p = f.tell()
            l = f.readline()
        f.seek(p) #revert one line, so the header begins the next chunk

    #iterator that yields a SeqRecord for each entry in the chunk
    @staticmethod
    def parse(chunk):
        fh = StringIO(chunk)
        for record in SeqIO.parse(fh,"fasta"):
            yield record
        fh.close()

We update the _EOC method to find where one entry finishes and the next begins by locating '>', following which we rewind the file handle pointer to the start of that line. We also update the parse method to use the FASTA parser from the Biopython module, yielding SeqRecord objects for each entry in the chunk.

For a second, slightly harder example, here is one designed to work with output produced by bowtie, an aligner of short reads from NGS data. The format consists of tab-separated columns, with the ID of each read located in the first column. Note that a single read can align to multiple locations (up to 8 by default!), hence the same ID appearing in multiple lines. A small example section of the output is given below.
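
As a stand-in for real bowtie output, here is a made-up, simplified illustration of the layout (only the read ID in the first column matters for the chunking; real output contains further columns):

read_0001	+	chrI	15203	ACGTACGTAC	IIIIIIIIII
read_0001	-	chrV	88210	GTACGTACGT	IIIIIIIIII
read_0002	+	chrII	4021	TTGACCATGA	IIIIIIIIII
read_0002	+	chrX	91833	TTGACCATGA	IIIIIIIIII
read_0003	-	chrIII	555	CAGTTGGATC	IIIIIIIIII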


with the corresponding chunker given by:

class Chunker_BWT(Chunker):

    #Move file pointer to the first alignment of the next read
    @staticmethod
    def _EOC(f):
        l = f.readline() #incomplete line
        p = f.tell()
        l = f.readline()
        if not l: return #EOF
        readID = l.split()[0]
        while l and (l.split()[0] == readID): #keep reading lines until the read IDs no longer match
            p = f.tell()
            l = f.readline()
        f.seek(p) #revert one line, so the new read starts the next chunk

    #iterator that yields all the alignments of a single read as one entry
    @staticmethod
    def parse(chunk):
        lines = chunk.splitlines()
        N = len(lines)
        i = 0
        while i < N:
            readID = lines[i].split('\t')[0]
            j = i
            while j < N and lines[j].split('\t')[0] == readID:
                j += 1
            yield lines[i:j]
            i = j

This time, the end of chunk is located by reading lines until there is a switch in the read id, whereupon we revert one line. For parsing, we yield all the different locations a given read aligns to as a single entry.

Hopefully these examples show you how the parent class can easily be expanded upon for most file types. Let's now combine these various chunkers with the code from the previous post to show how we can enable multicore parallel processing of the chunks they yield. The code below contains a few generalised wrapper functions which work in tandem with any of the above chunkers to allow most tasks to be parallelised.

import multiprocessing as mp, sys

def _printMP(text):
    print text
    sys.stdout.flush() #flush so progress messages appear immediately under multiprocessing

def _workerMP(chunk,inFile,jobID,worker,kwargs):
    _printMP("Processing chunk: "+str(jobID))
    output = worker(chunk,inFile,**kwargs)
    _printMP("Finished chunk: "+str(jobID))
    return output	

def main(inFile,worker,chunker=Chunker,cores=1,kwargs={}):
    pool = mp.Pool(cores)

    jobs = []
    for jobID,chunk in enumerate(chunker.chunkify(inFile)):
        job = pool.apply_async(_workerMP,(chunk,inFile,jobID,worker,kwargs))
        jobs.append(job) #keep the handle so the result can be collected below

    output = []
    for job in jobs:
        output.append( job.get() )

    return output

The main function should be recognisable as the code from the previous post. It generates the pool of workers, ideally one for each core, before using the given chunker to split the corresponding file into a series of chunks for processing. Unlike before, we collect the output given by each job and return it after processing is finished. The main function acts as a wrapper allowing us to specify different processing functions and different chunkers, as given by the variables worker and chunker respectively. We have wrapped the processing function call within the function _workerMP, which prints to the terminal as tasks are completed. It uses the function _printMP to do this, as you need to flush the terminal after a print statement when using multicore processing, otherwise nothing appears until all tasks are completed.

Let’s finish by showing an example of how we would use the above to do the same task as we did at the start of this post, counting the sequences within a fasta file, using the base chunker:

def seq_counter(chunk,inFile):
    data = Chunker.read(inFile,chunk)
    return data.count('>')

and using the FASTA chunker:

def seq_counter_2(chunk,inFile):
    data = list(Chunker_FASTA.parse(Chunker_FASTA.read(inFile,chunk)))
    return len(data)

And the time they take to count the sequences within the 14GB file from before:

>>> os.path.getsize("SRR951828.fa")
>>> x=time.time();f = open("SRR951828.fa");sum([l.count(">") for l in f]);time.time()-x
>>> x=time.time();f = open("SRR951828.fa");sum([c.count(">") for c in iter(lambda:*1024),"")]);time.time()-x
>>> x=time.time();sum(main("SRR951828.fa",seq_counter,cores=8));time.time()-x
>>> x=time.time();main("SRR951828.fa",seq_counter_2,Chunker_FASTA,8);time.time()-x

Let's ignore that last one, as the slowdown is due to turning the entries into Biopython SeqRecords. The prior one, which combines chunking and multicore processing, gives roughly a factor of 50 speed-up. I'm sure this time could be further reduced using more cores and/or optimising the chunk size; however, this difference alone can change something from being computationally implausible to plausible. Not bad for only a few lines of code.

Anyway, as before, I hope that some of the above was either new or perhaps even helpful to you. If you know of a better way to do things (in python), then I'd be very interested to hear about it. If I feel like it, I may follow this up with a post about how to integrate a queue into the above, which outputs the result of each job as it is produced. In the above, we currently collate all the results in memory, which has the potential to cause a memory overflow depending on what is being returned.