A quantitative way to measure targeted protein degradation

Whenever we order consumables in the Chemistry department, the whole lab gets an email notification once they arrive. So I can understand why I got some puzzled reactions from my colleagues when one such email arrived saying that my ‘artichoke’ was ready to collect from stores. Had I been sneakily doing my grocery shopping on a university research budget?

Artichoke is, in fact, the name of a plasmid designed by the Ebert lab (https://www.addgene.org/73320/), which I have been using in some of my research on targeted protein degradation. The premise is simple enough: genes for two different fluorescent proteins, one of which is fused to a protein-of-interest.

Specifically, this protein-of-interest is a so-called ‘degron’: a protein that can be degraded upon addition of a small molecule, such as a PROTAC (PROteolysis-TArgeting Chimera) or molecular glue. Once the plasmid is transfected into a human cell line, such as HEK293Ts or HeLas, the cells produce a red fluorescent protein (mCherry) and a green fluorescent protein (GFP) fused to the degron. If you incubate the cells with a small-molecule degrader, the level of GFP will drop as it gets degraded (thanks to its attached degron), but the mCherry level will stay the same.

What makes Artichoke and other similar plasmids neat is that the coding sequences for the two fluorescent proteins reside in the same gene, separated by an internal ribosomal entry site (IRES). This means that one piece of mRNA codes for both protein products, but the mechanics of translation are such that two separate proteins are made (NB there are other ways of achieving this, often more cleanly, but an IRES works fine for this purpose). As a result, the ratio of green to red fluorescence remains consistent across a population of transfected cells, regardless of overall transfection efficiency. Plot all the cells on a graph of green vs red, and you’ll see a nice clean gradient.
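
If it helps to see the arithmetic, here is a minimal sketch of fitting that gradient from per-cell fluorescence data. This is my own illustration rather than the Ebert lab’s actual analysis pipeline, and the array names are hypothetical; it just assumes you have matched mCherry and GFP intensities for each cell (e.g. from flow cytometry):

```python
import numpy as np

def gfp_mcherry_slope(mcherry: np.ndarray, gfp: np.ndarray) -> float:
    """Best-fit gradient of the GFP-vs-mCherry cloud, constrained through the origin.

    Because the IRES couples translation of the two proteins, each cell's
    GFP:mCherry ratio is roughly constant, so the population falls on a line
    through the origin; its slope tracks the level of the GFP-degron fusion.
    """
    # Least squares for y = m * x with no intercept: m = sum(x*y) / sum(x*x)
    return float(np.dot(mcherry, gfp) / np.dot(mcherry, mcherry))
```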

…that is until you add your degrader compound. After incubating the cells with an effective degrader, the green fluorescence dips but the red stays the same, so the gradient on our green-vs-red graph dips too. Normalise this against an untreated control, and what you get is a number between 1 (no degradation) and 0 (complete degradation) indicating how effective a degrader your compound is.
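
In code, that normalisation is a single division. Continuing the hypothetical sketch above (again, an illustration, not the published protocol):

```python
def degradation_score(treated_slope: float, untreated_slope: float) -> float:
    """Fraction of the GFP-degron fusion remaining after treatment:
    1.0 = no degradation, 0.0 = complete degradation."""
    return treated_slope / untreated_slope

# Hypothetical usage, with per-cell intensities from treated and control wells:
# score = degradation_score(gfp_mcherry_slope(mch_drug, gfp_drug),
#                           gfp_mcherry_slope(mch_ctrl, gfp_ctrl))
```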

What makes this so useful is that the change in gradient is much more quantifiable than, say, the change in intensity of a band on a western blot. Instead of just watching a band get fainter as degradation occurs, which is quite a hand-wavy way of measuring degradation effectiveness, this quantitative readout lets you home in on subtle differences in degrader effectiveness.

But of course, all this depends on being able to optimise the plasmid for your particular degron. The Ebert lab has already done such optimisation work on the zinc finger degron I’m interested in, resulting in another publicly available plasmid (https://www.addgene.org/74451/). Western blots may not give as slick a dataset, but for a novel degron system they have the advantage that all you need to start out with is the correct antibody, whereas you’ll have to test several different plasmid constructs with your degron before you can be certain that Artichoke will properly work its magic for you.
