
Yet another high-profile paper about yet another epigenome that says the same thing, yet again.

A recent study developed a new way of probing the genome for “accessible” sites, and found the same things that we’ve been seeing for years.

The new method, called ATAC-seq, uses a transposase that preferentially inserts sequencing adapters into accessible regions of the genome, then sequences outward from those insertion points. Researchers applied this method to single cells, sorted into individual wells via microfluidics, and… didn’t find very much.

Combining high-throughput DNA sequencing methods with microfluidics to get a “single cell genome” of sorts is a hot topic right now. It seems promising, but sequencing modalities are inherently noisy. There are random fluctuations in which molecules get sequenced (basically, sampling bias), and these are compounded by consistent biases due to chemistry. That noise washes out, and averages out, when you have two million copies of every gene to go from. But it’s a real problem when you have a single genome — two copies of each region of interest, which means the “maximum” signal is 2, the “minimum” is 0, and there is only one intermediate value. Even a little bit of noise makes a big difference there.

The authors don’t really propose anything new to deal with this noise, other than averaging over genomic distance and over many cells: meaning that the big advantage of using the microfluidics to get “single-cell” data here is that your mean also comes with a standard deviation. You can’t say much from any individual cell; you can still only interrogate cells on average. To be fair, having a standard deviation matters enough that most of the findings of the paper were based on leveraging the variance rather than the mean. So, okay.
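To see just how little signal two copies per cell can carry, here’s a quick back-of-the-envelope simulation. This is my sketch, not anything from the paper; the capture probability and the fraction of open alleles are made-up numbers for illustration:

```python
import random

random.seed(0)

CAPTURE_PROB = 0.5        # hypothetical chance a truly open allele gets sequenced
TRUE_OPEN_FRACTION = 0.8  # hypothetical fraction of alleles that are actually open

def cell_signal():
    """Signal at one site in one diploid cell: 0, 1, or 2 captured alleles."""
    return sum(
        1
        for _ in range(2)  # two copies of each locus per cell
        if random.random() < TRUE_OPEN_FRACTION * CAPTURE_PROB
    )

# Any single cell can only report 0, 1, or 2 at a given site, which is far
# too coarse to estimate an accessibility level on its own...
one_cell = cell_signal()
assert one_cell in (0, 1, 2)

# ...but averaging over many cells recovers the underlying rate, and also
# hands you a standard deviation across cells for free.
signals = [cell_signal() for _ in range(100_000)]
mean = sum(signals) / len(signals)
var = sum((s - mean) ** 2 for s in signals) / (len(signals) - 1)
print(f"mean signal per cell: {mean:.2f} "
      f"(expected {2 * TRUE_OPEN_FRACTION * CAPTURE_PROB:.2f})")
print(f"std dev across cells: {var ** 0.5:.2f}")
```

The point is exactly the one above: the per-cell readout is hopelessly quantized, and all the usable information lives in the mean and variance across the population.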

My biggest problem, though, wasn’t the oversell on getting ‘single cell’ data from this. My biggest problem was that they didn’t see anything biologically interesting: “accessibility” isn’t a fundamentally new way of looking at the genome, and this study didn’t find anything fundamentally exciting and new in their analysis. I spent a long time wondering how to write about this because I couldn’t distill it down into one cool, big finding. There isn’t one. They see roughly the same things that we have always seen or would expect to see: cell-type-specific transcription factors, whose levels vary widely between cells, have binding sites that vary greatly in accessibility between cells. This might be because in some cells the transcription factor is present and binding, and in others it isn’t. General transcription factors, whose levels do not vary as widely, have sites that do not vary greatly in accessibility. By looking at which clusters of sites tend to vary in accessibility together, they found a pattern of blocks of open chromatin and closed chromatin which neatly replicated other studies that had seen blocks of open chromatin and closed chromatin.
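That co-variance analysis, grouping sites whose accessibility rises and falls together across cells, can be sketched with toy data. Everything here (the two-block structure, the 10% per-site noise rate) is invented for illustration and has nothing to do with the paper’s actual pipeline:

```python
import random

random.seed(1)

N_CELLS = 500

def simulate():
    """Toy cells-by-sites accessibility matrix with two co-varying blocks.

    Sites 0 and 1 share one hidden open/closed state per cell; sites 2 and 3
    share another. Each site mostly follows its block, plus per-site noise.
    """
    cells = []
    for _ in range(N_CELLS):
        block_a_open = random.random() < 0.5  # shared state for sites 0 and 1
        block_b_open = random.random() < 0.5  # shared state for sites 2 and 3
        row = []
        for block_open in (block_a_open, block_a_open, block_b_open, block_b_open):
            # flip the block's state 10% of the time to model per-site noise
            row.append(1 if (block_open != (random.random() < 0.1)) else 0)
        cells.append(row)
    return cells

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

cells = simulate()
cols = list(zip(*cells))                 # transpose to site-by-cell columns
within = correlation(cols[0], cols[1])   # same block: strongly correlated
between = correlation(cols[0], cols[2])  # different blocks: near zero
print(f"within-block r = {within:.2f}, between-block r = {between:.2f}")
```

Clustering sites on a correlation matrix like this is how you end up with “blocks” of chromatin that open and close together, which is why finding such blocks mostly confirms the structure you already expected.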

Don’t get me wrong: this is a good piece of work, and my sticking point is by no means limited to this work or these researchers. ATAC-seq, and transposase-mediated methods generally, seem like they would improve high-throughput sequencing studies. Most studies built around high-throughput sequencing of DNA fragments require cutting the DNA into small pieces, and then gluing adapters (which can guide the sequencing reaction) onto the ends of those pieces. Methods like ATAC-seq do the fragmentation and the adapter insertion in a single enzymatic step, which makes them more straightforward and more reliable. In the next few years, I can see transposases replacing DNases for much of sequencing library preparation, and reducing the amount of time and effort it takes to do high-throughput sequencing studies, which would be great. But while the technical improvement noted in this study was significant, the biology they observed was all stuff we’ve seen already. As a paper demonstrating an improved way to study chromatin accessibility, it’s great. As a paper demonstrating principles of chromatin accessibility, it doesn’t go very far. If it were limited to this one paper, it wouldn’t be an issue at all. But this kind of paper is more and more common: another massive data set that fewer and fewer people can understand, published even though it lacks much by way of new, interesting biology.

Things that would have been cool, but I didn’t see: a way to turn this data into a testable model of the shape of the chromosome. A way to multiplex this analysis with RNA sequencing or RT-PCR, to see if more “open” chromatin is actually more transcribable in any individual cell. A measure of whether the specific cells with higher levels of a transcription factor were actually more “accessible” at the binding sites of that transcription factor, or a way to start teasing apart the tricky, important chicken-and-egg question of which happens first: DNA becoming accessible to proteins, or proteins binding to DNA. (Perhaps even a way to identify so-called “pioneer factors” that can reorganize chromatin, as opposed to more mundane transcription factors that cannot.) Most importantly: a single, overarching hypothesis, and an elegant experiment to back up the biology.

I’ve been seeing a lot of studies like this recently. I’ve been pulled into a couple. I want to start fighting back against it, but it’s hard to see how. Right now in biology we’re in love with our tools: every grad student and postdoc is told to make a tool to make their name. It’s worked for many of our mentors, so why change it? Mostly, because I want to study biology, not study how to build tools to study biology. We’re swimming in so much data that it can be hard to make heads or tails of any of it, and it seems like some of the best scientists — or at least the most successful ones — get ahead by adding to that cacophony more than by clarifying it. It’s only made worse by the fact that most readers, most reviewers, and even many authors don’t understand the statistics they need to analyze their own data. I’d love to see a return to simple, elegant experiments with clear results used in tandem with these “next-generation” techniques. But as long as we remain enamored with new techniques and big data, and as long as we don’t understand enough math and statistics to interpret that data, we’ll be hard-pressed to produce anything but more undecipherable cacophony.

Featured image derived from “Nucleosome1” by Thomas Splettstoesser. Licensed under CC BY-SA 3.0 via Wikimedia Commons


Elizabeth Finn


Elizabeth is a geneticist working for a shady government agency and therefore obliged to inform you that all of the views presented in her posts are her own, and not official statements in any capacity. In her free time, she is an aerialist, a dancer, a clothing designer, and an author. You can find her on tumblr, on twitter at @lysine_rich, and also on facebook or google+.
