Lessons from ENCODE
 
Nathan S. Blow, PhD.
Editor-in-Chief, BioTechniques
BioTechniques, Vol. 53, No. 4, October 2012, p. 203

In case you missed it, the ENCODE project made a big splash in September, with 30 papers detailing the results published simultaneously across a number of journals. These results, and the technology developed to enable the project, have led to new insights into gene regulation and genome structure/function, and will undoubtedly lead to further insights as more researchers begin to dive into the large treasure trove of data that the members of the ENCODE consortium have produced.

At its heart, the ENCODE project, launched in 2003 by the National Human Genome Research Institute (NHGRI), is an effort to catalog and characterize all the functional elements present in the human genome, including regulatory elements. More than 400 researchers from 32 labs around the globe have contributed to the ENCODE project through various studies since its inception.

While the studies published in early September examined a range of genomic elements, the finding that elicited the strongest response from scientists and the general public seemed to be the report that biochemical function could be assigned to 80% of the human genome, although only 3% of the genome codes for proteins.

This finding has raised questions about what the commonly used term "junk DNA" really means. Junk DNA has often been the moniker given to those regions of the genome that were neither protein coding nor regulatory — regions where, at the time, scientists could not attribute a function. ENCODE's announcement that 80.4% of the genome could be assigned at least one biochemical function led to a wave of news reports declaring that junk DNA was no more. But the response from several scientists outside ENCODE has been "not so fast."

The issue many scientists have with the ENCODE prediction is that it contrasts with established theories of genome evolution. By some accounts, up to 50% of the human genome consists of inactive transposons and other transposable elements — a quandary if 80% of the genome is functional. Nor does the generally accepted mutation rate square with an 80% functional genome. In the end, the true issue likely comes down to the terms "functional DNA" and "junk DNA." Not all non-coding DNA is junk DNA (telomeres, for example, are obviously critical functional elements), making it important to arrive at a better definition of the word "functional." Another issue the ENCODE results raise is the implication of such a large percentage of functional DNA for the so-called "C-value paradox" — the inconsistency observed between genome size and organism complexity. If the results from ENCODE are indicative of a wider trend among organisms in terms of genome functionality, then the question of why the lungfish has a genome 40× the size of a human's while that of a pufferfish is 1/10 the size cannot be as easily explained.

In the end, this phase of the ENCODE project has accomplished a monumental feat by creating a resource that will fuel the research of biologists and genome scientists for years to come. It has also opened the door to an important discussion of the relationships between genome structure/function and evolution. Press coverage declaring that junk DNA was no more was definitely premature — junk DNA will be an even bigger topic in the months and years to come.

As always, please share your thoughts with us by posting at our Molecular Biology Forums under “To the Editor” (http://molecularbiology.forums.biotechniques.com) or sending an email directly to the editors ([email protected]).