Assembling the 3-D Genome: A Puzzle with Many Solutions

Using computational approaches to assemble plausible 3-D structures

As a result of experimental techniques developed about a decade ago, researchers now have data that can be used to reconstruct how the genome is arranged inside the nucleus. This 3-D structure likely plays a role in determining cellular function by affecting cells’ ability to access, read and interpret genetic information.


Stem Cell (Re)Programming: Computing New Recipes

Leveraging big data, modeling, and computational biology to create new protocols

Most scientists seeking to turn back adult cells’ developmental clocks rely on go-to recipes that—when followed just right—will yield stem cells. A dash of one reprogramming factor, a sprinkle of another, and let the mixture stew. Likewise, when researchers want stem cells to remain stem cells or, alternatively, want to coax them down a particular developmental pathway, they have cocktails they turn to. Most of these recipes were concocted by trial and error over the past few years and then passed between labs.

Welcome to the New Biomedical Computation Review

For nearly ten years, this magazine has been published by Simbios (under principal investigator [PI] Russ Altman) as part of the National Institutes of Health’s National Center for Biomedical Computing (NCBC) program. With the end of that program last summer, the magazine faced an uncertain future. But it has gained new life with the support of the Mobilize Center (under PI Scott Delp) as part of BD2K.


Building a Biomedical Data Ecosystem

This issue of the Biomedical Computation Review features the Centers of Excellence for Big Data Computing. These 12 Centers, funded by the NIH’s Big Data to Knowledge (BD2K) initiative, have been established on the principle that we must be united in our efforts to accelerate the translational impact of big data on human health.


Mutual Information: A Universal Measure of Statistical Dependence

And how mutual information is useful in Big Data settings

A deluge of data is transforming science and industry. Many hope that this massive flux of information will reveal new vistas of insight and understanding, but extracting knowledge from Big Data requires appropriate statistical tools. Often, very little can be assumed about the types of patterns lurking in large data sets. In these cases it is important to use statistical methods that do not make strong assumptions about the relationships one hopes to identify and measure.
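To make the idea concrete, here is a minimal sketch (not from the article) of estimating mutual information between two discrete variables from their empirical joint distribution. Unlike Pearson correlation, which only detects linear relationships, mutual information is nonzero for any statistical dependence; the example uses a parabolic relationship, for which correlation is zero but mutual information is not.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate mutual information I(X;Y) in bits from paired samples,
    using empirical (plug-in) probabilities for discrete values."""
    n = len(xs)
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# A nonlinear (parabolic) relationship: Pearson correlation is ~0,
# but mutual information reveals that Y is fully determined by X.
xs = [-2, -1, 0, 1, 2] * 20
ys = [x * x for x in xs]
print(round(mutual_information(xs, ys), 3))  # → 1.522 bits

# By contrast, a constant Y is independent of X, so I(X;Y) = 0.
print(mutual_information([0, 1] * 10, [0] * 20))  # → 0.0
```

The plug-in estimator shown here is biased upward for small samples; practical Big Data tools use corrected or adaptive estimators, but the quantity being estimated is the same.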
