Using computational approaches to assemble plausible 3-D structures
As a result of experimental techniques developed about a decade ago, researchers now have data that can be used to reconstruct how the genome is arranged inside the nucleus. This 3-D structure likely plays a role in determining cellular function by affecting cells’ ability to access, read and interpret genetic information.
Leveraging big data, modeling, and computational biology to create new protocols
Most scientists seeking to turn back adult cells’ developmental clocks rely on go-to recipes that—when followed just right—will yield stem cells. A dash of one reprogramming factor, a sprinkle of another, and let the mixture stew. Likewise, when researchers want stem cells to remain stem cells or, alternatively, when they want to coax them down a particular developmental pathway, they have cocktails they turn to. Most of these recipes were concocted through trial and error over the past few years and have since been passed between labs.
Bringing big data to gait analysis
For nearly ten years, this magazine has been published by Simbios (under principal investigator [PI] Russ Altman) as part of the National Institutes of Health’s National Center for Biomedical Computing (NCBC) program. With the end of that program last summer, the magazine faced an uncertain future. But it has gained new life with the support of the Mobilize Center (under PI Scott Delp) as part of BD2K.
This issue of the Biomedical Computation Review features the Centers of Excellence for Big Data Computing. These 12 Centers, funded by the NIH’s Big Data to Knowledge Initiative (BD2K), have been established on the principle that we must be united in our efforts to accelerate the translational impact of big data on human health.
Visualizing complex molecular systems
And how mutual information is useful in Big Data settings
A deluge of data is transforming science and industry. Many hope that this massive flux of information will reveal new vistas of insight and understanding, but extracting knowledge from Big Data requires appropriate statistical tools. Often, very little can be assumed about the types of patterns lurking in large data sets. In these cases it is important to use statistical methods that do not make strong assumptions about the relationships one hopes to identify and measure.
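Mutual information is one such assumption-free measure: it quantifies how much knowing one variable reduces uncertainty about another, without presuming a linear (or any particular) form of relationship. The article does not specify an estimator, but the idea can be sketched with a simple plug-in estimate from paired samples (the function name and toy data below are illustrative, not from the source):

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples.

    Uses empirical frequencies: I = sum_{x,y} p(x,y) log2[ p(x,y) / (p(x)p(y)) ].
    """
    n = len(xs)
    px = Counter(xs)          # marginal counts of X
    py = Counter(ys)          # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p(x) * p(y)) simplifies to c * n / (count_x * count_y)
        mi += p_joint * math.log2(c * n / (px[x] * py[y]))
    return mi

# Perfectly dependent pair: Y = X for a fair coin, so I(X;Y) = H(X) = 1 bit
xs = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(xs, xs))                    # → 1.0

# Independent pair: joint factorizes, so I(X;Y) = 0 bits
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0
```

Because it makes no assumptions about the shape of the dependence, mutual information can flag nonlinear or non-monotonic relationships that correlation-based measures miss—one reason it is attractive in large, heterogeneous data sets.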
Experts reflect on challenges identified ten years ago
The first issue of this magazine (June 2005) featured a story called “Top Ten Challenges of the Next Decade” written by Eric Jakobsson, PhD, who had recently left his position as Director of the Center for Bioinformatics and Computational Biology in the National Institute of General Medical Sciences (NIGMS) at the National Institutes of Health (NIH).