What Value Could Fractals Add to Biomedical Image Analysis?

Could fractals join the collection of mathematical gems that propel biomedical image analysis to new heights?  

We collect large amounts of biomedical image data, hoping to glean insights into our biological world. While deep learning has become popular for finding features that, for example, distinguish between benign and malignant tumors in biomedical images, how these features relate to conclusions we care about remains a mystery hidden in a labyrinth of neural networks.


Tackling Tumors

Turning immune cells into cancer killers

Tumors often contain a hodgepodge of cells. Some cells have genetic glitches, others don’t; some obey normal growth rules, others divide out of control. Immune cells enter the mix as well, initially swooping in to reject the tumor as they would other foreign substances. Later, however, these same cellular sentinels may inexplicably let down their guard, allowing the cancer to gain a foothold.


Privacy-protecting analysis of distributed big data

A practical solution for sharing patient data while maintaining privacy protections.

Large clinical data research networks (e.g., PCORnet, HMORnet, ESPnet) have been established to accelerate scientific discovery and improve health. However, a major barrier to making full use of clinical data is the public’s concern that researchers’ access to demographics, diagnostic codes, genome sequences, and other sensitive records can pose risks to individual privacy, with potential implications for employment, security, and life and disability insurance.

Engineering the Learning Process

Leveraging Science and Technology for Effective Instruction

There is currently unprecedented interest in the potential of technology to transform learning. This buzz around technology and learning is especially loud in higher education, where pundits, entrepreneurs and academics offer outspoken predictions that technology-enhanced learning (TEL) will productively disrupt the sector by addressing long-standing structural issues and the dual challenges of cost and attainment.


Machine Learning using Big Data: How Apache Spark Can Help

Cleverly designed software makes applications running in clusters more fault-tolerant

Machine learning is the process of automatically building models from data. In the past two decades, researchers in many fields of study have been generating these models from progressively more data. Because this has led to higher-quality learned models, researchers are using ever-greater quantities of data, which in turn require increasingly complex distributed computing systems.

Beyond Principal Components Analysis (PCA)

Using low rank models to understand big data

In many application areas, researchers seek to understand large collections of tabular data, for example, patient lab test results. The values in the table might be numerical (3.14), Boolean (yes, no), ordinal (never, sometimes, always), or categorical (A, B, O). As a practical matter, some entries in the table might also be missing.
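Classic PCA applies only to complete, numerical tables, which is exactly why the column looks beyond it for Boolean, ordinal, categorical, and missing entries. For the numerical case, the core idea of a low rank model can be sketched in a few lines: the best rank-k approximation of a matrix comes from its truncated singular value decomposition (the Eckart–Young theorem). The synthetic "patient lab results" table below is illustrative only, not from the article.

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of a numeric matrix A, via the
    truncated SVD (Eckart-Young) -- the computation underlying PCA."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the k largest singular values/vectors.
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Toy table: 100 patients x 6 lab tests that actually vary along
# only 2 latent factors, plus a little noise (synthetic data).
rng = np.random.default_rng(0)
factors = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 6))
A = factors @ loadings + 0.01 * rng.normal(size=(100, 6))

A2 = low_rank_approx(A, 2)
rel_err = np.linalg.norm(A - A2) / np.linalg.norm(A)
print(rel_err)  # small: two components capture nearly all the variation
```

Generalized low rank models extend this recipe by swapping the squared-error loss for losses suited to Boolean, ordinal, or categorical columns, and by simply skipping missing entries in the objective.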


Mutual Information: A Universal Measure of Statistical Dependence

And how mutual information is useful in Big Data settings

A deluge of data is transforming science and industry. Many hope that this massive flux of information will reveal new vistas of insight and understanding, but extracting knowledge from Big Data requires appropriate statistical tools. Often, very little can be assumed about the types of patterns lurking in large data sets. In these cases it is important to use statistical methods that do not make strong assumptions about the relationships one hopes to identify and measure.
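The appeal of mutual information is exactly this assumption-freeness: unlike correlation, it registers any statistical dependence, linear or not. A minimal sketch of the standard plug-in estimator (bin both variables, then compute I(X;Y) = Σ p(x,y) log[p(x,y) / p(x)p(y)] from the joint histogram) is below; the data is synthetic and the bin count is an arbitrary illustrative choice, not a recommendation from the article.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Plug-in estimate of mutual information (in nats) between two
    samples, computed from their joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()               # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)     # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)     # marginal p(y)
    nz = pxy > 0                            # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
noise = rng.normal(size=5000)
print(mutual_information(x, x + 0.1 * noise))  # strongly dependent: large MI
print(mutual_information(x, noise))            # independent: near zero
```

In practice the simple histogram estimator is biased upward for finite samples, which is why Big Data settings favor more careful estimators; but the quantity being estimated is the same.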


Getting Started with Cloud Services for Biomedical Computation

How to tap into this cost-effective and flexible solution

Biomedical researchers who work with large data sets may run out of both disk space and patience while waiting for a computation to finish. Though buying more hard drives and faster computers may seem tempting, the cloud is now a realistic option.


In 2008, when cloud computing was relatively new, this magazine published a column by Alain Laederach predicting that scientists would be won over to cloud computing, despite some people’s concerns about a loss in performance with the added layer of virtualization.

