Identifying and Overcoming Skepticism about Biomedical Computing
Modelers should take the lead.
Many collaborators[1] with whom modelers[2] work have little or no training in modeling[3], so it is natural that they may be cautious, intimidated, or uninterested; such attitudes give rise to skepticism[4]. Although, ideally, collaborators could learn more about modeling, it is understandable that they don't: they are busy keeping up with their own rapidly changing specialty fields, lack the time, or are simply not interested. Identifying and overcoming such skepticism is important if biomedical computing is to be of greater value to society, and so I would like to suggest here that we, the modelers, take the lead in addressing and reducing it.
I hope we can agree that there really is no well-established "modeling community." Typically, modelers are renegade individualists who are fuzzy members of the fuzzy subsets of different modeling disciplines such as computer science, statistics, bioinformatics, analytics and others. It would be helpful if these renegades would transcend their silos, overcome their self-oriented competitive urges and establish more cooperative relationships with one another and with their collaborators. This objective has motivated the NIH Biomedical Computing Interest Group (BCIG) since its inception 10 years ago. BCIG's mission is to encourage, support and promote good and appropriate computing methodology and technology in all aspects of biomedical research, development, and patient care; it is open to everyone with an interest in this mission. I propose that we form other, geographically distributed BCIG groups and network them electronically. Are you interested? I would be happy to help facilitate this.
Conference participation, tutorial production and distribution, crowdsourcing and multi-institutional team building are examples of what we can do to improve relationships and extend computational methodology choices and accessibility. For example, BCIG is helping to formulate a panel for a workshop on “Proper Methods for Evaluating Performance of Computational Intelligence Methods and How to Encourage Use of these Evaluation Methods.” This workshop has been proposed for the 2012 World Congress on Computational Intelligence. As another example, BCIG is about to put in place a mechanism for modelers who subscribe to BCIG to brainstorm on broad biomedical computing topics—a kind of local crowdsourcing operation. The first topic will be “Machine Learning and Statistics: the Interface.”
We modelers also need to integrate and standardize our style of thinking as well as our terminology and nomenclature. Many other fields do this as they begin to mature. Statisticians, computer scientists and bioinformaticians think differently from one another. Even within modeler subgroups, individuals think differently about their approaches to modeling. We need to focus on concept consilience and common ontologies!
Modelers should try to convince collaborators that modeling is meaningful even when the models are imperfect. The key is to demonstrate success in significant collaborative biomedical projects, in particular (given current priorities) in translational medicine projects, i.e., projects whose results have a direct and important positive impact on health care. Although many collaborators may not be skeptical per se, some fail to see the value of using modeling in their fields. This is a challenge modelers can meet by exploring those fields and finding better ways to introduce modeling. I can point to several examples where computational modeling demonstrated the potential to have significant impact on medicine and biology, particularly with respect to translational medicine. For example, I have developed methodologies that predict glucose tolerance test results, breast cancer, and adverse drug reactions with accuracies suitable for clinical use.
Unfortunately, there have been cases in which modeling has produced overhyped, misleading, or flawed outcomes. Years ago a modeler claimed that his artificial neural network (ANN) program could predict whether a patient presenting at an emergency room with certain symptoms and findings should be admitted to the ICU, and that it could outperform human experts at the same task. But his performance statistics were computed only on the data used to train the ANN; he had no hold-out data for testing and validation. This is the kind of ill-designed, hyped work that gives modeling a bad name. Like all good science, modeling needs sound statistical oversight, which includes proper testing and validation, yet modelers often neglect this. We must correct it. When proper testing and validation are missing, it lends strong support to certain groups (e.g., certain fundamentalist, turf-protecting statisticians) who feel that these newfangled tools from computer science threaten their professional identity. Computer scientists and other modelers must learn to validate their models properly, according to standards set by good classical statistical methodology. I know of other horror stories of modeling misuse; in one, a physician used evolutionary computing to fit data in an application where simple linear regression would have sufficed.
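The hold-out principle that the ANN story violated can be demonstrated in a few lines. The sketch below is a hypothetical illustration only (synthetic two-cluster data and a toy 1-nearest-neighbor classifier, not any published model): accuracy measured on the training set is perfect by construction, while accuracy on a held-out split gives the honest estimate of how the model will perform on new patients.

```python
import random

random.seed(0)

def make_point(label):
    # Two noisy, overlapping clusters centered at 0.0 and 1.0
    return (random.gauss(label, 0.8), label)

data = [make_point(lbl) for lbl in (0, 1) for _ in range(100)]
random.shuffle(data)

split = int(0.7 * len(data))          # 70/30 train/hold-out split
train, test = data[:split], data[split:]

def predict(x, training_set):
    # 1-nearest-neighbor: copy the label of the closest training point
    return min(training_set, key=lambda p: abs(p[0] - x))[1]

def accuracy(dataset, training_set):
    hits = sum(predict(x, training_set) == y for x, y in dataset)
    return hits / len(dataset)

# Every training point's nearest neighbor is itself, so this is 1.0:
print("training accuracy:", accuracy(train, train))
# The hold-out split reveals the true, lower generalization accuracy:
print("hold-out accuracy:", accuracy(test, train))
```

Reporting only the first number, as the ANN modeler did, makes any sufficiently flexible model look flawless; the second number is the one that matters clinically.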
I have suggested here that we, the modelers, take the lead in addressing skepticism associated with biomedical computing and that we do what we can to reduce it. I have suggested several specific things we can do in this regard, namely (1) create other BCIG groups like the NIH BCIG and network them, (2) engage in conference participation, tutorial production and distribution, crowdsourcing and multi-institutional team building, (3) integrate and standardize our style of thinking, our terminology and our nomenclature, (4) demonstrate success in projects, particularly in translational medicine projects, and (5) avoid overhyped, misleading, and flawed outcomes.
1. Physicians, biologists and others who work in biomedical research and health care delivery
2. Fuzzy heterogeneous collection of individuals who work with all types of computational tools used under the general rubric “biomedical computing”
3. Developing algorithms and computer programs to solve specific problems
4. Any questioning attitude towards knowledge, facts, or opinions stated as facts, or any doubt regarding claims
Jim DeLeo has been a computer scientist for over 40 years, during which he has designed, developed and implemented new and innovative computational solutions to medical, space exploration and defense problems. Presently at the NIH, he works collaboratively with most of the NIH institutes and centers, other government agencies, universities and industry. His current work is inspired by the NIH Roadmap translational medicine theme and is directed toward building intelligent computational systems that have practical impact in improving patient care.