Repost By: Matthew Sappern, CEO at PeriGen – via LinkedIn

Machine Learning Myths in Healthcare

In full transparency, and admitting to more than a hint of plagiarism, my comments in this post are stimulated/lifted/inspired by conversations I have had since HBR published the article below, “3 Myths About Machine Learning in Healthcare,” written by @jhalamka @DerekAHaas @EricMakhniMD and @JoeschwabMD.

As CEO of a leading healthcare AI firm with significant published outcome studies, positive cash flow, and some not-so-fun lessons learned along the way, I wanted to add to this article (though I wasn't invited to)! The authors effectively rebut three pervasive myths, but I hope to expand on their message:

3 Myths about Machine Learning in Health Care – Harvard Business Review

1: Machine learning can do much of what doctors do. My first addition is to include nurses here, since the majority of treatment depends on nurses and since I have had MANY nurses voice concern about being replaced. That simply won’t happen. As the authors point out, treatment will always have an element of caregiver/patient joint decision making. The reality is more accurately summed up by a physician I recently saw present: “Doctors (and nurses) won’t be replaced by technology; they will be replaced by doctors (and nurses) who understand how to work better by using technology.” The fact is that caregivers cannot possibly assess the massive amounts of data available in real time without help. AI is perfectly suited to ongoing, laborious, detailed analysis and to pointing out troubling inconsistencies.

2: Big data + brilliant data scientists = SUCCESS. As the authors point out, this is a faulty equation: much of the data currently in medical records is subjective, originated by a single clinician. The reality is that data used for machine learning needs to be skillfully curated and contextualized by providers – a collection of them. Anyone who has participated in chart reviews sees a wide range of data quality and the occasional impossibility! Simply dumping this data on data experts without clinical context will teach the model fallacies and end up negatively impacting care.

3: Proven algorithms will get used. Nope. One could build a magic algorithm that, when applied, hastens world peace, but if it does not fit within an established workflow, its chances of adoption are slim to none. At PeriGen, we really learned this one the hard way. Some customers had bad outcomes, and we learned they had not activated the AI analytics because the analytics fell outside their standard workflow. We proved retrospectively that the AI would have alerted them to clinical warning signs in a more timely, data-driven fashion. Since then, we have reworked our entire platform, and adoption has skyrocketed! I believe we set ourselves back 18-24 months, and given that we currently help protect over 500k births each year, that was a big miss.

Bonus Myth Debunked: AI will always be right. Manage your expectations here, folks. The best we can all currently hope for is to equal an expert perspective, BUT delivered in an efficient, consistent, and scalable way. I have had a nurse point to a single fetal heart rate reading over the course of four hours and argue that the AI categorization of that single heartbeat was wrong – one moment over four hours. But the assessment of the entire labor was quite accurate and effective in helping the clinicians manage the labor. Experts occasionally have a miss. BUT, across the vast majority of moments and data points processed, these systems will be right.

I want to thank the authors above for investing their time, reputations, and pulpits to advance this discussion. Given the growth and aging of populations worldwide, it is critical that healthcare accelerate the adoption of carefully designed tools.

Please share, comment, let me know where you agree or disagree. We are learning together.