Three thoughts on Dr. Google


Earlier this month Verily released an interesting piece of software that, trained on retinal scans from roughly 300,000 patients, identifies several cardiovascular covariates (such as age, blood pressure, and smoking status) with surprising accuracy. These features are fed into a model that yields the probability of suffering a major cardiac event. Although we’ve seen several versions of medical algorithms that aim to supplement or (gasp) replace physicians, this tech is unique in that it’s trying to repurpose existing data. Furthermore, the combination of a massive clinician shortage and cardiovascular disease being the leading cause of death worldwide means huge potential for disruption.
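To make that two-stage pipeline concrete, here’s a minimal sketch. This is not Verily’s actual system; the covariates, outcomes, and model choice are all stand-ins for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients = 1000

# Stage 1 (stand-in): in the real system, a deep network predicts
# covariates such as age, blood pressure, and smoking status from a
# retinal scan. Here we fake those predictions with random draws.
covariates = np.column_stack([
    rng.normal(55, 10, n_patients),   # predicted age
    rng.normal(120, 15, n_patients),  # predicted systolic BP
    rng.integers(0, 2, n_patients),   # predicted smoking status
])

# Stage 2: a simple risk model maps the predicted covariates to the
# probability of a major cardiac event (outcomes faked here too).
had_event = (rng.random(n_patients) < 0.1).astype(int)
risk_model = LogisticRegression().fit(covariates, had_event)

print(risk_model.predict_proba(covariates[:5])[:, 1])  # per-patient risk
```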

Although Google made the headlines, the unglamorous underbelly of this technology is its dependence on agile and interoperable hospital systems, and both fronts have made tremendous progress over the past decade (Thanks, Obama). Google has deep learning experts and world-renowned scientists, but at the end of the day any team’s output is a function of data quality and quantity. Open-source, clean, and anonymized medical records are critical. We have a long way to go until CSV heaven.

Finally, with new machine learning applications arriving left and right, when are we actually going to start trusting these things? Obviously, this method needs to be thoroughly validated before it can be used in a clinical setting, but when is “good” good enough? The traditional Systematic COronary Risk Evaluation (“SCORE”) method of predicting risk is correct 72% of the time, while Google’s was correct 70% of the time. How much strategic risk are health care organizations willing to take on for innovative technologies or increased productivity?
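For intuition on what those percentages measure: the reported figures are AUCs (0.72 vs. 0.70), and an AUC is the probability that a randomly chosen patient who went on to have an event is ranked as higher-risk than one who did not. A small synthetic check, with all data invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic cohort: 1 = had a major cardiac event.
labels = (rng.random(1000) < 0.15).astype(int)
# A hypothetical risk score that is informative but imperfect.
scores = labels * 0.8 + rng.normal(0, 1.0, 1000)

auc = roc_auc_score(labels, scores)

# AUC equals the fraction of (event, no-event) pairs where the
# event patient received the higher risk score.
pos, neg = scores[labels == 1], scores[labels == 0]
pairwise = (pos[:, None] > neg[None, :]).mean()
print(f"AUC: {auc:.3f}  pairwise rank agreement: {pairwise:.3f}")
```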

I sure as hell don’t have the answers, but the first step is asking the right questions.
