Healthcare professionals and AI experts alike are warning about issues they've identified in machine learning models built to diagnose COVID-19.
The idea behind these models was to help health professionals tell the difference between COVID-19 and similarly presenting ailments like pneumonia.
But concerned professionals say changes need to be made before these models are used in a clinical environment.
More from a recent VentureBeat article:
Of those 62 papers included in the analysis, roughly half made no attempt to perform external validation of training data, did not assess model sensitivity or robustness, and did not report the demographics of people represented in training data.
“Frankenstein” datasets, the kind made with duplicate images obtained from other datasets, were also found to be a common problem, and only one in five COVID-19 diagnosis or prognosis models shared their code so others can reproduce results claimed in literature.
“In their current reported form, none of the machine learning models included in this review are likely candidates for clinical translation for the diagnosis/prognosis of COVID-19,” the paper reads. “Despite the huge efforts of researchers to develop machine learning models for COVID-19 diagnosis and prognosis, we found methodological flaws and many biases throughout the literature, leading to highly optimistic reported performance.”
Publicly available datasets also commonly suffered from lower quality image formats and weren’t large enough to train reliable AI models.
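To make the "Frankenstein dataset" problem concrete: when several public image collections are merged, the same scan can end up in the combined set twice, and later in both the training and test splits. A minimal sketch of catching exact duplicates before splitting is below; this is my own illustration, not from the article, and the `find_duplicates` helper, file names, and byte strings are all hypothetical.

```python
import hashlib

def find_duplicates(images):
    """Group image names by a hash of their raw bytes.

    `images` maps a file name to its contents; any hash shared by
    two or more names indicates an exact duplicate.
    """
    seen = {}
    for name, data in images.items():
        digest = hashlib.md5(data).hexdigest()
        seen.setdefault(digest, []).append(name)
    return {h: names for h, names in seen.items() if len(names) > 1}

# Toy example: two public "cohorts" merged, one scan appears in both.
merged = {
    "cohort_a/scan_001.png": b"scan-bytes-1",
    "cohort_a/scan_002.png": b"scan-bytes-2",
    "cohort_b/scan_104.png": b"scan-bytes-1",  # same bytes as scan_001
}
print(find_duplicates(merged))
```

A content hash only catches byte-identical copies; re-encoded or cropped duplicates would need perceptual hashing, but the basic hygiene step is the same.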
How does that old expression go? "The problem with computers is that they do exactly what you tell them to do."
I love that saying because, even as AI grows to the point of teaching itself with less human intervention, it's still a glorified computer.
After all, a model is only as good as the data fed to it, and cases like this underline just how exacting a science machine learning is.
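That "only as good as the data" point is exactly why duplicates across train and test splits lead to the "highly optimistic reported performance" the paper describes. A toy sketch, entirely my own and not from the article: a model that merely memorizes its training examples looks perfect when the test set leaks duplicates, and far worse on genuinely unseen data.

```python
def evaluate(train_set, test_set):
    """Score a memorizing 'model': if the exact features were seen in
    training, predict their stored label; otherwise fall back to 0."""
    lookup = dict(train_set)  # features -> label
    correct = sum(lookup.get(x, 0) == y for x, y in test_set)
    return correct / len(test_set)

train = [((0, 1), 1), ((1, 0), 0), ((1, 1), 1)]
leaky_test = train[:2]                    # duplicates of training data
clean_test = [((0, 0), 1), ((2, 2), 0)]   # genuinely unseen examples

print(evaluate(train, leaky_test))  # 1.0 -- leakage inflates the score
print(evaluate(train, clean_test))  # 0.5 -- the honest number
```

Real diagnosis models generalize better than a lookup table, of course, but the direction of the bias is the same: any overlap between training and evaluation data pushes the reported score up.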