AI: better than your doctor?

Adrien Foucart, PhD in biomedical engineering.


AI system is better than human doctors at predicting breast cancer (J. Hamzelou, NewScientist)
AI Now Diagnoses Disease Better Than Your Doctor, Study Finds (D. Leibowitz, Towards Data Science)
AI Can Outperform Doctors. So Why Don’t Patients Trust It? (C. Longoni and C. K. Morewedge, Harvard Business Review)
Image by Alex Knight from Pexels.

Machines have beaten us at chess and Go, they can drive our cars... and now they make better doctors? That last article, from Harvard Business Review, asks an interesting question: if machines can perform "with expert-level accuracy", why don't we trust them? In their study, the authors find that resistance to medical AI is driven by "a concern that AI providers are less able than human providers to account for consumers’ unique characteristics" [1]. The problem, then, would mostly be one of perception: AI can be better than doctors in general, but we fear that it might not be better for us personally.

It's really hard to have a purely "rational" (whatever that means) discussion about the merits of AI versus human doctors, and part of the problem lies in the terminology: "Artificial Intelligence". AI is a relatively vague label for a domain of computer science, but it is perhaps more importantly a term heavily associated with its historical use in science fiction and scientific speculation. In fiction, AI is often used as a device to explore what it means to be human. It almost always comes with an "artificial consciousness". An AI is a person - or at least it tries to be. It's a notion that we find both in stories where the AI is the "good guy" who just wants to be accepted into society, and in dystopian stories where an AI becomes "self-aware" and starts a war against humans.

So when we say "AI is better than humans at...", we think about those AIs: the self-aware machines. But modern AI, and in particular the kind of algorithms behind the "better than humans" headlines, has absolutely nothing to do with any of that. Deep Learning algorithms are tools which are entirely in the hands of the engineers and doctors who use them. They have no more "personality", "wants", or "conscience" than any other tool.

AI is "better than humans" at predicting breast cancer in the same sense that a bread-slicing machine is "better than humans" at slicing bread. In both cases, humans are in control of what goes in, what comes out, and how the machine is set up.

It doesn't really make sense to talk about "better than human" performance for machine learning systems, because everything that works or doesn't work in such algorithms can be traced back to humans. The engineers, computer scientists or mathematicians create a mathematical model which fully determines the range of data that the system can handle, and the kind of output that it can produce. Doctors and medical experts provide the data and annotations that will determine the parameters of the model, which only learns how to best reproduce what it has been shown.
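To make that division of roles concrete, here is a minimal sketch in Python with scikit-learn and entirely made-up numbers (not any real clinical system): the engineers pick the model and its input features, the experts provide the labels, and the fitted parameters do nothing more than encode how to reproduce those labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs chosen by the engineers: each row describes one patient
# with two made-up features (e.g. lesion size in mm, patient age).
X = np.array([[12.0, 45.0],
              [30.0, 62.0],
              [ 8.0, 51.0],
              [25.0, 70.0]])

# Annotations provided by the medical experts (0 = benign, 1 = malignant).
# The model has no source of "knowledge" other than these labels.
y = np.array([0, 1, 0, 1])

# The engineers choose the mathematical model (here, a logistic regression),
# which fixes what inputs it accepts and what kind of output it can produce.
model = LogisticRegression()
model.fit(X, y)  # the parameters are entirely determined by X and y

# The trained model can only reproduce the patterns present in the annotations.
new_patient = np.array([[20.0, 58.0]])
print(model.predict(new_patient))        # predicted class for the new patient
print(model.predict_proba(new_patient))  # and the associated probabilities

The same applies, at a much larger scale, to the deep learning models behind the headlines: more parameters and more data, but the same dependence on human choices.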

If there is a bias in the results of an algorithm, it's not because "the AI" is biased, it's because the people who designed the dataset and the learning process were.

The term "AI" was reportedly coined in 1956 by John McCarthy, at the foundational "Dartmouth workshop". At the time, computer science was in its infancy, and the dream was that every aspect of human intelligence could eventually be replicated by a computer. That idea is now mostly confined to the field of "Artificial General Intelligence" (AGI), and it's not certain that such a thing is even possible. AI, as the term is mostly used today, is not "intelligence". It's a toolbox, a set of techniques that we can use to perform various tasks, but it doesn't have an "identity". It starts and stops at the press of a button, and does nothing more and nothing less than what it's programmed to do.

What's the place of AI in medicine?

AI will never be "better than doctors" because it's a meaningless proposition, but that doesn't mean AI has no place in medicine. AI techniques provide very useful tools which can vastly improve patient care.

The best way (in my opinion) to describe existing AI is as a very well indexed library of knowledge. The very complex models behind modern Deep Learning methods can "store" in their parameters huge amounts of medical knowledge, in the form of links between observations and desired outputs. Once trained, these algorithms can process very large amounts of data in very little time.

In clinical practice, this may be very useful to "flag" potentially difficult cases (for instance, if a doctor's diagnosis differs from what the algorithm says, it may be worth reviewing the results), or to provide a quick first diagnosis where no doctor is available. It can also serve as a form of quality control. Perhaps more importantly, it can help a lot with ongoing research. In retrospective studies, machine learning algorithms can take large amounts of patient information, along with the evolution of their disease, and detect patterns associated with clinical outcomes. Sometimes those patterns aren't associated with things we already know about the disease, which may give new avenues to explore. Avenues which may lead to dead ends, or to important discoveries. To find out which, human experts will remain firmly in the loop, at least for the foreseeable future.
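As an illustration of the "flagging" scenario mentioned above, here is a hypothetical sketch in Python (made-up case identifiers and labels, not any real workflow) of how disagreements between the doctor's diagnosis and the model's prediction could be used to select cases for a second look:

# Hypothetical results for a batch of cases: the doctor's conclusion and the
# algorithm's prediction, indexed by a made-up case identifier.
doctor_diagnoses = {"case-001": "benign", "case-002": "malignant", "case-003": "benign"}
model_predictions = {"case-001": "benign", "case-002": "benign", "case-003": "benign"}

# Flag every case where the two disagree: these are the ones worth reviewing.
flagged_for_review = [
    case_id
    for case_id, diagnosis in doctor_diagnoses.items()
    if model_predictions.get(case_id) != diagnosis
]

print(flagged_for_review)  # ['case-002'] gets sent back for a second look

What to do with a flagged case, of course, stays with the doctor: the algorithm only points at where a second look might be useful.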

References

  1. C. Longoni, A. Bonezzi, C. K. Morewedge, "Resistance to Medical Artificial Intelligence", Journal of Consumer Research 46:4 (2019). doi:10.1093/jcr/ucz013