
Why AI can't replace health care workers just yet

To convince myself that I am not completely clueless in the ways of medicine, I occasionally turn to my few diagnostic successes. To be clear: this is cherry-picking, and I make no claim to being a master diagnostician. Yes, a bunch of my colleagues had missed the first patient’s friction rub that was so evident to me; but say “friction rub” to a third-year medical student and they will immediately know the differential diagnosis and the treatment. How many friction rubs have I missed actually hearing? Plenty, I am sure! Like the time a 20-something-year-old man languished in the hospital for days with severe but mysterious chest pain. Our first encounter was on a Saturday, when I saw him as the covering weekend resident; he was discharged Sunday, 24 hours after I started treatment for the acute pericarditis he so obviously had.

Once, during a mandatory ER rotation, I figured out that a patient who came in complaining of nausea and vomiting actually had an eye problem: bilateral acute angle closure glaucoma. I pestered the skeptical ophthalmology resident to come in on a Sunday afternoon, confirm the diagnosis, treat the glaucoma, likely save the patient’s vision, and get a case report for a conference out of it.

And I will never forget the case of the patient who was in the steaming hospital room shower whenever I saw him; he had come in for kidney failure from severe vomiting and insisted he never used drugs, illicit or otherwise. Still, it was obvious to anyone with a sense of smell that he had cannabinoid hyperemesis syndrome and would have to quit.

Superficial commonalities aside — all three were men with an acute health problem — what ties these cases together is that I had to use senses other than sight to figure them out: hearing the friction rub, feeling the rock-hard eyeballs, smelling the pungent aroma of cannabis. (This being the 21st century, taste is no longer allowed, but I will leave to your imagination how doctors of old could tell apart the “sweet” diabetes (mellitus) from the “flavorless” one (insipidus).) And all three cases came to mind when I read a post on X about ChatGPT’s great diagnostic acumen.

I can’t embed it — and wouldn’t even if I could — but the gist of Luca Dellanna’s extended post is that he:

  1. Had a “bump” on the inside of his eyelid that was misdiagnosed by three different doctors.
  2. Saw the fourth doctor, who made the correct diagnosis of conjunctival lymphoma.
  3. Got the same, correct diagnosis from ChatGPT on his/its first try.

A slam-dunk case for LLMs replacing doctors, right? Well, not quite: the words Luca used to describe the lesion, “a salmon-pink mass on the conjunctiva”, will give you the correct response even when using a plain old search engine. And he only got those words from the fourth doctor, who was able to convert what they saw into something they could search for, whether in their own mind palace or online.

Our mind’s ability to have seamless two-way interactions with the environment is taken for granted so much that it has become our water. (This is the link to the complete audio and full text of David Foster Wallace’s commencement speech that became the “This is Water” essay; if you haven’t read it yet, please do so now.) But that ability is an incredibly high hurdle, and one that is in no danger of being cleared just yet. It is the biggest reason I am skeptical of any grand proclamations that “AI” will replace doctors, and why I question the critical reasoning skills and/or medical knowledge of the people who make them.

In fact, the last two years of American medical education could be seen as simply a way of honing this very skill: converting physical exam findings into a recognizable pattern. A course in shark tooth-finding, if you will. This is, alarmingly, also the part of medical education that is most in danger of being replaced by courses on fine arts, behavioral psychology, business administration, medical billing, paper-pushing, box-checking, etc. But I digress.

Which is not to say that LLMs could not be a wonderful tool in the physician’s arsenal, a spellcheck for the mind. But you know what? Between UpToDate, PubMed, and plain old online search, doctors already have plenty of tools. What they don’t have is time to use them, overburdened as they are with administrative BS. And that is a problem where LLMs can and will do more harm than good.
