And… What AI Is Actually Good at in Healthcare (From Someone Who’s Been Inside the System)
On a recent episode of Vox’s Explain It to Me (“Paging Dr. Chatbot”), I listened attentively as two key parts of my professional background were explored: health care and AI. This intersection interests a lot of people, perhaps best evidenced by the widespread use of WebMD’s Symptom Checker since its 2005 launch. That product uses patient-reported data to retrieve possible diagnoses for what ails you, but it’s decision-tree-based rather than AI. Chatbots, by contrast, generate probabilistic recommendations from natural-language inputs like, “My breathing feels crunchy,” or, “I ate my chemistry homework.”
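To make that distinction concrete, here’s a toy sketch (entirely invented — not how WebMD or any real chatbot works internally): the first function walks a fixed decision tree from yes/no answers, while the second stands in for a language model by scoring free-text symptoms into a probability-like distribution.

```python
def decision_tree_checker(answers: dict) -> str:
    """Fixed-branch logic: each yes/no answer walks one step down a tree."""
    if answers.get("fever"):
        if answers.get("cough"):
            return "possible respiratory infection"
        return "possible viral illness"
    return "no match -- see a clinician if symptoms persist"


def probabilistic_checker(free_text: str) -> dict:
    """Toy stand-in for an LLM: scores conditions by keyword overlap,
    then normalizes the scores into a probability-like distribution.
    All keywords and weights here are made up for illustration."""
    keyword_weights = {
        "respiratory issue": {"breathing": 2.0, "crunchy": 1.5, "cough": 1.0},
        "ingestion concern": {"ate": 2.0, "swallowed": 2.0, "homework": 0.5},
    }
    words = free_text.lower().split()
    scores = {
        condition: sum(w for kw, w in weights.items() if kw in words)
        for condition, weights in keyword_weights.items()
    }
    total = sum(scores.values()) or 1.0
    return {condition: score / total for condition, score in scores.items()}


print(decision_tree_checker({"fever": True, "cough": True}))
print(probabilistic_checker("My breathing feels crunchy"))
```

The point of the contrast: the tree can only answer questions someone hard-coded in advance, while the probabilistic version degrades gracefully on inputs nobody anticipated — which is exactly what “my breathing feels crunchy” is.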
Why do so many users seek out this experience? Quite simply: cost. People want to know whether what they’re experiencing is worth a trip to the doctor’s office, plus a copay and a prescription out of pocket, or whether a swing by the drugstore and a remote (if lung-crunchy) workday will do.
Now, with advanced chatbots in play, poison control centers are logging fewer and fewer calls, and the calls they do field skew toward more severe cases. User access to AI is the most plausible explanation. Caroline and I experienced something like this firsthand a few times in the last few months, because our dog is a major nutter who has gotten into our vitamin D stash and dug her nose into a wasp nest. (Good news: she’s still alive. Bad news: she has only one brain cell.)

In honor of her new home in the Netherlands, Bodie’s cortisone response was positively Van Gogh-ian

But it’s no longer simply patients wanting to Dr. House themselves into convalescence; doctors are now using ChatGPT, too.
Doctors have always needed a little backup now and then. When I lived in Portland, I worked as a data scientist in care management. Two aspects of that work were measuring the quality of care delivery within the hospital and designing interventions for patients whom our models predicted were at high risk of a health crisis, so that care managers could help them better navigate the health system, not just their individual health. I enjoyed being part of a team whose intent wasn’t merely to devise better health management “to the highest bidder,” but to ensure that those with lower socioeconomic status got better care. In this sense, our work was to provide an alley-oop for care providers.
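For a sense of what that prediction work looks like in miniature, here’s a hedged sketch — the features, weights, and threshold are all invented, not our actual model: a simple logistic risk score that flags patients for care-manager outreach.

```python
import math

# Illustrative only: a toy risk score of the kind a care-management team
# might use to prioritize outreach. Every feature and weight is made up.
WEIGHTS = {
    "er_visits_last_year": 0.6,
    "chronic_conditions": 0.4,
    "missed_appointments": 0.3,
}
BIAS = -2.5


def crisis_risk(patient: dict) -> float:
    """Logistic model: squash a weighted feature sum into a 0-1 risk score."""
    z = BIAS + sum(WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))


def flag_for_outreach(patients: list, threshold: float = 0.5) -> list:
    """Return IDs of patients whose predicted risk crosses the threshold,
    so care managers know whom to help navigate the system first."""
    return [p["id"] for p in patients if crisis_risk(p) >= threshold]


patients = [
    {"id": "A", "er_visits_last_year": 4, "chronic_conditions": 3, "missed_appointments": 2},
    {"id": "B", "er_visits_last_year": 0, "chronic_conditions": 1, "missed_appointments": 0},
]
print(flag_for_outreach(patients))  # patient A crosses the threshold; B doesn't
```

The model itself is the easy part; the care-management work is everything downstream of the flag — a human deciding what that patient actually needs.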
Now, AI Chatbots are taking on the role of navigating symptoms. In the case of the ER doctor interviewed in the episode, Dhruv Khullar, part of what’s happening is that AI has essentially scanned all the literature. For him, it’s like getting a literature search that would have been impossible before. It really can expand his ability to diagnose.
But even he emphasizes that you don’t want to use it as the way to diagnose. It’s better to think of it as an assistant in the patient’s diagnostic journey. It can help confirm and give confidence to a diagnosing ER doctor, who is in a very stressful place, maybe without many expert diagnosticians backing them up, and maybe with a broken coffee machine on a back-breaking shift.
That’s where I see a legitimate use: it’s not substituting for a necessary professional; it’s filling a void where professionals are grasping at Yankauer suction tips.

Personalizing aftercare is another aspect I’ve considered. You might recognize this scene: you’re woozy, someone else has to drive you home, and they send you off with a piece of paper that’s been photocopied a thousand times. Part of why aftercare looks so “factory-like” is that medicine has to cater to people with very different levels of knowledge, intelligence, and language comprehension. But are we sure my needs are just the same as those of the person who came before me?
Dr. Khullar is much more open to chatbots’ potential to assist than the second guest, Dr. Eric Topol, is.
Topol makes the point that AI chatbots:
- don’t reason clinically the way doctors do
- can’t integrate patient values, preferences, and circumstances
- can’t manage pain, talk with families, or guide people through trade-offs in the way a human can
They’ve never experienced pain and they don’t actually empathize.
I think that’s crucial. Diagnosis and treatment aren’t just about running a formula on symptoms. They’re about integrating unstructured human information that doesn’t fit neatly in data fields.
Computers can only handle the data they’re given. Humans have access to far more information, but more importantly, judgment.
That’s part of why you can’t safely offload full diagnosis to AI. AI as assistant? Fine.
AI as actual doctor? No, thank you!
The podcast guests (of course!) go deep into the cognitive de-skilling that could result from over-reliance on AI, which is already showing negative outcomes in other industries. But to me, the use of AI in healthcare signals a bigger shift. In the information age, the bottleneck was information processing: collecting, storing, and analyzing data. With AI, we’re moving into something like an imagination age, where information processing can increasingly be delegated to machines.
There’s a temptation to turn everything into KPIs and metrics, but what we can measure is never exactly what we care about. Humans are still needed where things can’t be fully quantified: where we’re dealing with impact, meaning, and values.
Cowritten by AlignIQ’s founders
——————-
Need to figure out how to delegate to AI in the information age? Check out our workbook for care sector professionals.
For more on where AI helps and where humans still carry the hardest questions, explore these AlignIQ essays.
RAG Time: How Mercy Corps Is Using AI in the Field
https://aligniq.ai/rag-time-how-mercy-corps-is-using-ai-in-the-field/
Pushing our Buttons: AI’s Default to Truth
https://aligniq.ai/pushing-our-buttons-ais-default-to-truth/
Wanted: Nonprofit Staff Who Can Wrangle AI (and Still Care About People)
https://aligniq.ai/wanted-nonprofit-staff-who-can-wrangle-ai-and-still-care-about-people/
[In this post, we used AI for polish, not purpose.]

