AI has raced into every corner of healthcare—from note-taking to diagnostics—often with bold claims about what it can replace. Health2047 Managing Director Dr. Chris Stock spoke with Dr. Spencer Dorn, Vice Chair of Medicine at the University of North Carolina, about what AI can realistically do today, where it’s showing the most promise, and why the human side of medicine still matters most.
Chris Stock: There’s this misconception in the media that as humans develop “smart software,” we might not need physicians anymore. What’s your view?
Spencer Dorn: Healthcare’s not a single thing. We tend to conflate all the various aspects of healthcare into whatever idea we have in our minds, but doing a well-child visit is very different from fixing a torn ACL or dealing with an acute rash. Healthcare is many different things for countless individuals.
Human physicians don’t need to be involved in every one of those types of healthcare, but I think it’s silly to suggest we don’t need physicians at all because software’s getting smart. I really like what computer scientist Melanie Mitchell said: we humans tend to overestimate AI advances and underestimate the complexity of our own intelligence. We give computers too much credit and we don’t give ourselves enough.
Today’s AI is far from reliable enough to perform high-stakes tasks without supervision. Medicine is also a lot more complex than outsiders think it is. Many people assume we just walk around inputting diagnoses and prescriptions, but medical practice is messy. A lot of what we do can’t easily be written into code or solved by AI.
A big part of medicine is accepting risk. We’re accountable—to state licensing boards, to hospital committees, to the legal system, and to ourselves. We lose sleep when we don’t do a good job. We don’t have that accountability framework for AI.
And finally, people love their doctors. They may not love all it takes to see them, but they do love them. Surveys show that most patients rate their doctors quite highly. So no, I don’t think we’re ready for smart software to replace physicians entirely.
CS: How do you see technology and AI changing healthcare and the practice of medicine?
SD: I think it’s really exciting. In the short term, most of the changes are around the administrative tasks that clinicians, nurses, and other healthcare workers do—things like revenue cycle, prior authorization, documentation, and summarization. These are areas where we can deal with what people sometimes call “drudgery.”
That’s where we’ll continue to see activity over the next few years, largely because it’s lower-stakes work. Not that it’s unimportant—writing a good clinical note matters—but it’s at the periphery of care compared to clinical decision-making.
At the same time, we’ll continue to see predictive algorithms being pushed into practice. Some organizations will adopt them, some clinicians will pay attention, and others won’t. In the longer term, I think we’ll see two things: one, moving toward clinical decision-making—helping us make better decisions with our patients—and two, the rise of so-called “AI agents.”
That’s a broad term, but it refers to AI that doesn’t just support but actually executes work, completing specific workflows from end to end with minimal clinician or healthcare worker input.
CS: Going back to clinical decision support: is it reliable enough to use clinically? What improvements do you think need to be made in how we use it and how it’s presented to us?
SD: We’ve been doing clinical decision support for decades—long before generative AI. Even something like a medication interaction checker is a form of it. Traditionally, those were based on hard-coded rules—if X, then Y—and there’s a limit to how much of the world you can code that way.
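To make that contrast concrete, here is a minimal sketch of the hard-coded, rule-based style of decision support Dorn describes. The drug pairs and the check_interactions helper are illustrative assumptions, not a clinical resource; real interaction checkers draw on curated databases with far more rules.

```python
# A minimal, illustrative sketch of rule-based decision support:
# hard-coded "if X, then Y" rules. The drug pairs below are hypothetical
# examples, not clinical guidance.

INTERACTION_RULES = {
    frozenset({"warfarin", "ibuprofen"}): "Increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "Increased myopathy risk",
}

def check_interactions(med_list):
    """Return a warning for every rule whose drug pair appears in med_list."""
    meds = {m.lower() for m in med_list}
    warnings = []
    for pair, message in INTERACTION_RULES.items():
        if pair <= meds:  # both drugs in the rule are on the patient's list
            warnings.append(f"{' + '.join(sorted(pair))}: {message}")
    return warnings

print(check_interactions(["Warfarin", "Ibuprofen", "Metformin"]))
# ['ibuprofen + warfarin: Increased bleeding risk']
```

The limitation Dorn notes falls out directly: every situation the system can recognize has to be anticipated and written as an explicit rule.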
The opportunity with newer types of AI is flexibility. We can apply decision support to a broader range of situations. For example, high-fidelity clinical summaries now make it easier to see what’s important in a patient’s history—say you’re an anesthesiologist and six years ago the patient had an echocardiogram showing mild pulmonary hypertension. That’s crucial information that might otherwise be buried deep in the record.
We’re also seeing tools that summarize the medical literature. Using the same example, you could ask, “What’s the standard of care for sedating patients with pulmonary hypertension?” and instantly get a concise overview.
We’re not yet at the point where the machine connects all of that automatically—recognizing the condition, the procedure, and linking to guidance—but that’s where we’re headed. The goal is to combine a high-fidelity clinical summary, what’s happening in real time, and the medical knowledge base to inform high-quality decisions.
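As a rough illustration of the pattern Dorn is pointing toward, the sketch below assembles those three ingredients (a patient summary, the real-time context, and retrieved guideline text) into a single prompt. The build_decision_support_prompt function and the generate() placeholder are hypothetical; they stand in for whatever model and retrieval pipeline an organization actually uses.

```python
# A hedged sketch of combining a clinical summary, real-time context, and
# retrieved medical knowledge into one prompt for a language model.
# Nothing here is clinical advice; generate() is a placeholder, not a real API.

def build_decision_support_prompt(patient_summary: str,
                                  current_context: str,
                                  guideline_excerpts: list[str]) -> str:
    """Assemble the three information sources into a single prompt."""
    guidance = "\n".join(f"- {g}" for g in guideline_excerpts)
    return (
        "You are assisting a clinician. Using only the material below, "
        "list considerations relevant to the planned procedure and flag "
        "anything that needs confirmation.\n\n"
        f"PATIENT SUMMARY:\n{patient_summary}\n\n"
        f"CURRENT CONTEXT:\n{current_context}\n\n"
        f"RELEVANT GUIDANCE:\n{guidance}"
    )

prompt = build_decision_support_prompt(
    patient_summary="Echocardiogram six years ago: mild pulmonary hypertension.",
    current_context="Pre-anesthesia evaluation for an elective procedure.",
    guideline_excerpts=["Sedation guidance excerpt for pulmonary hypertension..."],
)
# response = generate(prompt)  # placeholder for the model call; any output
#                              # still needs clinician review before it informs care
```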
CS: Where do you think those tools should live in the workflow? Should they be running all the time or only when needed?
SD: No one’s figured that out yet. Traditionally, decision support was embedded in the electronic health record, but we don’t want to stare at screens all day—we want to look at our patients. Now we’re seeing the rise of ambient tools that listen in and may free us from typing and clicking so much.
The question is how decision support fits into that world. Personally, I like the idea of running it in the background—let it check my work rather than me checking it. It should nudge us when appropriate, not constantly. If you’re seeing alerts all the time, you get desensitized. If everything’s important, then nothing is.
We need to set the right thresholds—specific enough to be useful but not overwhelming. And the form factor matters. Maybe it’s on a screen, maybe an auditory cue, maybe through a watch or phone. The right balance of timing, format, and specificity will make the difference between something we ignore and something we value.
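One way to read “let it check my work in the background” is a filter that only surfaces findings above a severity and confidence threshold. The sketch below is a minimal illustration; the Alert fields and the cutoff values are assumptions, not a validated alerting policy.

```python
# A minimal sketch of background checking with thresholds to limit alert fatigue.
# Field names and cutoffs are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    severity: float    # 0.0 (informational) to 1.0 (critical)
    confidence: float  # how certain the system is that the finding is real

def alerts_to_surface(alerts: list[Alert],
                      severity_cutoff: float = 0.8,
                      confidence_cutoff: float = 0.7) -> list[Alert]:
    """Surface only alerts that clear both thresholds; everything else stays quiet."""
    return [a for a in alerts
            if a.severity >= severity_cutoff and a.confidence >= confidence_cutoff]

background_findings = [
    Alert("Possible duplicate therapy", severity=0.3, confidence=0.9),
    Alert("Planned sedative conflicts with documented condition",
          severity=0.9, confidence=0.85),
]
for alert in alerts_to_surface(background_findings):
    print(alert.message)  # only the high-severity, high-confidence finding nudges the clinician
```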
CS: Are we actually using AI agents in the delivery of healthcare yet?
SD: “Agents” is one of those terms that people use broadly, but essentially it means a language model that’s connected to tools—it can retrieve information, interact with systems, and actually carry out tasks.
For example, a call center agent that can schedule appointments. It understands natural language, accesses the clinical schedule, and books the appointment directly. We’re seeing those kinds of agents mainly on the administrative side—scheduling, revenue cycle, fax processing, data movement.
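The sketch below illustrates that structure: a loop that executes calls against a small set of tools the agent is allowed to invoke. The tool functions, their arguments, and the stubbed plan are hypothetical; in a real deployment a language model would decide the tool calls, and the tools would hit actual scheduling systems.

```python
# A hedged sketch of the "agent" pattern: a model connected to tools it can
# invoke to complete a workflow. Tools and the planned calls are hypothetical.

def lookup_open_slots(clinic: str, week: str) -> list[str]:
    """Hypothetical tool: query the clinic schedule."""
    return ["2025-03-04 09:00", "2025-03-05 14:30"]

def book_appointment(patient_id: str, slot: str) -> str:
    """Hypothetical tool: write the booking back to the scheduling system."""
    return f"Booked patient {patient_id} for {slot}"

TOOLS = {"lookup_open_slots": lookup_open_slots,
         "book_appointment": book_appointment}

def run_agent(request: str) -> str:
    # In a real agent, a language model reads the request plus each tool's
    # result and emits the next call; here the plan is stubbed for illustration.
    planned_calls = [
        ("lookup_open_slots", {"clinic": "GI", "week": "2025-03-03"}),
        ("book_appointment", {"patient_id": "12345", "slot": "2025-03-04 09:00"}),
    ]
    result = ""
    for name, args in planned_calls:
        result = TOOLS[name](**args)
    return result

print(run_agent("Please schedule my follow-up visit for next week."))
```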
We’re not yet seeing agents assume clinical care responsibilities like prescribing medications or making diagnoses. The technology just isn’t reliable enough for high-stakes activities.
CS: What are the key challenges keeping us from using these tools more broadly?
SD: The first is reliability. AI isn’t reliable enough for high-stakes work. The second is change. Healthcare pulls toward inertia—it’s very difficult for people and organizations to change without major incentives.
We’ve only had two big digital transformations in healthcare: electronic health records, which came from billions in Meaningful Use incentives and penalties, and virtual care in 2020, driven by the pandemic. I’m not sure what the motivation is now. There are some factors—staffing shortages, administrative burden—but nothing on that scale.
So reliability is one big challenge, and change is the other. And ultimately, if something goes wrong, who’s responsible? Usually it’s the clinician or the organization, not the AI vendor. That’s another reason adoption has been slow.
CS: You mentioned ambient tools earlier. Are AI scribes gaining traction?
SD: Definitely. We’ve seen great uptake in AI scribes—about a third of clinicians in large health systems use them. That’s because they solve real pain points. People want to look at their patients, not their computers.
I think summarization tools will be the next wave. I spend a lot of time preparing to see patients—there’s so much information locked in different places, including PDFs. Anything that helps bring that together will be valuable.
So yes, there’s motivation to use AI for these kinds of tasks, but for higher-stakes work, I don’t think the reliability is there yet.
CS: What does the future look like to you?
SD: The future isn’t about AI replacing doctors—it’s about using technology to let each person do what they do best. Machines will handle the repetitive and the routine. Humans will handle the nuanced and the relational. That’s where real progress will come from.