What should ChatGPT not do in the medical record?
Just because technology can do something doesn’t mean it should
I woke up this morning and was fixated on this question around ChatGPT. So I thought I'd hammer it out and share it. It's only mildly processed, so forgive me for tangents and lapses of logic. And if you’re not a subscriber, now’s as good a time as ever to jump on….
Everyone's talking about ChatGPT. In medicine, doctors are talking about ChatGPT and the medical record. Specifically, the problems it's potentially going to create. There was a great editorial in Nature last week, ChatGPT is not the solution to physicians’ documentation burden (it’s paywalled but good). CT Lin, CMIO at the University of Colorado, thought out loud about 'automation complacency' and the challenges with AI in the EHR (free on his blog).
The thinking and the points are all excellent.
But I think that the discussion shouldn't be about what ChatGPT can or might do in the medical record. Our conversations will never keep up with that one — it will change month-over-month.
What we need to ask is: What do we not want ChatGPT doing in the medical record?
In other words, even when LLMs/AI become capable of doing everything, is there something that happens in the medical record that needs human process and integration? Is there something that calls for us and us alone?
As much product as process
In 2023, the question of 'what is a medical record' is a big one that I'll pick up at a later date. In the context of this question, we might consider that the medical record is as much about process as it is product. Collecting information in narrative form and piecing it together can be time-consuming but telling. The cross-examination of the patient and the testing of fleeting hypotheses in real time can be key to understanding. So while there may be some parts of the problem-solving experience that can get a boost from AI, there may be other parts that should be left to our critical thinking and synthesis.
We should be viewing this issue from the angle of what is critical to doing our job.
So what is our job?
Then the question becomes, what's our job? Or what should our job be?
If a patient can speak into a black box and the black box can give us an answer, what do we do? Or, do we even need doctors?
I know it sounds like a crazy question. And I ask it to raise an important point. Because it's our failure to ask and answer this question that has allowed us to be defined by technology, not the other way around. Technology should be designed and defined by our needs. And we’re good at complaining about the tools that drop in our laps, but not so good at saying where a given tool should or absolutely shouldn't fit.
(I know I'll get pushback on the idea of EHRs, but that's a huge discussion.)
Defining absolutes — what we want and don’t want — is how we begin to exercise agency.
The failure to ask the question is to see technology as purely deterministic — because a tool was created, it then defines what we do. This issue of tech determinism and healthcare is a big one. I'll rant on it later.
What I think should be ours
I have always said that physicians won't be replaced, but radically redefined. And I suspect that as medicine continues its march toward precision, the physician role will evolve to be more meta — I see us as highly trained docents in a world of data, information, markers and inputs.
As such, I think that the EHR must preserve the impression as a unique place of human synthesis and consolidation. I'm happy to pass some or all of the data collection to The Machine. I think there are huge opportunities in voice-first interfaces that listen to my conversation with a patient and cleanly summarize the history. This is a great example of technology becoming invisible and allowing doctors to get back to what they're supposed to do: talk to patients. This solves the problem of the doctor typing and staring at the screen.
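To make that concrete, here's a minimal sketch of what an ambient, voice-first documentation pipeline could look like. It's illustrative only: `transcribe_visit` and `summarize_history` are hypothetical stand-ins for whatever speech-to-text and LLM services a vendor would actually wire in, and the sample transcript is made up. The design point is that the machine drafts the history and nothing else.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    history: str          # machine-drafted from the visit transcript
    impression: str = ""  # left blank on purpose: human synthesis only
    plan: str = ""        # left blank on purpose: human synthesis only

def transcribe_visit(audio_path: str) -> str:
    """Hypothetical stand-in for an ambient speech-to-text service."""
    # A real pipeline would send exam-room audio to a transcription API here.
    return "Doctor: What brings you in today? Patient: Three days of a dry cough and a low-grade fever."

def summarize_history(transcript: str) -> str:
    """Hypothetical stand-in for an LLM call that drafts the history from the transcript."""
    # A real pipeline would prompt an LLM with the transcript; here we just
    # pull out the patient's own words as a crude placeholder.
    patient_lines = [line.split("Patient:", 1)[1].strip()
                     for line in transcript.splitlines() if "Patient:" in line]
    return " ".join(patient_lines)

def draft_note_from_visit(audio_path: str) -> DraftNote:
    """Drafts only the history; the impression and plan stay with the physician."""
    transcript = transcribe_visit(audio_path)
    return DraftNote(history=summarize_history(transcript))

if __name__ == "__main__":
    note = draft_note_from_visit("exam_room_12.wav")
    print(note.history)     # the machine's draft, ready for physician review
    print(note.impression)  # empty: the synthesis is ours to write
```

The plumbing isn't the point; the shape is. The machine fills in what it heard, and the impression stays an empty field until a human writes it.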
I'm happy to take AI suggestions for things that I will no doubt miss. Like a digital tap on my shoulder from the crusty old nurse who winks and says, 'have you thought about this?' IBM Watson was supposed to do this, but he was too far ahead of the parade.
Robots are fun and games until you need to have a critical conversation
It's easy to get caught up in speculation about technology, medicine and efficiencies. Twitter and the public square are full of techno-utopians who relish the coming irrelevance of human physicians. There's a kind of manic ecstasy as we see expectations for the latest tool go up and up on the hype cycle.
But it's harder to get excited about critical conversations and how they should happen. Those are the slow, painful elements of the human experience that can't and shouldn't be industrialized. Think of the end-of-life conversations that every one of us will ultimately have. With someone.
And how will we want that to go down?
Just because a machine can do something doesn't mean it should
And to be clear, none of this is techlash. It's just raising the idea of defining what we want or don't want.
Going forward we will be faced with remarkable tools that can appear to replicate everything that humans have traditionally done. So even beyond the trivial bits of the EHR, it may be helpful to think about the things that we don't want turned over to AI.
These preferences can arise from ethical, emotional, or practical concerns. A lot of this, of course, is driven by our individual values. But here are some things we may never want a machine to perform:
Make life-or-death decisions: A lot of us are uncomfortable with the idea of machines making decisions with life-or-death consequences, as in medical treatment or military actions.
Raise children: The emotional bond, empathy, and nuanced understanding of human behavior needed to raise a child are qualities most of us prefer from a human caregiver.
Provide emotional support: While AI can simulate empathy, true emotional connections and support are often seen as a uniquely human capability. This is where therapy chatbots are built on shaky ground.
Make ethical judgments: Many people feel that ethical and moral decisions should be made by humans who possess a deep understanding of the complexities and nuances involved in these decisions. I think of this as human synthesis and processing.
Create art and literature: Although AI can make things that look like art and literature, many people believe that the creative process, emotional depth, and cultural context behind these works are better suited to humans.
Provide spiritual and religious guidance: As spirituality and religion are deeply personal and human experiences, many prefer to seek guidance from fellow human beings who share their values and beliefs.
Provide political leadership: Human leaders are often preferred due to their ability to understand and empathize with the needs of their constituents, as well as navigate the complexities of diplomacy and decision-making.
Provide physical touch and comfort: Humans have a natural desire for human touch and comfort. Machine touch just isn't the same.
This is just a starter.
I understand that some of this is easier said than done. Defining agency is easy; exercising it is another thing altogether. How we as individual physician employees (well more than half of us in the U.S.) impose limits and parameters on enterprise technology is a challenge. But at least we can start a discussion. And a lot of these tools, like our EHRs, allow for manual override.
What do you think?
Thanks for being a subscriber. If you like this, please pass it along. I’d also love to hear your thoughts in the comments.
We need an AI that can recognise emanations from an AI. I bet that many medical notes would already fail that test. Especially notes that repeatedly use the word “nuanced”.
This would also be a good tool for immediately flunking class papers and science papers where an AI was used to pad out the work.
It always amazes me how something as basic as medication reconciliation remains so challenging. Patients use different pharmacies (mail order, 24-hour, “regular”). What is sent is not always dispensed, and even if dispensed it doesn’t always work out, and then what does the patient actually take, and when, and how? Patients are often resentful of being asked (over and over) and think “it’s in THERE.” So in the 21st century it’s still about the giant ZipLoc. Despite smartphones and apps, a good chunk of patients who take one medication, e.g. OCPs, don’t know the name, generic or proprietary (and I am unable to retrieve it from the database, as not every pharmacy uses it). In older people, drug side effects and their dosing schedule, or lack of one (including missed doses), play a huge role in people feeling well AND control of their chronic conditions.

It still can boil down to me spending 15-30 minutes spreading all the bottles on the exam table, checking doses, and then sorting them... When we worked with RNs they often managed that, but now with MAs with marginal reading and spelling skills I am often doing it. I don’t mind, as it usually has a huge payoff, and if that’s what it takes. And of course the patients typically taking 5-10 meds will also be seeing 2-3 specialists who also prescribe meds... Weirdly, with the EMR it feels almost worse, but the intro of the EMR has overlapped with the explosion of the elderly population, chronic disease and therapeutics.

I would love an AI “pharmacist” who could just figure all that out, including the ZipLoc maneuver, which is what the patient is actually doing. My other wish list item is an AI that goes to battle with other AI for Prior Auth and figuring out formularies, etc.