18 Comments
May 22, 2023 · Liked by Bryan Vartabedian

We need an AI that can recognise emanations from an AI. I bet that many medical notes would already fail that test. Especially notes that repeatedly use the word “nuanced”.

This would also be a good tool for immediately flunking class papers and science papers where an AI was used to pad out the work.

It always amazes me how something as basic as medication reconciliation remains so challenging. Patients use different pharmacies (mail order, 24-hour, “regular”). What is sent is not always dispensed, and even if dispensed it doesn’t always work out, and then there is the question of what the patient actually takes, and when, and how. Patients are often resentful of being asked (over and over) and think “it’s in THERE.” So in the 21st century it’s still about the giant ZipLoc.

Despite smartphones and apps, a good chunk of patients who take one medication, e.g. OCPs, don’t know its name, generic or proprietary (and I am unable to retrieve it from the database, as not every pharmacy uses it). In older people, drug side effects and dosing schedules, or the lack of them (including missed doses), play a huge role in how people feel AND in the control of their chronic conditions. It can still boil down to me spending 15-30 minutes spreading all the bottles on the exam table, checking doses, and then sorting them. When we worked with RNs they often managed that, but now, with MAs with marginal reading and spelling skills, I am often doing it myself. I don’t mind, as it usually has a huge payoff, and if that’s what it takes. And of course the patients typically taking 5-10 meds will also be seeing 2-3 specialists who also prescribe meds.

Weirdly, with the EMR it feels almost worse, but the introduction of the EMR has overlapped with the explosion of the elderly population, chronic disease, and therapeutics. I would love an AI “pharmacist” who could just figure all that out, including the ZipLoc maneuver, which is what the patient is actually doing. My other wish-list item is an AI that goes to battle with other AIs over Prior Auth and figuring out formularies, etc.

I suspect that some of these problems will be resolved as liability law grows to encompass AI. The machine will produce the EHR/EMR, but the doctor will retain some defined responsibility for its accuracy. “I missed that drug interaction because ChatGPT screwed up the EMR” will not pass muster beyond some point in tort cases. Doctors and tech magnates will likely feel their way through the minefield together, and, given the speed of innovation, there will be some bad cases along the way. Those who hope for slow, deliberate, FDA-style regulation will likely be sorely disappointed; the technology is simply too fast-evolving and mobile for such restraints. And those charged with regulating will be dreaming of jobs with the tech companies. (https://graboyes.substack.com)

New tech is exciting until big money comes around. We may discuss what AI should or should not do, but the course of events will be determined by the largest stakeholders. Inevitably there will be attempts, on one hand, to minimize state expenses by making governmental health care as AI-automated and monopolized as possible, and, on the other, to maximize Big Pharma profits by enforcing certain AI answers in health care. This should be prevented by demonopolizing AI and by providing informed human freedom of choice instead of ready-made answers.

Good thoughts here, thank you. I’m not worried about physicians being replaced by AI either, at least for another 50-75 years, and even then we will still be collaborating.

I ran a few 99215-level hospital follow-up visits I performed through ChatGPT. I cut and pasted the very detailed HPI and subjective sections I had purposefully documented, including all pertinent data that informed my assessment and plan, and then asked ChatGPT to come up with an assessment and plan of its own. I compared it to mine. My typical primary-care brain was an order of magnitude better; ChatGPT sounded boilerplate, formulaic, and medicolegal, with no synthesis across multiple interacting medical problems, medications, patient preferences, or future plans for treatment and surveillance. I would post it, but I think there’s some privacy stuff I don’t want to test.

Anyway, for help with a focused diagnostic question, or perhaps a clinical question we might otherwise ask of UpToDate, generative AI can fake it pretty well. But its references are a joke, synthesis of real-world complexity is not built in, and I don’t believe the hype.

I think it's important to recognize that LLMs are exactly that: language models. They may seem intelligent because they are very good at stringing together words that appear to make sense, but it is a gross misunderstanding of the technology to think that the computer is making judgments, or even relying on any logic, algorithmic or otherwise. Current reasonable uses: generate a PA, generate a thank-you letter to a referrer, create post-op instructions for X given Y restrictions, make the following A/P more concise and formatted in bullet points. Future: here is a picture of prescription bottles; make a list of medications, formatted in standard medical language (med, route, frequency).
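
That last idea is already close to buildable with today's multimodal APIs. Purely as an illustration, here is a minimal sketch assuming the OpenAI Python SDK and a vision-capable model; the model name, prompt wording, and file name are placeholders, not a tested clinical pipeline:

    # Sketch: photo of prescription bottles -> structured med list.
    # Assumes the OpenAI Python SDK; model and prompt are illustrative only.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def meds_from_photo(image_path: str) -> str:
        """Ask the model to transcribe visible bottles into
        'name, dose, route, frequency' lines."""
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": ("List every medication visible in this photo as "
                              "'name, dose, route, frequency', one per line. "
                              "Write 'illegible' for anything you cannot read.")},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    print(meds_from_photo("ziploc_contents.jpg"))  # hypothetical file name

Even then, the output is a transcription to be verified against the patient, not a judgment: the model will render a label it misread just as confidently as one it read correctly, which is exactly the language-model limitation described above.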

AI is here and, given its power, will be introduced more widely. That does not mean it cannot evolve and slowly expand its utility across the board. While I think face-to-face communication remains critically dependent on human interaction, it can be facilitated, with the doctor in the role of mediator and guide along the path of prevention or treatment. This will be supported by much better evidence than one currently finds among us doctors. The exponential increase in knowledge is already unbearable for most of us, and trying to keep up with the constant flood has been futile for a while. I also think the time spent on documentation is ridiculous, so AI can facilitate here as well and free up mindspace for doctors to be doctors.

One point, re making ethical judgments, deserves immediate attention. Sure, some judgments require human intervention. But others, especially in the enforcement context, do not. In a non-medical example, I would love AI to render an immediate, binding judgment on, say, a US president, for all of his violations of conflict-of-interest laws and regulations. That judgment would automatically trigger a Justice Department investigation and prosecution, and all politics would be ignored. (Yes, politics will have been played out in setting up this arrangement.)

In medicine--and my only experience is as an alert, sensing patient--there seem to be some specific categories of action in diagnostics and treatment that are in fact binary: no gray areas or wiggle room, not subject to endless debate, obfuscation, or second-guessing. If you are a doctor, you do certain things with reasonable certainty that they are the right things to do in a particular circumstance. Handling these well will save some lives and some money, obviate disciplinary actions and patient lawsuits, and boost the chances of favorable outcomes. It seems to me that some aspects of clinical care, at least as practiced on me, do not have to be trial-and-error or subject to ethical conundrums or debates. That is, what to do is straightforward.

Get 10 doctors in a room and you will get 10 different opinions on how to treat a disease. Medicine is still an art, with a huge amount of leeway that no AI can handle without being a sentient, rational being. And if we ever have such an AI, that is another discussion.
