

What Clinicians Really Think About AI
Recent Past: Technology in Healthcare Doesn’t Live Up to the Hype 🩺
Hello Everyone,
A lot of my readers ask about A.I. in healthcare or education when they respond to my welcome email, so I’m especially on the lookout for guest contributors in those domains.
As physicians become more familiar with generative A.I., new technology in the workplace, and A.I.’s impact on research, I was looking for someone with insight into how clinicians really see technology and AI in healthcare in 2023. I was lucky enough to find just the right person.
Sarah’s Substack deals with many of these topics around A.I.’s adoption in healthcare in the real world. To go deeper, sign up:
From our Sponsor
Designing Large Language Model Applications | Download the early release chapters of this O’Reilly generative AI ebook, compliments of Mission Cloud.
Transformer-based language models are powerful tools for solving various language tasks. With this book, you'll learn the tools, techniques, and playbooks for building valuable products that incorporate the power of language models.
Can A.I. Help with Clinician Burnout?
Oracle, among others, also has a voice-activated healthcare digital assistant. Microsoft acquired Nuance for this very purpose.
If you are at all interested in A.I.’s adoption in healthcare, this guest post is a must-read and will hopefully resonate with those of you who work in the healthcare industry. It is an incredibly detailed overview of the opportunities, pitfalls, and bottlenecks of adoption in clinical settings, and of some of the current issues to think about.
Recent Articles by Sarah:
Sign up for premium to get my best deep dives or start an exploratory free trial.
Subscribe as a group of colleagues or friends for a huge discount.
By Sarah Gebauer, late Summer, 2023.
Recent Past: Technology in Healthcare Doesn’t Live Up to the Hype
Imagine you’re a doctor and someone tells you:
“Hey, there’s this new technology that will improve efficiency, facilitate better patient care, and make your life easier.”
This new technology would make it easier to find crucial information about patients, give physicians personalized recommendations for patient care, and eliminate onerous piles of paper.
Now imagine that the federal government thought this was such a priority that it actually paid hospitals and practices to buy this software. Platforms were rolled out, rolled back, and replaced. If you were a medical resident during this time, you probably used a dozen different brands of this technology during four years of training.
Imagine that each of these rollouts was accompanied by a huge hospital investment and endless messaging about the importance of this new technology and how much better it would be for you and for your patients. Imagine that you were required to sit through a 4-8 hour training for each of these new systems while your actual work piled up.
Fast forward 10 years. A few big systems won the market and you stopped having to learn a new computer system every few months. No one is hopeful about the promise of this new technology anymore, and you and your colleagues are resigned to this new reality.
But the promised improvements never materialized. More information was being gathered but it never seemed to be used for anything. Certainly not to improve care for the patient in front of you. You watch your friends and colleagues become exhausted and lose their love of medicine during a national pandemic. Several of them leave the field to sell real estate.
And it’s at this point that someone tells you:
“Hey, there’s this new technology that will improve efficiency, facilitate better patient care, and make your life easier.”
That’s where most physicians are today. When electronic health records (EHRs) became widespread after the HITECH Act was passed by Congress in 2009, they were supposed to usher in a new era of modernization in medicine. Some physicians retired rather than hassle with learning a new computer system, but most stayed in medicine, dutifully typing and clicking boxes.
Effects of Electronic Health Records on Physicians
Instead of the promised improvements, what actually happened was:
An increase in the information captured without a corresponding use of that data. We know more about overall hospital quality metrics, but little of that is passed on to the physician in a way that can meaningfully affect practice.
A “copy and paste” epidemic, to the point that nearly half the information in an EHR is duplicated, and much of it is likely incorrect.
Many busy practices hired medical scribes to stand in the room during a patient visit and record what happened into the computer.
Physicians spent more time charting than interacting with patients. A Harris survey of 500 primary care physicians found that:
The average physician spends 12 minutes talking to patients and 19 minutes charting
69% say the EHR detracts from time spent with patients
74% say EHRs increase their workload
An increase in physician burnout, with an expected shortage of 124,000 physicians by 2034
Note that I refer here to physicians because that’s the group I’m qualified to represent, but many of these burdens are experienced by all healthcare professionals including nurses, physical therapists, etc.
Now generative AI is having a moment, with many of the same promises we heard 15 years ago about efficiency, improving patient lives, and making healthcare more modern. Physicians want to believe that generative AI will actually help, but they’ve just finished a 10-15 year technology implementation that didn’t live up to the hype.
Short Audience Pulse 💓
The responses to this survey help me figure out how much content to put behind a paywall:
I want to provide the best reading experience I can, but also be as accessible as possible while diversifying monetization in a way that allows me to keep doing what I’m doing.
This graphic from Deloitte details all the ways AI can be used in healthcare.
The Physician AI Triad
Physicians are now in what I call the Physician AI Triad. They’re cautious about this new technology given the history with EHRs, but they’re also generally a forward-thinking group and optimistic about how this time might be different, especially if it can actually help their patients. Resignation is also rampant. Physicians don’t believe they have power in the institution of medicine anymore, and they know that they’ll have to use whatever AI tools the administration buys.
Caution: legal risks, clinical risks, workforce risks
Optimism: clinical documentation, billing documentation, medical education
Patient-focus: diagnostic support, access to care, personalization
Caution
Beyond just the risk of technology being more hype than substance, physicians have several very real concerns about the risks AI poses in clinical practice.
Legal risks
Physicians in the US bear a huge amount of medicolegal liability. Lawsuits are draining mentally and financially, and doctors fear them. It feels risky to put a computer into the workflow, especially one with a well-publicized (though possibly overblown) chance of hallucinations. Physicians have traditionally been responsible for the vast majority of professional liability compared to other team members, and they’re worried about being held responsible financially for what they see as an uncontrollable system. Case law hasn’t been settled for these situations, and physicians are afraid of having even more malpractice risk. Many feel they’re in a bind regardless of what they do:
Sued for not taking an AI recommendation and relying on clinical judgment
Sued for taking an AI recommendation rather than relying on clinical judgment
Unfortunately, sticking with the status quo and not using AI, even when it might be helpful, will likely be perceived as carrying less legal liability.
Clinical Risks
Physicians can easily point out where AI could go astray:
Wrong information: AI can only make decisions based on the information it has. But what if a crucial piece of information is disclosed just as the patient is leaving the room and isn’t “heard” by an ambient scribe? How will the physician know exactly what has been captured and omitted?
Wrong workflow: The system might be too cumbersome to actually use, or accessing the AI recommendations might distract from patient care and make outcomes worse.
Wrong person: The AI suggestions may go to the resident instead of the attending, or to the wrong doctor
Caveats to Clinical AI Risk
Not using AI could also cause harm by allowing medical errors that technology might catch before they affect the patient.
It’s important to note that AI has provided clinical suggestions since the 1970s in the form of EKG readings and more recently for patient flow and best practice advisory notices. For example, every time I see a patient with a history of post-operative nausea and vomiting, I get a pop-up box in the EHR reminding me to give at least two medications to decrease that risk during the current surgery.
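To make concrete how rules-based these advisories are, here is a minimal sketch in Python of how such a pop-up rule might be expressed. The field names and medication list are hypothetical illustrations, not any vendor’s actual EHR API:

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    # Hypothetical chart fields, simplified for illustration.
    history: set[str] = field(default_factory=set)        # e.g. {"PONV"}
    scheduled_meds: set[str] = field(default_factory=set)

# Illustrative antiemetic list; a real EHR would use coded medication IDs.
ANTIEMETICS = {"ondansetron", "dexamethasone", "scopolamine"}

def ponv_advisory(patient: Patient) -> str | None:
    """Fire when a PONV history has fewer than two prophylactic antiemetics scheduled."""
    if "PONV" in patient.history and len(patient.scheduled_meds & ANTIEMETICS) < 2:
        return "History of PONV: consider at least two prophylactic antiemetics."
    return None  # no pop-up

# Triggers the advisory, since only one antiemetic is scheduled.
print(ponv_advisory(Patient(history={"PONV"}, scheduled_meds={"ondansetron"})))
```

The point is that these advisories are simple if-then checks; they have no model of the patient beyond the fields they query, which is exactly why they so often fire when they don’t apply.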
Workforce risks
Physicians are concerned that AI may follow a similar arc to EHRs and contribute to physician burnout and worsen workforce shortages.
There’s some concern about physician de-skilling if physicians stop thinking for themselves and start deferring all decisions to AI systems.
Despite Geoffrey Hinton’s famous proclamation that we should stop training radiologists, physicians are not concerned about their jobs being taken over by AI. They know how much human interaction is required for every physician, even those in radiology who are the most tech-enabled.
Optimism
Doctors, like everyone, have heard about AI and are curious about it. In a recent survey of primary care healthcare professionals, half of whom were physicians, 66% said they had not received training on AI, but 91% said they would like to. Doctors are really excited about AI’s ability to decrease their administrative burden. They are desperate to spend more time with patients and stop focusing on paperwork.
Decreased Clinical Documentation Burden
Because of the well-documented increase in charting both during the day and after hours with EHRs, physicians welcome anything that decreases that burden. And many of the uses intuitively make sense. The allure of not having to chart is hard to ignore.
Ambient Scribes
Ambient scribes listen in on patient visits using speech recognition and use natural language processing to package what they hear into a formatted note. They essentially take on the role of the human scribes that many physicians have started to employ, serving as the layer between the physician and the EHR. My friends who’ve used this software say it’s generally very good: it produces a high-quality note, and most of the vendors’ tools are easy to use.
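As a rough illustration of the pipeline, here is a minimal sketch in Python, assuming the OpenAI SDK for both the speech recognition and the note drafting. Real vendors use their own speech models, safety layers, and EHR integrations, and the file name and prompt here are hypothetical:

```python
# Minimal sketch of an ambient-scribe pipeline: transcribe the visit
# audio, then ask an LLM to structure it as a draft SOAP note.
# Assumes the OpenAI Python SDK; real products use their own models.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: speech recognition on the recorded visit (hypothetical file).
with open("visit_audio.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    ).text

# Step 2: natural language processing to format a draft note.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a medical scribe. Convert the visit "
                    "transcript into a SOAP note. Do not invent findings."},
        {"role": "user", "content": transcript},
    ],
)
print(resp.choices[0].message.content)  # draft for the physician to review and sign
```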
Initial outcomes with ambient scribes are promising. Per the American Academy of Family Physicians:
79% of users reported better documentation quality
70% saw reduced burnout and fatigue
81% of patients saw greater physician focus
However, ambient scribes won’t capture everything. The entire physician thought process of possible diagnoses and more scientific aspects won’t be recorded unless the physician speaks it during the encounter, and ambient scribes don’t capture the side conversations or the patient’s daughter who calls the surgeon’s cell phone in the evening.
There are many players in the market right now; most of them just write the note. In the future, as EHR integration improves, the hope is that they will also take action. For example, they could place preliminary orders for lab tests and medications mentioned during the visit, which the physician could sign off on.
Epic, the largest EHR vendor in the US, has partnered with Microsoft to provide DAX Express, which uses GPT-4 as its generative AI model.
One underappreciated aspect of ambient scribes may be their ability to capture and analyze large amounts of clinical data that we didn’t have access to previously. For example, the aforementioned terrible copy-and-pasted notes from EHRs are currently being used to train models; data from ambient scribes may prove more accurate, and possibly provide some unexpected insights. There’s a dearth of data in healthcare compared to other fields, and AI might help generate material with which to train future models.
Other clinical documentation tasks that generative AI could help physicians with include:
Discharge summaries
Shift hand-off notes
Rounding notes
Quality documentation
Reading and summarizing faxes (the healthcare industry is responsible for 75% of the faxes sent!) and inserting the information into the appropriate area of the EHR. Currently, faxes, which often contain important information, are scanned into a random spot and are really hard to read.
Decreased Billing Documentation
AI could accurately capture time for billing. Physicians often have to tell insurers how long they spent with each patient and how much time they spent on charting in order to be reimbursed properly. You can imagine this is difficult to do accurately: you get interrupted, a consultant calls about the patient after you’ve finished your note, and you have to log in to the patient’s chart again and edit the billing piece to reflect that time. A system that could capture physician time accurately and relieve doctors of having to think about billing at all sounds like a dream to most physicians.
AI could improve accuracy of patient complexity billing. Part of how much a doctor gets paid is based on how complicated a patient is. To accurately reflect this, physicians have had to enter each diagnosis into a box. This is tedious and often becomes outdated. AI could more accurately represent patient complexity.
AI could decrease time spent on prior authorization. Have you ever sat on the phone with an insurance company? Do you find it soul-crushing? Imagine if that was part of your job, and every week you spent at least a few hours writing letters to insurance companies justifying a drug or test (for free, since that time is not reimbursed). That’s what physicians and their staff do: they spend 13 hours a week on prior authorizations, and 93% say prior authorizations lead to delays in patient care. McKinsey estimates AI can automate 50-75% of prior authorizations.
Physicians also have to do what’s called a “peer-to-peer” conversation, in which you try to convince the physician from the insurance company that your patient should get a treatment or test, and the physician from the insurance company tries to convince you that you’re wrong, even though they have never seen the patient and may not even be in your specialty. It takes at least 30 minutes and makes you lose faith in humanity. I’m not sure if AI can help with that piece, but it can at least help find evidence to support your position and write the letter to the insurance company referencing medical literature. Indeed, this was one of the first ways physicians adopted generative AI. However, insurance companies are also using generative AI to deny claims, which is currently the subject of lawsuits and congressional action.
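As an illustration of the letter-drafting use case, here is a minimal sketch, again assuming the OpenAI Python SDK. The case details and prompt are hypothetical, and any literature the model cites must be verified by the physician before sending:

```python
# Minimal sketch: drafting a prior-authorization appeal letter with an
# LLM. Assumes the OpenAI Python SDK; the case details are hypothetical,
# and any cited literature must be checked by the physician.
from openai import OpenAI

client = OpenAI()

case = {
    "request": "cardiac MRI",
    "indication": "new heart failure of unclear etiology",
    "denial_reason": "echocardiogram already performed",
}

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Draft a concise appeal letter to an insurer justifying a "
            f"{case['request']} for {case['indication']}. The stated "
            f"denial reason was: {case['denial_reason']}. Note the kinds "
            "of guideline evidence the physician should verify and cite."
        ),
    }],
)
print(resp.choices[0].message.content)  # draft for physician review
```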
Improved Medical Education
Doctors are largely chosen and tested on book knowledge, but communication, empathy, and decision-making skills are crucial for physicians. I’m hopeful AI will lead to more coherent and focused teaching through applications including:
Individualized learning via chatbots
Standardized evaluation of patient encounters and notes
Evaluation of procedural skills for trainees and practicing physicians
Patient-focus
I’ve worked with hundreds of physicians in my career, and I can honestly say that all but a handful were truly motivated by serving patients. They went into medicine to help people, and they will do lots of things that aren’t in their best interest personally to accomplish that. They’ll stay late to call the family of a dying patient, work for free in the middle of the night, do research on the weekends to figure out a particularly challenging set of symptoms, and call colleagues around the country to find the best clinical trial for a patient with a rare cancer. These are not exceptional examples; I see physicians do this all the time. In the OR, where stakes are high and efficiency is prioritized, starting a contentious discussion with “I know we all want what’s best for the patient” always leads to a thoughtful decision.
All this to say that if a tool actually helps patients, physicians will be in favor of it. If studies show AI can improve outcomes, physicians will use it.
Diagnostic and decision support
Having AI help with diagnosis is less of a slam-dunk than most non-clinicians think. Many doctors are wary of a computer helping them make clinical decisions for a few reasons:
We’ve had clinical decision support in our EHRs for years and it’s been pretty lackluster. It’s usually very rules-based and often pops up when it doesn’t apply. This has led to “alert fatigue”, with physicians overriding 96% of drug interaction alerts (half of which are appropriate overrides). Another review showed minimal improvements to patient safety with physician alerts.
Diagnosis is the fun part of medicine: the part that engages physicians intellectually and provides fulfillment. Doctors recognize that AI may help them think of less common diseases, but it’s not a part of their job they’re excited to give up. One study showed doctors doubted AI could perform better than clinicians, and another survey noted about half of physicians feel that AI can perform better than “many physicians” in clinical settings. The challenge is finding the physician who actually thinks he or she is among those “many physicians” who would benefit from AI assistance, similar to finding people who don’t think they’re above-average drivers.
Image diagnosis has the best data and the longest (though still relatively short) history. Radiologists are already using AI for diagnostic support with good results, and most studies show improved performance with AI and humans working together rather than AI as a replacement. Studies in dermatology, pathology, and other image-based fields are at the leading edge of this field.
Patient access to care
There is evidence that chatbots can improve conditions like depression, support smoking cessation, and reinforce lifestyle interventions like physical activity and healthy eating. Doctors know they can’t be available for a patient all the time, and these tools can be a great resource for patients.
AI could help increase access for patients in rural areas and in low-resource countries where there are no radiologists, for example.
Care personalization
Many of our decisions about patient care are rules-based and focused on large populations. AI offers hope that we can give patients information tailored to them, based on many more factors.
This could include cancer prognosis and treatment, medication tailoring, and preventative care recommendations.
Personalization could also include remote patient monitoring, suggesting foods based on a person’s microbiome and genetics, or deciding when to do surgery.
Barriers
Clearly the possible benefits of AI in healthcare are great. But there are significant hurdles to overcome to truly revolutionize healthcare.
A hammer in search of a nail
Communication is still key
A lot of healthcare involves relating to patients and colleagues, which is why 70% of healthcare adverse events involve communication. Even as an anesthesiologist, I spend the majority of my day talking to people. I build rapport with patients in less than 10 minutes, answer questions, and talk to patients about their kids while I put needles into their necks and backs. I talk to surgeons about the details of a patient’s life that might change my anesthesia plan: whether they live 6 hours away, for example, or have to take care of their elderly mother the next day. These conversations are what I enjoy about being a physician, and AI is very far from replacing these kinds of interactions.
I don’t need a computer to tell me that
A lot of healthcare involves decisions that are pretty obvious. A person comes in after falling down on the ski slope and has bone sticking out of their leg? The next patient comes in with chest pain a month after his open heart surgery? The physicians in the emergency room don’t need a computer to tell them what to do next in these situations.
Explainability
The black-box nature of AI algorithms is more of a problem in healthcare than in other fields. Physicians want to be able to understand and explain why they’re prescribing a specific medication whose risks include death. Chain-of-thought reasoning in AI is improving, but the ethical commitment to understanding is higher in healthcare than in other fields.
Bias
Data Bias
There’s a lot of data bias in healthcare. For example, the big dermatology databases mostly contain images of white skin, so models trained on them are much better at diagnosing skin cancer in that population than in patients with darker skin.
Societal Bias
Healthcare reflects general societal biases, so there’s also systemic bias in the models. We don’t have a great solution to either of these problems yet, and it’s generally the patients who have been underrepresented who bear the negative effects of AI bias in healthcare.
Data issues
Healthcare data is siloed, often lives in institutions that lack adequate technical support and storage, and has legal protections like HIPAA that can make sharing it more challenging. This means there is much less high-quality data available for model training, and models end up being trained on publicly available data, which is often limited to narrow populations like Medicare beneficiaries or the mostly male patients at VA hospitals.
Workforce training
To get people to actually use AI technology for clinical purposes, the healthcare workforce will need to learn when and how to use it. Most of the workforce is inadequately prepared, and training them properly will take time and money. Tech companies are already investing in upskilling their employees; ideally, healthcare would do the same.
Legal and Regulatory Challenges
Resolving the legal and regulatory issues related to what AI should and shouldn’t do, what it can do with and without physician supervision, and who bears ultimate responsibility is crucial to moving forward with AI in medicine. Professional societies and quality groups can help during this gap by creating sensible guidelines that physicians can point to when explaining why they allowed, or didn’t allow, AI to be part of their practice pattern.
The Future of AI for Physicians
As we ready ourselves for another technological push full of hype, my hope is that we learn from the EHR implementation and pay closer attention to how AI affects the people interacting with it on a daily basis: the physicians, nurses, patients, and other healthcare professionals who form the basis of our system. And that this time, when we look back on the people telling us:
“Hey, there’s this new technology that will improve efficiency, facilitate better patient care, and make your life easier.”
we think, “This time, they were actually right.”
Author Biography:
Dr. Gebauer is a physician, AI expert, and consultant with extensive administrative and quality experience. She is passionate about engaging physicians in healthcare AI and started a Slack group, video series, and website dedicated to the topic. She also writes a weekly Substack newsletter about the intersection of healthcare and AI. If you are a startup looking to work with her as a consultant, contact her on LinkedIn.
Thanks for reading!