AMA Update covers a range of health care topics affecting the lives of physicians, residents, medical students and patients. From private practice and health system leaders to scientists and public health officials, hear from the experts in medicine on COVID-19, medical education, advocacy issues, burnout, vaccines and more.
Featured topic and speakers
The use of generative AI, like ChatGPT, in medicine has the potential to unburden physicians and help restore the patient-physician relationship. However, regulatory uncertainty and liability concerns are barriers to adoption. American Medical Association President Jesse Ehrenfeld, MD, MPH, joins to discuss the many ways generative AI will change health care. AMA Chief Experience Officer Todd Unger hosts.
Speaker
- Jesse Ehrenfeld, MD, MPH, president, AMA
Transcript
Unger: Hello and welcome to the AMA Update video and podcast. Earlier this year, we took a broad look at the impact that generative AI like ChatGPT might have on health care. And today we're going to dig a little bit deeper and explore how it might change the way physicians practice and care for their patients. With me today to discuss that is AMA President Dr. Jesse Ehrenfeld in Milwaukee, Wisconsin. I'm Todd Unger, AMA's chief experience officer in Chicago. Welcome back, Dr. Ehrenfeld.
Dr. Ehrenfeld: Yeah, it's good to talk to you, Todd. Thanks for having me.
Unger: Well, a lot of the conversation around generative AI has focused on two things—number one, the ability to potentially diagnose patients, and second, to help alleviate a lot of the administrative burdens that physicians face. I'd like to start by getting your thoughts on AI's diagnostic capabilities. What is currently in the cards for it to be able to do and not do?
Dr. Ehrenfeld: Well, it's a great place to start. Even the most advanced algorithms in AI-enabled tools still can't diagnose and treat diseases. And that's really the wrong approach.
The probabilistic algorithms are just too narrow. They can't substitute for the judgment, the nuance or the thought that a clinician brings. And so I think there's a lot of opportunity to think about these tools as a copilot, but not an autopilot, particularly in the diagnostic realm.
And that's why the FDA's forthcoming regulatory framework for AI-enabled devices proposes much more stringent oversight of AI tools that make a diagnosis or recommend a treatment, especially algorithms that continue to adapt or learn over time, the so-called continuous learning systems.
Now, algorithms are great for solving a textbook patient or a very narrow clinical question. And that's why there was a nice description in the literature of how ChatGPT could, so to speak, pass the USMLE. The USMLE is full of such cases. I know. I write them.
But patients, they're not a standardized question stem. They're humans with thoughts, with emotions, with complex medical, social, psychiatric backgrounds. And I'll tell you, they rarely follow the textbooks. And it's that complicated person that requires the patient-physician relationship. And it's that sacred bond that emphasizes the individual.
Now, a lot of my patients may come in for similar surgeries, similar anesthesia care. I'm an anesthesiologist. But most of my patients have different goals. And a successful outcome often isn't the same from one patient to the next. So even if a computer could analyze all of this, the algorithm is still going to give you a generic answer.
Unger: What a great answer to point out the complexities that we're facing right now. Let's turn to the other part of where we started, which is the administrative burdens that are so significant for physicians. Where do you see AI playing a role there?
Dr. Ehrenfeld: Well, there's a huge opportunity. And we know from our survey data that about 20% of U.S.-based practices are already using AI. But this is where they're using it: to unburden these administrative problems. They're using it for supply chain management, scheduling, optimization.
Now, from our technical and regulatory perspective, the AMA strongly feels that AI should be leveraged first to unburden physicians. In other words, use the AI to detether us from our computers, help bring us back to our patients and restore the patient-physician relationship.
This unburdening role is something that AI technologies, machine learning and automation have been playing in medicine for years now. Practices that have embraced these technologies have seen some really impressive results.
And unburdening physicians is not only how AI can best contribute, it's also one of the places where these tools can make the biggest difference in supporting practices where they need the most help. One in two physicians now say that they're burnt out. One in five physicians say that they're planning to leave the practice of medicine in the next two years.
Given all that, we have to have new solutions. We need new tools that take advantage of the latest generations of these AI models that can help us with our notes, help us with paperwork, help us with these other administrative requirements.
Unger: What a great priority to use AI to unburden physicians as you talked about it. And again, back to the complexities that you laid out, no matter how generative AI is used to ultimately assist physicians, there is one thing that is certain, that it eventually is going to make a mistake. And when it does, who's going to be liable for a mistake like that?
Dr. Ehrenfeld: It's a really important issue. Not all digital health innovations live up to their promises. And if I lose a patient because of an algorithm, that raises the inherent question of liability for physicians. Where the liability is placed and how it is managed has particular importance in health care.
Physicians and the health care industry are likely to be on the hook when things go wrong, much more so for me than for the developer or the innovator. There is a current federal proposal that would hold physicians solely liable for the harm resulting from an algorithm if I rely on that algorithm in my clinical decision-making.
We don't think that's the right approach. We think that the liability ought to be placed with the people who are best positioned to mitigate the harm. And that is likely going to be the developer, the implementer, whoever buys these things, often not the end user, the clinician.
Unger: What a huge point. Because if physicians are more likely to be held accountable for mistakes like that, I imagine that's going to have a big impact on how enthusiastic or hesitant a physician might be to use a technology like this in their practice.
Dr. Ehrenfeld: It'll kill the market. Liability is a potential barrier to the uptake of AI. If I can't rely on the output of a system as an input into my decision-making because I'm worried about liability, then justifying its use in my practice is going to be really, really difficult. So uncertainty within the regulatory system makes these questions of liability much, much more complex than they appear on the surface.
We agree with the FDA and others that the existing regulatory paradigm for hardware medical devices just doesn't work. It's not well suited to appropriately regulate AI-based devices, software, software as a medical device. So we support the FDA's efforts to explore a new approach to regulate these tools. And we're certainly looking to partner with the FDA to make sure that whatever we do, we only have safe and effective products in the marketplace.
Unger: Now, we've talked a lot about how physicians might feel cautious or not about relying on generative AI. Do you have any sense for how patients are feeling about this technology?
Dr. Ehrenfeld: There's a lot of discomfort among Americans with the idea of AI being used in their own health care. A 2023 Pew Research Center poll found that 60% of Americans would feel uncomfortable if their own health care provider relied on AI to do things like diagnose disease or recommend a treatment.
So there has to be more done through regulation and with the developer community to strengthen trust in these tools. Trust is fundamental to what we do in health care. It is fundamental to the patient-physician relationship. And preserving patients' trust in an increasingly digital world is absolutely crucial.
There are also concerns about the use or misuse of health data, and those extend to the AI space. An AMA survey from 2021 showed that 94% of patients want strong laws to govern the use of their health data. So when you think about all of these things together, it's not a slam dunk. We need to make sure that we preserve the trust of our patients and protect the privacy of their health data.
Unger: Absolutely. So given the complexities that we've talked about and the promise of this kind of technology, what advice do you have for physicians about how they can play a role going forward to help AI reach its potential in medicine?
Dr. Ehrenfeld: Well, it's an exciting time. We're seeing all of this rapid development, a lot of hype, a lot of chatter, a lot of experimentation going on.
But during this particular period in history, when AI tools and regulations are rapidly evolving, it is more important than ever for physicians to make their voices heard, especially on issues like data accuracy, health equity, privacy and liability. We know that these algorithms can also be flawed. They can be based on faulty data or studies that are eventually overturned.
The era of big-data personalized medicine is still in its infancy, with most large data sets consisting of only basic demographic information and ICD codes. We're just now seeing the importance of collecting data on the social determinants of health.
Algorithms that are based on incomplete data, however, can cause harm and even perpetuate systemic disparities. And when it comes to privacy, we know that we have to make sure, again, that we protect privacy to restore trust. So it's an exciting time. There's a lot ahead. But we've got to be careful as we step into this space.
Unger: Dr. Ehrenfeld, thank you so much. That's an amazing perspective. That's it for today's episode. We'll be back soon with another AMA Update. In the meantime, you can find all our videos and podcasts at ama-assn.org/podcasts. Thanks for joining us today. Please take care.
Disclaimer: The viewpoints expressed in this podcast are those of the participants and/or do not necessarily reflect the views and policies of the AMA.