Physicians have guarded enthusiasm for the use of augmented intelligence (AI) in health care, with many seeing its potential to reduce documentation time and administrative burden. But they are also concerned about health care AI’s impact on the patient-physician relationship and patient privacy.
Nearly two-thirds of the 1,081 physicians responding to an AMA survey (PDF) said they see advantages to using AI, though only 38% said they were using it when the survey was administered last summer. Meanwhile, 41% said they were equally excited and concerned about potential uses of AI (often called artificial intelligence) in health care.
“Whatever the future of health care looks like, patients need to know there is a human being on the other end helping guide their course of care,” said AMA President Jesse M. Ehrenfeld, MD, MPH. “That’s essential.”
Dr. Ehrenfeld, an anesthesiologist who co-chairs the Association for the Advancement of Medical Instrumentation’s AI committee, was the keynote speaker at the Healthcare Information and Management Systems Society’s AI in Healthcare Forum last month.
Transparency is key
The AMA is shaping the development of health care AI by setting standards and a common language, and has released a set of AI Principles (PDF).
“The purpose of these principles is to provide continued guidance to physicians and developers on how to best engage with and design new AI-enabled technologies with the understanding that policy development related to AI will likely continue to develop given the rapid pace of change,” Dr. Ehrenfeld said.
“Above all else, health care AI must be designed, developed and deployed in a manner which is ethical, equitable, responsible and transparent,” he added.
Transparency is especially needed when insurers use AI or other algorithmic-based systems to make claim determinations or set coverage limits, the AMA says.
The use of these systems or programs must be disclosed to affected parties, and payers using automated decision-making systems should make statistics on their systems’ approval, denial and appeal rates available on their websites. Payers should also provide clear evidence that their systems do not discriminate, and prior to issuing an adverse determination, the treating physician must have the opportunity to discuss the medical necessity of the care directly with a human.
“We urge that payers’ use of automated decision-making systems not reduce access to needed care, nor systematically withhold care from specific groups,” Dr. Ehrenfeld said.
“Steps should be taken to ensure that these systems are not overriding clinical judgment and do not eliminate human review of individual circumstances,” he added. “It is critical that clinical decisions influenced by AI must be made with specified human intervention points during the decision-making process.”
Dr. Ehrenfeld also discussed the importance of applying “an equity lens” in the development of AI tools from the beginning stages.
The AMA is calling on health care AI product developers to conduct post-market surveillance aimed at ensuring continued safety, performance and equity, and to make this information available to potential purchasers and physician users so they can appropriately evaluate the technology.
“It’s going to be on the manufacturers of these products to set things up to identify problems, and I would argue that to gain the trust of consumers they will need an easy way to report problems and get answers if there is a problem,” Dr. Ehrenfeld said.
“Now this may not fall into a regulatory framework under FDA [Food and Drug Administration] authority, but it could become a market differentiator to identify from all the products in the oncoming tidal wave,” he added.
How to build physician trust in AI
For physicians and other health professionals to put their trust in health care AI products, Dr. Ehrenfeld urged policymakers to:
- Provide clear and consistent regulatory guidance that ensures safety and performance.
- Show progress on pathways toward payment for high-quality, high-value AI.
- Limit physicians’ liability exposure for AI performance.
- Ensure that regulators and AI developers work together to build trust in AI data use.
According to the AMA survey, 78% of respondents said they wanted to see:
- Clear information and references to help explain how AI decisions are made.
- Demonstrated usefulness and efficacy among similar practices.
- Information about how the AI’s performance is monitored.
The AMA survey included responses from 524 physicians who described themselves as “tech adopters” and 556 who described themselves as “tech averse.” Across both groups, these were the shares of physician respondents who thought AI would be most helpful in each area:
- Diagnostic ability: 72%.
- Work efficiency: 69%.
- Clinical outcomes: 61%.
Also, 56% of respondents thought AI would be helpful in improving care coordination, patient convenience and patient safety. However, 39% were concerned about AI’s impact on the patient-physician relationship and 41% about its impact on patient privacy.
Enthusiasm was high for these AI use cases:
- Documentation of billing codes, medical charts or visit notes: 54%.
- Automation of insurance prior authorization: 48%.
- Creation of discharge instructions, care plans or progress notes: 43%.
To best engage in the development and deployment of AI tools, physicians, residents and medical students need a foundational knowledge base. They can find it in “AI in Health Care,” a seven-part AMA Ed Hub™ online activity series. The first two modules, “Introduction to Artificial Intelligence (AI) in Health Care” and “AI in Health Care: Methodologies,” are available now, and new modules will be released in the coming months.