
Make sure health AI works for patients and physicians

The AMA House of Delegates outlines steps that must be taken to ensure the technology remains an asset, even as health AI keeps evolving.

By Andis Robeznieks, Senior News Writer

AMA News Wire

Jun 12, 2025

Delegates meeting in Chicago this week acted to deepen the AMA’s work to ensure physician voices are integrated into the creation and refinement of all technological aspects of medicine—from telehealth to AI to EHRs.

In the fast-moving and promising arena of augmented intelligence (AI)—often called artificial intelligence—delegates at the 2025 AMA Annual Meeting took several actions to strengthen existing AMA policy (PDF) and ensure that the technology is “explainable,” validated and well defined, and is not used to conduct medical research fraud.

“With the proliferation of augmented intelligence tools in clinical care, we must push for greater transparency and oversight so physicians can feel more confident that the clinical tools they use are safe, based on sound science, and can be discussed appropriately with their patients when making shared decisions about their health care,” said AMA Trustee Alexander Ding, MD, MS, MBA, a diagnostic and interventional radiologist. 

“The need for explainable AI tools in medicine is clear, as these decisions can have life-or-death consequences,” Dr. Ding added. “The AMA will continue to identify opportunities where the physician voice can be used to encourage the development of safe, responsible and impactful tools used in patient care.”

The latest AMA survey (PDF) of physicians shows that they are largely enthusiastic about health AI’s potential, with 68% seeing at least some advantage to the use of AI in their practice, up from 65% in 2023. Meanwhile, the share of physicians using some type of AI tool in practice rose from 38% in 2023 to 66% in 2024. 

However, there are still key concerns as physicians continue to explore how these tools will impact their practices. Implementation guidance and research, including clinical evidence, remain critical to helping physicians adopt AI tools.

On the issue of AI and medical research, an AMA Council on Science and Public Health report presented at the meeting notes that medical and scientific journals are experiencing an “arms race” between software that can detect when a paper was produced using generative AI and the generative tools used to produce such papers.

“Research misconduct undercuts trust and has a corrosive impact on the practice of medicine,” the council’s report says. “While cases of fraud have happened infrequently in the past, rising rates of retractions have resulted in concerns that widespread access to AI tools which quickly generate text and images are causing fraud to become more commonplace.”

The report notes the need to stay on top of the latest developments, but it explains that crafting meaningful policy in an ever-shifting environment is also a difficult task.

Ultimately, the House of Delegates modified existing AMA policy to support:

  • Policies requiring authors to disclose the use of generative artificial/augmented intelligence programs so that content can be reviewed for intentional and unintentional scientific misrepresentation.
  • Efforts to disseminate accurate and valid research findings, and to combat research and publication fraud, in the face of rapidly advancing technology. 

A separate AMA council report examines the concept of “explainability” in AI and machine-learning algorithms. Explaining how an algorithm arrived at its conclusion is seen as one way to alleviate the mistrust many have in decisions derived from these algorithms.

“Ironically, the concept of explainability is hard to explain,” the report says. But “the appeal of explainability is clear—particularly in medicine, where decisions can have life or death consequences.”

To maximize the impact and trustworthiness of AI and machine-learning tools in clinical settings, delegates adopted new policy to recognize that:

  • Explainable AI with safety and efficacy data should be the expected form of AI tools for clinical applications; exceptions should be rare and justified, and should require, at minimum, safety and efficacy data prior to adoption or regulatory approval.
  • To be considered “explainable,” an AI device’s explanation of how it arrived at its output must be interpretable and actionable by a qualified human. Claims that an algorithm is explainable should be adjudicated only by independent third parties, such as regulatory agencies or appropriate specialty societies, rather than by declaration from its developer.
  • Explainability should not be used as a substitute for other means of establishing safety and efficacy of AI tools, such as through randomized clinical trials.
  • Concerns of intellectual property infringement, when provided as a rationale for not explaining how an AI device created its output, do not nullify a patient’s right to transparency and autonomy in medical decision-making. While intellectual property should be afforded a certain level of protection, concerns of infringement should not outweigh the need for explainability in AI with medical applications.

Delegates also directed the AMA to “collaborate with experts and interested parties to develop and disseminate a list of definitions for key concepts related to medical AI and its oversight.”

From AI implementation to EHR adoption and usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to doctors—not a burden. 

Health AI must be safe, validated

AI systems—including large language models and generative platforms—are increasingly deployed in clinical decision support, patient communications, education, documentation and public health messaging. And as AI-generated outputs become virtually indistinguishable from human content, the House of Delegates called for the creation of a safety infrastructure.

To this end, they directed the AMA to “recognize the need for clear disclosure” to physicians or other health professionals “whenever AI is used in the delivery of clinical care, in order to ensure the safe, transparent, and accountable use of AI-generated content in clinical and public-health settings.”

Also, the AMA will call for entities developing or deploying health AI systems—including generative AI, foundation models, neural networks and other machine-learning approaches—to: 

  • Establish and maintain a risk-based governance approach proportionate to the system’s intended use and potential harm.
  • Implement relevant security measures and privacy protections.
  • Provide for clinically useful transparency, such as clear labeling of AI-generated outputs for end users, including disclosure of the algorithm’s level of confidence in those outputs.
  • Implement risk management approaches throughout the AI lifecycle with particular emphasis on appropriate monitoring of the system for safety, clinical effectiveness, accuracy, and reliability, to help ensure ethical and regulatory alignment across all deployment contexts.

Delegates also noted the lack of a comprehensive framework for integrating evidence-based AI tools into clinical workflows to ensure effective implementation and to address potential risks of misinformation or misuse.

To this end, they directed the AMA to “collaborate with stakeholders, including physicians, academic institutions and industry leaders, to create a report by A-26 [the 2026 AMA Annual Meeting] with recommendations for how AI tools used in clinical-decision support convey transparency in the quality of medical evidence and the grading of medical evidence to physicians and advanced care practitioners so clinical recommendations can be accurately verified and validated.”

Explore further with this recent Leadership Viewpoints column by former AMA President Jesse M. Ehrenfeld, MD, MPH, on prioritizing humanity in our AI future.

Read about the other highlights from the 2025 AMA Annual Meeting.
