
FDA must guard against bias in AI, focus on patient outcomes

By Andis Robeznieks, Senior News Writer

The AMA is warning the Food and Drug Administration (FDA) about the risks of software as a medical device (SaMD) that uses a type of augmented intelligence (AI) called machine learning. The agency should incorporate a focus on patient outcomes as a “foundational requirement” of technology development, physicians say.

The AMA detailed its concerns in a letter to FDA Acting Commissioner Norman E. Sharpless, MD. The AMA’s comments came in response to an FDA discussion paper that outlined a proposed framework for regulating SaMD products based on AI and machine learning (ML).

“Our vision is that with appropriately tailored regulatory oversight, AI/ML-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive,” the FDA discussion paper says.

According to the paper, the FDA’s “traditional paradigm of medical device regulation” was not designed to handle products with the potential to continuously adapt and change, so a new approach is needed “that facilitates a rapid cycle of product improvement” while still providing effective safeguards.

The AMA views the FDA’s decision to specifically address ML systems as a “step forward” but offered suggestions for improvement based on a five-point policy adopted by the AMA House of Delegates. The policy calls for the AMA to promote development of “thoughtfully designed, high-quality, clinically validated health care AI” that:

  • Is designed and evaluated in keeping with best practices in user-centered design
  • Is transparent
  • Conforms to leading standards for reproducibility
  • Identifies and takes steps to address bias and avoids introducing or exacerbating health care disparities, including when testing or deploying new AI tools on vulnerable populations
  • Safeguards patients’ privacy

With those points in mind, the AMA suggested the following in its response to the FDA.

Clarify whether the paper is specifically addressing ML. AMA Executive Vice President and CEO James L. Madara, MD, wrote that the agency must specify whether the discussion paper exclusively addresses ML or “the full gamut of AI systems.”

“We are concerned that it will create confusion where there is already significant complexity,” Dr. Madara wrote.

Provide appropriate balance concerning benefits and risks. The FDA discussion paper selectively highlights AI benefits while minimizing or failing to mention risks. These include the well-known hazard of introducing bias, a word that, Dr. Madara noted, does not appear anywhere in the paper.

An AMA report previously noted a popular tendency to see AI as a neutral or objective decision-support tool grounded in “a pristine mathematical process” independent of human judgment. But AI has the potential to “invisibly and unintentionally” reproduce and normalize biases, resulting in care models that “reflect the conditions only of the fortunate.”

Tie development goals to patient outcomes. Calling it a “fundamental principle missing throughout the document,” Dr. Madara noted that the FDA discussion paper’s list of good machine-learning practices does not include any reference to patient outcomes.

Standardize terminology. The AMA has been working with standard-setting bodies and others to standardize AI terminology to “drive consensus on shared definitions.” But some of the definitions used by the FDA in its paper “do not seem to comport with the definitions offered by experts who the AMA has consulted extensively over the past year,” Dr. Madara wrote.

Use a framework that addresses dimensions of ML risk. The paper notes that the FDA would continue to use the International Medical Device Regulators Forum risk table, which does not take into account ML-associated risks such as whether a fully autonomous system is being used without the ability of humans to intervene.

“We would stress that all of these should be directly addressed by the FDA as priority areas for developers as these represent a clear statement of what a key end-user requires as essential for the adoption of AI systems—particularly ML systems in clinical practice,” Dr. Madara’s letter says.
