CHICAGO — As augmented intelligence tools continue to emerge in medical care, the American Medical Association (AMA) adopted policy during the Annual Meeting of its House of Delegates aimed at maximizing trust in, and increasing transparency around, how these tools arrive at their conclusions. Specifically, the new policy calls for explainable clinical AI tools that include safety and efficacy data. To be considered explainable, these tools should provide explanations of their outputs that physicians and other qualified humans can access, interpret, and act on when deciding on the best possible care for their patients.
Furthering the AMA’s support for greater oversight and regulation of augmented intelligence (AI) and machine learning (ML) algorithms used in clinical settings, the new policy calls for an independent third party, such as a regulatory agency or medical society, to determine whether an algorithm is explainable, rather than relying on claims made by its developer. The policy states that explainability should not be used as a substitute for other means of establishing the safety and efficacy of AI tools, such as randomized clinical trials. Additionally, the new policy calls on the AMA to collaborate with experts and interested parties to develop and disseminate a list of definitions for key concepts related to medical AI and its oversight.
“With the proliferation of augmented intelligence tools in clinical care, we must push for greater transparency and oversight so physicians can feel more confident that the clinical tools they use are safe, based on sound science, and can be discussed appropriately with their patients when making shared decisions about their health care,” said AMA Board Member Alexander Ding, M.D., M.S., M.B.A. “The need for explainable AI tools in medicine is clear, as these decisions can have life-or-death consequences. The AMA will continue to identify opportunities where the physician voice can be used to encourage the development of safe, responsible, and impactful tools used in patient care.”
The AMA Council on Science and Public Health report that served as the basis for this policy noted that when clinical AI algorithms are not explainable, the clinician’s training and expertise are removed from decision-making, and clinicians are presented with information they may feel compelled to act upon without knowing where it came from or being able to assess the accuracy of its conclusions. The report also noted that intellectual property concerns, when offered as a rationale for not explaining how an AI device produced its output, should not nullify a patient’s right to transparency and autonomy in making medical decisions. To this end, the new policy states that while intellectual property should be afforded a certain level of protection, concerns about infringement should not outweigh the need for explainability in AI with medical applications.
About the American Medical Association
The American Medical Association is the physicians’ powerful ally in patient care. As the only medical association that convenes 190+ state and specialty medical societies and other critical stakeholders, the AMA represents physicians with a unified voice to all key players in health care. The AMA leverages its strength by removing the obstacles that interfere with patient care, leading the charge to prevent chronic disease and confront public health crises, and driving the future of medicine to tackle the biggest challenges in health care.