The American Medical Association (AMA) has been a leading voice calling for new policy and guidance on augmented intelligence (AI) in health care to address the technology’s rapid expansion, and is encouraged by the administration’s release of America’s AI Action Plan. Health care AI has a significant opportunity to transform patient care, improve health outcomes and reduce physician burden. The AMA agrees that a lack of public and professional trust can significantly hinder the adoption of health care AI, particularly in clinical settings.
As the administration continues to execute on America’s AI Action Plan, the AMA outlines several key considerations to ensure the development, deployment and use of AI in health care is transparent, responsible, ethical and equitable.
Physicians must be full partners at every stage of the AI lifecycle.
- It is imperative that physicians be full partners at every stage of the AI lifecycle including design, development, governance, rulemaking, standards-setting, post-market surveillance and clinical integration.
- Clinical experts are uniquely qualified to judge whether an AI tool is valid for a given indication, fits within the standard of care and supports rather than disrupts the patient-physician relationship. Any efforts to develop new federal policies on and approaches to AI oversight should be undertaken in a transparent manner that prioritizes involvement not only from developer and industry stakeholders but also from end users, including physicians and patients.
- The AMA supports efforts to accelerate the development and adoption of national standards for AI systems to ensure the safety and performance of health care AI—with physicians meaningfully represented in the process.
- Concerns over AI liability continue to be a top issue for physicians and create hesitation around adoption. The AMA supports efforts to ensure that liability for AI is appropriately apportioned and that physician liability for AI errors and performance issues is limited.
A coordinated, transparent whole-government approach is necessary.
- The AMA strongly agrees that a coordinated, transparent regulatory approach is needed, especially one that provides clarity and consistency for developers, deployers and end users—very importantly, including physicians and patients.
- The FDA’s approach to regulating AI-enabled medical devices is still evolving given the new regulatory challenges posed by Software as a Medical Device (SaMD). Stakeholders, including physicians, need to consider how best to modernize FDA authority so the agency can appropriately regulate AI-enabled medical devices and ensure the ongoing safety and performance of these tools.
- Health care AI is unique in that its risks to the health and well-being of our patients are potentially high. Federal entities must act in concert to create a coordinated and coherent oversight ecosystem. Fragmented or duplicative rules slow innovation, confuse clinicians and leave critical gaps unaddressed.
- Additionally, tools and systems that can impact medical decision making or patient access to care should be subject to rigorous testing and appropriate oversight to mitigate patient harm. A coordinated effort that reflects ongoing testing of AI and sharing of data and results prior to public deployment is critical to ensuring that the significant risks to patient health and well-being are not an afterthought. The “try-first” mentality for applications of AI should be clarified and reserved for testing environments only, as the risks to patient health and well-being are simply too significant in real-world patient care settings.
- The AMA encourages state and federal policymakers to work in close coordination to avoid fragmentation and to create a balanced regulatory environment that fosters innovation and prioritizes safety, accountability and public confidence in AI systems. Federal regulation has the opportunity to establish clear national standards and risk mitigation frameworks, while states can play a key role in implementation, oversight and addressing region-specific concerns.
Secure data that is free from bias will enhance trust.
- The handling of data will be critical to building trust in AI among both physicians and patients. That includes how data is used to develop and train AI models, what data is used, and how potential bias is mitigated to ensure effective, safe AI tools.
- Privacy is paramount in health care. Without strong de-identification and consent safeguards, open data and open-source AI pose a significant risk of compromising privacy. The AMA advocates for strong governance to ensure data privacy and security—including transparency to patients and physicians about how data is used.
- The AMA recognizes the value of high-quality, usable data for the development of AI. Data disparities and poor data quality contribute to AI bias. However, data access, use and exchange must comport with individuals’ preferences and with state and federal laws. Privacy-preserving technical frameworks are lacking across electronic health records and health information exchanges. The AMA welcomes a focus on transparent AI and data.
- The call for “secure-by-design” AI and stronger cybersecurity for critical infrastructure aligns with AMA priorities to protect medical practices and patient data. The AMA encourages specific emphasis on and commitment to individual data protection, further ensuring protection for patients.
- Bias in AI is a risk widely recognized across stakeholders, and in health care it can lead to patient harm. A foundational principle of the plan is ensuring that AI systems are free from ideological bias and designed to pursue objective truth, which the AMA agrees is crucial for trustworthy AI in sensitive areas such as health care and medical research. However, eliminating references to misinformation and diversity, equity and inclusion in risk management frameworks may limit attempts to appropriately address AI bias and discrimination. Maximum effort to mitigate the risks of bias in AI systems is critical to enhance trust and improve care.
Upskilling the physician workforce is critical to advancing adoption.
- The AMA supports efforts to increase the focus on education surrounding AI. We recognize that AI is playing an increasingly important role at all stages of the medical education continuum, both as a tool for educators and learners and as a subject of study in and of itself. The AMA is well-positioned to lead upskilling efforts with key initiatives focused on introductory AI concepts, establishing AI governance, and evaluating and implementing AI tools in practice (PDF).
- Investments in physician education, both in medical schools and for currently practicing physicians, will help increase understanding, allow for appropriate assessment of AI tools by physicians, and will help to enhance trust and comfort with adoption.
- Medical educators are enthusiastic about the potential to leverage AI to improve medical training and promote precision education across the continuum of medical school, residency and practice.
The AMA remains committed to guiding the ethical, safe and effective integration of AI into health care, ensuring that all regulatory efforts prioritize the needs and safety of patients and physicians. The approach to AI oversight and regulation must ensure safety and performance above all else; these must not be sacrificed for the sake of speed to market.
Assurances of safety and performance, mitigation of bias, data privacy protections and appropriate apportionment of liability for AI errors are critical to building much-needed trust among patients and physicians and will be instrumental in ultimately increasing adoption.
Learn more about the AMA’s AI policy (PDF).