Health care AI must boost the quadruple aim to move forward

4 MIN READ

By Andis Robeznieks, Senior News Writer

Use of augmented intelligence (AI) in health care is evolving rapidly, and issues regarding definitions of key terms, clinical efficacy and safety, equity, liability, usability and workflow integration are addressed in new AMA policy adopted at the 2019 AMA Annual Meeting. 


“Medical experts are working to determine the clinical applications of AI—work that will guide health care in the future. These experts, along with physicians and state and federal officials, must find the path that ends with better outcomes for patients,” said Gerald E. Harmon, MD, former chair of the AMA Board of Trustees. “We have to make sure the technology does not get ahead of our humanity and creativity as physicians.” 

The AMA House of Delegates (HOD) broke new ground last year in adopting the AMA’s initial policies on AI, referred to as “artificial intelligence” in popular culture. The report underlying the newly adopted policy summarized the need for additional policy and for continued work with stakeholders and policymakers to ensure that the perspective of physicians is heard as the technology continues to develop. 

The AMA’s policy is based on the principle that AI should advance the quadruple aim—meaning that it “should enhance the patient experience of care and outcomes, improve population health, reduce overall costs for the health care system while increasing value, and support the professional satisfaction of physicians and the health care team.” 

The newly adopted policy calls on the AMA to advocate: 

  • Oversight and regulation of health care AI systems based on the risk of harm and benefit, accounting for a host of factors, including but not limited to: intended and reasonably expected uses; evidence of safety, efficacy and equity, including addressing bias; AI system methods; level of automation; transparency; and conditions of deployment. 
  • Payment and coverage for all health care AI systems conditioned on complying with all appropriate federal and state laws and regulations, including those governing patient safety, efficacy, equity, truthful claims, privacy and security as well as state medical practice and licensure laws. 
  • Payment and coverage for health care AI systems intended for clinical care conditioned on clinical validation, alignment with clinical decision-making that is familiar to physicians, and high-quality clinical evidence. 
  • Payment and coverage for health care AI systems that: are informed by real-world workflow and human-centered design principles; enable physicians to prepare for and transition to new care delivery models; support effective communication and engagement between patients, physicians, and the health care team; seamlessly integrate clinical, administrative, and population health management functions into workflow; and seek end-user feedback to support iterative product improvement. 
  • Payment and coverage policies that advance affordability of and access to AI systems that are designed for small physician practices and patients and are not limited to large practices and institutions. Government-conferred exclusivities and intellectual property laws meant to foster innovation should be appropriately balanced with the need for competition, access and affordability. 

The AMA will further advocate: 

  • That where a mandated use of AI systems prevents mitigation of risk and harm, the individual or entity issuing the mandate must be assigned all applicable liability. 
  • That developers of autonomous AI systems with clinical applications (screening, diagnosis, treatment) are in the best position to manage issues of liability arising directly from system failure or misdiagnosis, and that they must accept this liability through measures such as maintaining appropriate medical liability insurance and through their agreements with users. 
  • That health care AI systems subject to nondisclosure agreements concerning flaws, malfunctions, or patient harm (referred to as gag clauses) must not be covered or paid for, and that the party initiating or enforcing the gag clause assumes liability for any resulting harm. 

The AMA also will work with national medical specialty societies and state medical associations to: 

  • Identify areas of medical practice where AI systems would advance the quadruple aim. 
  • Leverage existing expertise to ensure that clinical applications of AI systems are clinically validated and assessed by medical experts. 
  • Outline new professional roles and capacities required to aid and guide health care AI systems. 
  • Develop practice guidelines for clinical applications of AI systems.  
  • Support federal and state interagency collaboration to advance the broader infrastructural capabilities and requirements necessary for AI solutions in health care to be sufficiently inclusive to benefit all patients, physicians and other health care stakeholders. 

The AMA also will oppose: 

  • Policies by payers, hospitals, health systems or governmental entities that mandate use of health care AI systems as a condition of licensure, participation, payment, or coverage.  
  • The imposition of costs associated with acquisition, implementation and maintenance of health care AI systems on physicians without sufficient payment. 

The HOD’s newly adopted policy also says “there should be federal and state interagency collaboration with participation of the physician community and other stakeholders in order to advance the broader infrastructural capabilities and requirements necessary for AI solutions in health care to be sufficiently inclusive to benefit all patients, physicians and other health care stakeholders.” 

Lastly, the AMA will advocate that AI be designed to enhance human intelligence and the patient-physician relationship rather than replace them. 
