The technological capacity exists to use augmented intelligence (AI) algorithms and tools to transform health care, but real challenges remain in ensuring that tools are developed, implemented and maintained responsibly, according to a JAMA Viewpoint column, “Artificial Intelligence in Health Care: A Report From the National Academy of Medicine.”
“The challenges [to use of AI] are unrealistic expectations, biased and nonrepresentative data, inadequate prioritization of equity and inclusion, the risk of exacerbating health care disparities, low levels of trust, uncertain regulatory and tort environments, and inadequate evaluation before scaling narrow AI,” the opinion piece concludes. Note that AI, defined here as augmented intelligence, is more commonly known as artificial intelligence.
The Viewpoint column was co-written by two co-authors of the National Academy of Medicine (NAM) report, AI in Healthcare: The Hope, The Hype, The Promise, The Peril. The 2019 NAM publication—a mix of caution and guarded optimism—presents what’s known about AI in the clinical setting and serves as a guide on how the field can move forward responsibly and in a way that benefits all patients.
AMA experts offered guidance, advice and comments throughout the process. Learn more about the AMA’s commitment to helping physicians harness AI in ways that safely and effectively improve patient care.
The JAMA Viewpoint column was written by the lead authors of the report from Vanderbilt University and Stanford medical schools and the National Academy of Medicine.
7 key takeaways on health care AI
The NAM report recommends that people developing, using, implementing and regulating health care AI do seven key things.
Promote population-representative data with accessibility, standardization and quality. Representative data are imperative to ensure accuracy for all populations. While vast amounts of data are now available, issues remain with data quality, appropriate consent, interoperability and the scale of data transfers.
Prioritize ethical, equitable and inclusive medical AI while addressing explicit and implicit bias. Underlying biases must be scrutinized to understand a tool’s potential to worsen or address existing inequities, and to determine whether and how that tool should be deployed.
Contextualize the dialogue of transparency and trust, which means accepting differential needs. AI developers, implementers, users and regulators should collaboratively define guidelines that clarify the level of transparency needed across a spectrum, with a clear separation of data, performance and algorithmic transparency.
Focus in the near term on augmented intelligence rather than autonomous AI agents. Fully autonomous AI concerns the public and faces technical and regulatory challenges. Augmented intelligence—supporting data synthesis, interpretation and decision-making by clinicians and patients—is where the opportunities lie now.
Develop and deploy appropriate training and educational programs. Curricula must be multidisciplinary and engage AI developers, implementers, health care system leadership, frontline clinical teams, ethicists, humanists, patients and caregivers.
Leverage frameworks and best practices for learning health care systems, human factors and implementation science. Health care delivery systems should have a robust and mature information technology governance strategy before embarking on a substantial AI deployment and integration.
Balance innovation with safety through regulation and legislation to promote trust. AI developers, health system leaders, clinical users, and informatics and health IT experts should evaluate deployed clinical AI for effectiveness and safety based on clinical data.
To learn more, read this JAMA article, “Artificial Intelligence: Promise, Pitfalls and Perspective.”