Augmented intelligence, or artificial intelligence (AI), in health care can help manage and analyze data, make decisions, and conduct conversations, and it is likely to change physicians’ roles and everyday practices. It is key that physicians be able to adapt to changes in diagnostics, therapeutics, and practices of maintaining patient safety and privacy. Physicians also need to be aware of ethically complex questions about the implementation, uses, and limitations of AI in health care.
“How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?” This article responds to a hypothetical case involving an assistive AI surgical device and focuses on potential harms emerging from interactions between humans and AI systems. Informed consent and responsibility for uses of AI in health care are also discussed, specifically how responsibility should be distributed among professionals, technology companies, and other stakeholders.
“How Should AI Be Developed, Validated and Implemented in Patient Care?” Should an AI program that appears to have a better success rate than human pathologists be used to replace or augment humans in detecting cancer cells? This article suggests that some concerns, such as the “black-box” problem and automation bias (overreliance on clinical decision support systems), are not significant from a patient’s perspective, but it notes that expertise in AI is required to properly evaluate test results.
“Emerging Roles of Virtual Patients in the Age of AI.” Today’s web-enabled and virtual approach to medical education differs from the 20th century’s Flexner-dominated approach. Lectures now receive less emphasis, with more placed on learning through early clinical exposure, standardized patients, and other simulations. This article reviews literature on virtual patients (VPs) and their underlying virtual reality technology, examines VPs’ potential through the example of psychiatric intake teaching, and identifies promises and perils posed by VP use in medical education.
“What Are Important Ethical Implications of Using Facial Recognition Technology in Health Care?” Applications of facial recognition technology (FRT) in health care settings have been developed to identify and monitor patients as well as to diagnose genetic, medical, and behavioral conditions. The use of FRT in health care underscores the importance of informed consent, data input and analysis quality, effective communication about incidental findings, and potential influence on patient-physician relationships. Privacy and data protection present particular challenges for health applications of FRT.
In the journal’s February podcast, guests include:
- Kimberly Lomis, MD, vice president for Undergraduate Medical Education Innovations at the AMA, who coordinates the work of the Accelerating Change in Medical Education Consortium.
- Christopher Khoury, vice president of the AMA’s Environmental Intelligence and Strategic Analytics unit.
In the episode, they discuss how educators can adapt to rapid change in the ethically complex field of AI and health care. Listen to previous episodes of the podcast, “Ethics Talk,” or subscribe on iTunes or other services.
The journal’s editorial focus is on commentaries and articles that offer practical advice and insights for medical students and physicians. Submit a manuscript for publication. The journal also invites original photographs, graphics, cartoons, drawings and paintings that explore the ethical dimensions of health or health care.
Upcoming issues of the AMA Journal of Ethics will focus on health care organizations and community development, and innovating nanoethics. Sign up to receive email alerts when new issues are published.