6 keys to improve AI for treatment choices

A flood of scientific discovery about new medications to treat illness and maintain patient wellness seems to roll in each day. Computer-based clinical decision-support systems can keep physicians up to date, but systems that use augmented—or artificial—intelligence (AI) are still viewed with skepticism.

This includes machine learning, a subset of AI that leverages data to teach machines how to act—whether by demonstration, imitation or example.

These systems “have challenges of credibility and adoption,” according to two physician researchers writing in a JAMA Viewpoint essay. They also maintain that “complexity and lack of usability thwart use.”

AI holds promise for health care improvement, but physicians’ perspective is needed in its development. The AMA House of Delegates adopted policy in 2018 that seeks to seize the opportunities offered by AI to provide a transformative set of tools to help patients and physicians.

AI policy adopted at the 2018 AMA Annual Meeting calls for identifying opportunities to integrate the perspective of practicing physicians into AI's development, design, validation and implementation. It also encourages education for patients, physicians, medical students, other health care professionals and health administrators to promote greater understanding of the promise and limitations of health care AI.

The authors of the JAMA essay—Edward H. Shortliffe, MD, PhD, editor-in-chief of the Journal of Biomedical Informatics, and Martin J. Sepulveda, MD, formerly with IBM's Watson Research Laboratory—corroborated the needs identified in AMA policy and wrote that these six elements are needed for physician acceptance of an AI-supported clinical decision-support system.

No black boxes. Transparency is required. The user must understand the basis for any advice or recommendation given. 

Systems must save—not waste—time. Clinical support “must blend into the workflow.”

Complex systems are not usable systems. Systems should be easy to use so that major training is not needed to get results.

Relevance is needed. Clinical support should “reflect an understanding” of what physicians will be asking.

Delivery of information must be respectful. Advice should be given in a manner that respects the user’s expertise, “making it clear that it is designed to inform and assist but not to replace a clinician.” 

Advice should have a strong scientific foundation. Clinical decision-support responses should be reproducible, reliable, usable and based on rigorously peer-reviewed scientific evidence.

In evaluating clinical decision support, Drs. Shortliffe and Sepulveda recommend that, to drive product improvement, systems have a monitoring process that identifies near misses. They noted that some uncertainty is inevitable, but systems “must be designed to be fail-safe and to do no harm.” 

Clinical decision help not new idea 

The first scientific-literature reference for using computers to assist with complex clinical decision-making appeared 60 years ago, the authors wrote.  

Another recent JAMA Viewpoint on AI referenced a 1976 book on the “Post-Physician Era,” which included chapters on “The Demise of the Physician” and “The Myth of Physician Necessity.” 

But despite the predictions made more than 40 years ago, “the mass extinction of physicians remains unlikely,” wrote the author of the essay, C. David Naylor, MD, of the University of Toronto’s Department of Medicine.

Clinical imaging and AI big picture 

Dr. Naylor added that “deep learning,” in which neural network technology mimics the visual cortex of mammals, has been rapidly applied in image-intensive specialties such as radiology, pathology and image-guided surgery.

Deep learning “also exemplifies a broader trend: the convergence of health and data sciences,” he wrote, adding that it also has the potential to empower patients and streamline the routine work of clinical staff. 

Rather than dehumanize medicine, Dr. Naylor noted that AI will give physicians the opportunity to focus on activities that “are uniquely human.” 

“Combined with wearables, remote monitoring, and digital consultations, deep learning and other machine-learning techniques can bypass the time-honored model of intermittent data collection and interpretation at the clinical encounter,” he wrote.