
What it takes for doctors to trust AI-triggered sepsis alerts

4 MIN READ
By Timothy M. Smith, Contributing News Writer

Besides being difficult to diagnose, sepsis develops rapidly, leaving physicians little time to start patients on lifesaving antibiotics. In theory, augmented intelligence (AI)—often called artificial intelligence—could help close this gap in care, but for years AI-based sepsis-detection tools have produced disappointing results.


For a trio of studies published in Nature Medicine, senior author Suchi Saria, PhD, and her colleagues examined a new AI sepsis early-detection tool, the Targeted Real-time Early Warning System (TREWS), developed by Johns Hopkins and Bayesian Health.

Leveraging eight years of research, the machine-learning system combines patient medical history and current symptoms with laboratory results and alerts doctors when patients are at risk for sepsis. Researchers found TREWS had dramatically higher sensitivity than other AI tools and also had greater adoption by physicians and other clinicians, resulting in big reductions in median time to first antibiotic order.

Learn more about artificial intelligence versus augmented intelligence and the AMA’s other research and advocacy in this vital and emerging area of medical innovation.


The researchers deployed the tool at three community hospitals and two academic hospitals in Maryland and the District of Columbia over a two-year period. In that time, the tool produced alerts on nearly 32,000 cases. Clinicians evaluated nearly 90% of them, confirming 38% as sepsis. In addition, of the 9,805 sepsis cases identified retrospectively, the tool correctly identified 82%.

A statement from Johns Hopkins noted that previous attempts to use electronic tools to detect sepsis were accurate just 2%–5% of the time.

“This is a breakthrough in many ways,” said study co-author Albert W. Wu, MD, director of the Center for Health Services and Outcomes Research at Johns Hopkins Bloomberg School of Public Health. “Up to this point, most of these types of systems have guessed wrong much more often than they get it right. Those false alarms undermine confidence.”

Find out why, to identify health care AI physicians can trust, it’s important to answer these three questions.

Besides identifying sepsis cases, the value of TREWS lies in slashing the time it takes to reach a diagnosis. On average, possible sepsis was detected hours earlier than with traditional methods.

Adjusting for patient presentation and severity, the study notes, patients with sepsis whose alert was confirmed within three hours had their median time to first antibiotic order reduced by nearly two hours, compared with patients whose alert was dismissed, confirmed more than three hours after the alert, or never addressed.

Read about why success with health care AI comes down to teamwork.


“One of the AMA’s principles on AI is that algorithm developers should be able to explain their tool to others,” said Kathleen Blake, MD, MPH, a senior adviser at the AMA. “The authors of this study were very transparent. They also asked: Why was our tool adopted? They postulate that it was because they told users what factors triggered the algorithm to produce an alert; they opened the proverbial ‘black box’ in a way that people could understand.”

Another potential impediment to the adoption of AI is that clinicians already feel badgered by alerts in their EHRs.

“What I really like about the researchers’ approach is that they understood that the last thing clinicians need is more alerts,” Dr. Blake said. “In this case, if the alert was triggered, clinicians were much more likely than in previous studies to address it and decide if the person actually did have sepsis, or not.”

That’s because they came to trust the alerts.

“The rate of return on implementation of the algorithm was much higher,” she said.

Two other studies published by Saria and her colleagues in Nature Medicine explore clinicians’ experiences with the machine-learning system and factors driving adoption of the system by physicians and other clinicians.

Learn more about the AMA's commitment to helping physicians harness health care AI in ways that safely and effectively improve patient care.
