Artificial intelligence (AI) is behind many of our daily interactions, from email communications to online purchases. AI is also shaping the ways that physicians and other health care professionals are making decisions and delivering care, according to expert panelists at the 2017 Health Datapalooza conference in Washington.
The panelists presented a range of ways that AI—the use of computerized automation and neural network-based algorithms to accomplish tasks once performed by human beings—has already begun to affect health care. These include simple analytics and data interpretation as well as complex genomic sequencing and patient-care prioritization.
“It’s really about selecting narrower tasks that computers can do better than humans,” said Christopher Khoury, vice president of the AMA’s environmental intelligence and strategic analytics unit. He was one of several high-tech experts to participate in the panel, titled “At the Intersection of AI and Human Intelligence: The Future of Health Care Delivery.”
A software algorithm’s ability to discover and learn patterns in clinical data or processes, such as reading scans and classifying patients by various criteria, is an example of how high technology can augment physicians.
Beyond this, AI is increasingly being used in complex clinical decision support.
“We’re seeing examples of AI-enabled decision support that use a body of medical evidence and are able to create complex models based on individual patient information and previous medical evidence,” said Khoury. He and other panelists referred to this capability as “augmented decision making for physicians.”
The potential uses of AI have major implications for patient safety, operational efficiency and resource prioritization—and may even alleviate some of the causes of physician burnout. But, Khoury emphasized, the technology should not be adopted until it has been proven.
“There needs to be a good evidence base, first and foremost, and that’s an understandable adoption barrier. Physicians want to know that there’s clinical and analytical validity to any kind of new tool or platform they might be adopting,” said Khoury, who earned a master’s degree in biomedical engineering and pharmaceutical sciences.
Early physician involvement key
The panel also included Eric Just, senior vice president for product development at Health Catalyst, which builds analytics and decision support tools for health systems; Chris Mansi, MD, a neurosurgeon and the co-founder and CEO of Viz, a deep-learning medical imaging company; and Eric Williams, vice president of data science and analytics at Omada Health, a company using machine learning to track patient behavior changes in the face of chronic disease. These experts emphasized that while AI can be a boon to health care efficiency, it is essential to maintain transparency and include physicians early in the development process.
“Transparency is a big deal,” said Just. “It is not enough to show [physicians] a risk score. You have to show risk factors so they can see the reasons why.”
There are multiple rationales for involving physicians early in the development of AI-enabled health care solutions. One is that physician input can help identify key checkpoints where human review improves algorithm performance, which also boosts physician confidence in the conclusions generated. Another relates to how humans handle anomalies or conflicting pieces of clinical information that software might overlook in favor of the dominant pattern.
“That strategic insight a physician can have—the ability to handle edge cases that break the rules and the patterns—is really important and we can’t lose that,” said Khoury.
Khoury also noted that physicians might be better equipped to identify biases than programmers working off of decontextualized data. As AI technologies become more sophisticated and begin to operate on “deep learning” protocols—in which systems learn patterns directly from data rather than following explicitly programmed rules—it’s important to avoid passing along unconscious human biases, such as associations between gender and workplace roles. Khoury cited research published in Science showing how machines can learn biases from language content commonly found on the web.
Panelists noted that the best use of AI likely resides in striking a happy medium.
“The way we think about it is to build a system that finds the most effective balance between human and artificial intelligence,” said Williams, referring to tests at Omada Health designed to determine when AI or intervention from non-physician health coaches results in better care.
Still, there are areas of health care that AI already seems suited to address.
“There are real barriers to being able to manage population health through our health systems,” said Khoury. “Some of those are scale challenges, some of them are technological, some come from existing shortages or limited access to care and coverage. These technologies, if they’re successful, have the potential to help overcome these barriers.”
Another example of an innovative AI-enabled health care solution is the Human Diagnosis Project, or Human Dx for short. By combining the collective intelligence of physicians with machine learning, Human Dx intends to enable more accurate, affordable and accessible care for all. Research on its application to clinical decision making is currently underway in partnership with some of the world's leading medical institutions.
The AMA has voiced its support for Human Dx as part of the application Human Dx submitted to the John D. and Catherine T. MacArthur Foundation’s 100&Change competition. The winner of the competition will receive a $100 million grant to fund a single proposal that, according to the foundation’s website, “promises real and measurable progress in solving a critical problem of our time.” Human Dx has since been announced as one of eight semifinalists from a pool of nearly 1,900 applicants.