
To ID health care AI doctors can trust, answer these 3 questions

By Timothy M. Smith, Contributing News Writer

The practice of medicine has changed dramatically in recent years, and one reason is the introduction of new technologies such as augmented intelligence (AI). AI's use in health care has grown rapidly, but consensus on guiding principles for its development and deployment has yet to emerge, raising questions about which AI systems are appropriate for any given practice.


An open-access, peer-reviewed essay published in the Journal of Medical Systems presents a framework for developers and users of health care AI (often called artificial intelligence) that looks at the issue through multiple lenses, including health equity and the patient-physician relationship. Making the case for an evidence-based, ethical approach to AI, the authors focused on the quadruple aim: better patient experience, improved population health, lower overall costs and improved work-life satisfaction for physicians and other health professionals.

Learn more about artificial intelligence versus augmented intelligence and the AMA’s other research and advocacy in this vital and emerging area of medical innovation.

Why AI is so momentous

Clearly defining roles and responsibilities among developers, health systems and physicians “is central to putting the ethics-evidence-equity framework into practice,” wrote the authors, who developed the framework during their tenure at the AMA.

More to the point, AI could help solve many of the problems that have arisen as medical knowledge has grown. Chief among them is the sheer volume of information that physicians must assimilate.

“Augmented intelligence is an absolute necessity in the information age,” said Kathleen Blake, MD, MPH, one of the paper’s authors and a senior advisor at the AMA. “AI is not just information in a box. It is applied in practice to care for real people—your mother, my father, our grandparents, our kids, ourselves.”

“There is no way humanly possible that one person can carry around in their brain and analyze all of that information,” said Dr. Blake, a cardiac electrophysiologist by training. “We used to think we could. Boy, was that wrong.”

The paper suggests physicians and health system leaders must answer three questions in the affirmative when considering deployment of any given health care AI tool.

Does it work?

This overarching question actually comprises several more specific ones. The first: Does the AI system meet expectations for ethics, evidence and equity? The second: Can it be trusted as safe and effective?

The paper lays out more specific criteria for AI users and AI developers, including:

  • Was the AI system developed in response to a clearly defined clinical need identified by physicians?
  • Does it address this need?
  • Has it been validated analytically and scientifically?

Does it work for patients?

Like the first, this second question has many parts. If an AI system has been shown to improve care for a patient population, and if you have the resources and infrastructure to implement it in an ethical and equitable manner, you should ask a number of questions, such as:

  • Has it been validated in a population and health care setting that reflects my practice?
  • Is continuous performance monitoring in place in my practice to identify and communicate changes in performance to the developer?
  • Can it be integrated smoothly into my practice? Will it improve care and enhance my relationship with patients?

You might also ask if it has been developed and tested in varying populations to identify hidden biases.

Does it improve health outcomes?

This is perhaps the most important question. There are numerous specific inquiries within it, including:

  • Does this AI system demonstrate a positive impact on health outcomes, such as quality-of-life measures?
  • Does it minimize harm to patients?
  • Does it add value to the patient-physician relationship?

“For some, the term ‘artificial’ might imply that it is the patient with the condition who is working alone, interacting with the machine,” Dr. Blake said. “What we all know deep in our hearts as people needing care is that we want to be treated as individuals. So, while we don't want to go back to the old days—when none of this tremendous wealth of information could be captured and used—we also don't want to go to the time when no one, no other human being, is working with us to improve our health.”

Learn more about the AMA's commitment to helping physicians harness health care AI in ways that safely and effectively improve patient care.
