It is important for health care organizations to standardize the intake and evaluation process for the augmented intelligence (AI) tools that they adopt.
Doing so helps ensure safety, use resources efficiently, prevent duplicative efforts and maintain consistency within an organization. That is especially true given that health care AI (commonly called artificial intelligence) continues to play a rapidly increasing role in a wide range of areas.
These range from clinical duties to administrative activities, including tasks such as summarizing medical notes, detecting and classifying the likelihood of future adverse events, and predicting patient volumes and associated staffing needs.
The AMA defines AI as augmented intelligence to emphasize that AI’s role is to support health care professionals, not replace them.
Having physician voices at the table—people who understand the workflow and the nuances of patient care—is critical when assessing health AI tools, said Margaret Lozovatsky, MD, chief medical information officer and vice president of digital health innovations at the AMA. As a pediatrician, she encountered a situation that demonstrates just how important those voices are.
Dr. Lozovatsky was caring for a 6-month-old patient with respiratory syncytial virus (RSV) who was doing fine but needed to be hospitalized for a couple of days just for oxygen. The patient was admitted shortly after her hospital implemented a new AI tool to assess patient mortality risk. When reviewing the young patient's chart, she was shocked to see the system had flagged an 85% risk of mortality. From her clinical perspective, she knew the AI’s assessment was inaccurate.
“We worked through it and it turned out the tool was never tested on pediatric patients. They forgot to make sure that it was only being displayed for adults,” she said. “Imagine the potential risk if I had used that value to change the way I care for this patient. This is why you need clinicians in the conversation.”
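The safeguard that was missing in this story is straightforward to express in software. The sketch below is a minimal, hypothetical illustration (the patient class, function name and 18-year cutoff are all assumptions, not details of the vendor's actual system) of a display-gating check that suppresses a risk score for patients outside the population the model was validated on:

```python
# Hypothetical sketch: gate an AI risk score so it is only displayed for
# patients in the population the model was validated on (here, adults).
# All names, the 18-year cutoff and the score itself are illustrative.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Patient:
    age_years: float


def displayable_mortality_risk(patient: Patient, raw_score: float) -> Optional[float]:
    """Return the model's risk score only if the patient is in the
    validated population; otherwise return None so nothing is shown."""
    VALIDATED_MIN_AGE = 18  # assume the model was never tested on children
    if patient.age_years < VALIDATED_MIN_AGE:
        return None  # suppress the score rather than show a misleading value
    return raw_score


# A 6-month-old patient: the 85% score is suppressed instead of displayed.
print(displayable_mortality_risk(Patient(age_years=0.5), 0.85))  # -> None
```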
The AMA has a toolkit to help organizations take the proper steps to establish the right intake and evaluation processes for new AI tools, including forming a working group of physicians and other health professionals.
The AMA STEPS Forward® “Governance for Augmented Intelligence” toolkit, developed in collaboration with Manatt Health, is a comprehensive eight-step guide for health care systems to establish a governance framework to implement, manage and scale AI solutions.
The foundational pillars of responsible AI adoption are:
- Establishing executive accountability and structure.
- Forming a working group to detail priorities, processes and policies.
- Assessing current policies.
- Developing AI policies.
- Defining project intake, vendor evaluation and assessment processes.
- Updating standard planning and implementation processes.
- Establishing an oversight and monitoring process.
- Supporting AI organizational readiness.
From AI implementation to EHR adoption and usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to them.
The initial evaluation
When a health care organization is considering a new AI tool, a project sponsor or clinical champion should fill out an intake form. According to the toolkit, that form typically captures, among other things, information explaining the points below (a minimal schema sketch follows the list):
- Who is sponsoring the project.
- What the business case is for the project, including the problem being addressed, the expected impact and the ideal launch time.
- Resources that are needed.
- An assessment of the vendor, including where the vendor's data comes from, its data privacy and security protocols, and interoperability with existing systems.
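As a rough illustration only, the intake form's contents could be captured as a structured record like the one below. Every field name here is a hypothetical stand-in for the toolkit's actual form, which organizations should consult directly.

```python
# Hypothetical sketch of an AI-project intake form as a structured record.
# Field names are illustrative assumptions; the STEPS Forward toolkit
# defines the actual contents of the form.

from dataclasses import dataclass, field


@dataclass
class AIProjectIntakeForm:
    sponsor: str                       # who is sponsoring the project
    problem_statement: str             # the problem the tool addresses
    expected_impact: str               # anticipated clinical or operational impact
    ideal_launch: str                  # target launch time
    resources_needed: list[str] = field(default_factory=list)
    # Vendor assessment
    data_provenance: str = ""          # where the vendor's data comes from
    privacy_security_protocols: str = ""
    interoperability_notes: str = ""   # fit with existing systems, e.g., the EHR


form = AIProjectIntakeForm(
    sponsor="Chief medical information officer",
    problem_statement="Clinicians spend hours summarizing medical notes",
    expected_impact="Reduce documentation time per encounter",
    ideal_launch="Q3",
    resources_needed=["EHR integration work", "clinical champion time"],
)
```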
Bringing a tool to fruition
If the initial evaluation is positive, the project sponsor or clinical champion, with support from a project-management office, should collaborate with stakeholders to develop a detailed proposal. Some of the activities the toolkit outlines for this phase are listed below, followed by a sketch of what one monitoring standard could look like in practice:
- Clinical validation. Evaluating the clinical effectiveness of the AI tool in real-world scenarios and the ability to integrate with clinical workflows.
- Financial assessment. Evaluating the financial implications of the tool and ensuring that the investment aligns with the system’s budget.
- Legal and compliance assessment. Making sure the tool complies with federal and state laws and incorporating the new tools into periodic HIPAA risk analyses.
- Risk and safety review. Evaluating potential risks and mitigation strategies and establishing use and monitoring standards.
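To make the last item concrete, the sketch below shows one hypothetical monitoring standard: comparing the model's average predicted risk against the observed event rate and flagging the tool for review when the two diverge. The function name and the 0.05 tolerance are illustrative assumptions, not requirements from the toolkit.

```python
# Hypothetical sketch of a post-deployment monitoring check: compare the
# model's average predicted risk with the observed event rate and flag
# the tool for review when they diverge. The 0.05 tolerance is an
# illustrative assumption, not a toolkit-defined threshold.

def calibration_drift_alert(predicted_risks: list[float],
                            observed_events: list[int],
                            tolerance: float = 0.05) -> bool:
    """Return True if mean predicted risk and observed event rate diverge
    by more than the tolerance, signaling the tool needs review."""
    mean_predicted = sum(predicted_risks) / len(predicted_risks)
    observed_rate = sum(observed_events) / len(observed_events)
    return abs(mean_predicted - observed_rate) > tolerance


# Example: the model predicts ~20% risk on average, but none of these
# patients actually had the event, so the gap triggers a review.
print(calibration_drift_alert([0.2, 0.25, 0.15, 0.2], [0, 0, 0, 0]))  # -> True
```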
Find out how participants in the AMA Health System Member Program are using AI to make meaningful change.
The AMA is fighting on the legislative front to help ensure that AI technologies are designed, developed and deployed in a manner that is ethical, equitable, responsible, accurate and transparent. It has also adopted policy (PDF) with particular emphasis on:
- Health care AI oversight.
- When and what to disclose to advance AI transparency.
- Generative AI policies and governance.
- Physician liability for use of AI-enabled technologies.
- AI data privacy and cybersecurity.
- Payer use of AI and automated decision-making systems.
Learn more with the AMA about the emerging landscape of health care AI. Also, explore how to apply AI to transform health care with the “AMA ChangeMedEd® Artificial Intelligence in Health Care Series.”