Take some time to celebrate getting a new augmented intelligence (AI) tool up and running, but keep in mind that your health care organization’s work is not done yet.
“Technology is changing very quickly, clinical guidelines are changing, the way we do our work is going to shift because of these new tools. So, there has to be a way to continue to measure the success of these implementations over time,” said Margaret Lozovatsky, MD, who is chief medical information officer and vice president of digital health innovations at the AMA.
With nearly two-thirds of physicians reporting that they used AI tools in 2024, nearly double the share who reported doing so the prior year, it’s important for organizations to monitor health care AI tools to:
- Ensure accountability.
- Improve transparency and trust.
- Diminish bias.
- Identify emerging risks or data security concerns.
- Enhance performance.
- Stay compliant with regulatory changes.
- Assess the value of the tools to support business decision-making.
The AMA STEPS Forward® “Governance for Augmented Intelligence” toolkit can help health systems and practices establish an oversight and monitoring process to accomplish this important ongoing supervision. The toolkit is a comprehensive eight-step guide for health care systems to establish a governance framework to implement, manage and scale AI solutions.
The foundational pillars of responsible AI governance are:
- Establishing executive accountability and structure.
- Forming a working group to detail priorities, processes and policies.
- Assessing current policies.
- Developing AI policies.
- Defining project intake, vendor evaluation and assessment processes.
- Updating standard planning and implementation processes.
- Establishing an oversight and monitoring process.
- Supporting AI organizational readiness.
Though often called artificial intelligence, AI is defined by the AMA as augmented intelligence to emphasize that its role is to help health care professionals, not replace them.
From AI implementation to EHR adoption and usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to doctors.
5 steps to a monitoring process
AI tools need to be regularly monitored to ensure that they are performing properly and to identify and resolve any risks the technology may pose. For example, consider a mortality index that is incorporated into an AI tool.
If the inputs to the mortality index change, the information presented to clinicians will change as well, and clinicians will make clinical decisions based on that output. Dr. Lozovatsky noted that it is critical for organizations to have a process to monitor these tools regularly and to have the right experts evaluate them. Physicians and other health professionals need cognitive-computing experts who understand how to set up this monitoring.
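As a concrete illustration of what that monitoring can look like, the Python sketch below checks whether the inputs feeding a mortality index have drifted from the baseline the tool was validated on. The feature names, baseline statistics and alert threshold are illustrative assumptions, not part of the AMA toolkit or any specific vendor’s product.

```python
# Minimal sketch (illustrative only): flag when inputs feeding a mortality-index
# model drift from the baseline population the tool was validated on.
from statistics import mean

# Assumed baseline summary statistics captured at validation time.
BASELINE = {
    "age": {"mean": 62.0, "stdev": 15.0},
    "serum_creatinine": {"mean": 1.1, "stdev": 0.6},
    "systolic_bp": {"mean": 128.0, "stdev": 18.0},
}

ALERT_THRESHOLD = 0.5  # flag if a feature's mean shifts more than 0.5 baseline SDs


def drift_report(recent_inputs):
    """Standardized mean shift of each monitored input vs. its validation baseline."""
    report = {}
    for feature, baseline in BASELINE.items():
        values = recent_inputs.get(feature, [])
        if values:
            shift = abs(mean(values) - baseline["mean"]) / baseline["stdev"]
            report[feature] = round(shift, 2)
    return report


def features_needing_review(recent_inputs):
    """Inputs whose drift exceeds the threshold and should go to the oversight team."""
    return [f for f, s in drift_report(recent_inputs).items() if s > ALERT_THRESHOLD]


# Example: recent patients skew older than the validation cohort, so "age" is flagged.
print(features_needing_review({"age": [75, 81, 78, 70, 84]}))  # ['age']
```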
The AMA toolkit lays out five monitoring steps and the considerations health care organizations should weigh at each one. The steps are to:
- Develop an oversight and monitoring plan. Identify outcome metrics to track the tool’s effectiveness; include routine checks on data output quality, algorithm performance, user satisfaction and more (a simple metric-tracking sketch follows this list).
- Assign a multidisciplinary team. Designate accountability for monitoring AI tools. The team should include, at minimum, a clinical champion, a data scientist or statistician familiar with the tool and an administrative leader. Encourage users to provide feedback.
- Review guidelines and regulatory changes. Check the latest research guidelines, best practices and regulations related to AI and data privacy. Follow established AI best practices, such as those in this document from the Food and Drug Administration (PDF).
- Execute the monitoring process. Create processes, including roles, responsibilities and tracking methods. Establish pathways to receive feedback from AI users and then identify potential issues. Regularly audit the tool's output and more.
- Communicate results. Establish a route for raising concerns with AI tool developers. To build trust, let AI tool users know what monitoring has shown about the technology.
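For the first and fourth steps, a routine audit can be as simple as tallying sampled outputs and clinician feedback against the thresholds the monitoring plan defines. The sketch below is one hypothetical way to structure that; the metric names and thresholds are assumptions for illustration, not AMA guidance.

```python
# Minimal sketch, with assumed metric names and thresholds, of a routine audit
# an oversight team might run on an AI tool's logged results.
from dataclasses import dataclass


@dataclass
class AuditResult:
    period: str                 # e.g., "2025-Q1"
    outputs_reviewed: int       # cases sampled for manual review
    outputs_flagged: int        # outputs reviewers judged inaccurate or unsafe
    user_satisfaction: float    # 0-5 average from clinician feedback

    @property
    def flag_rate(self):
        return self.outputs_flagged / max(self.outputs_reviewed, 1)

    def needs_escalation(self, max_flag_rate=0.05, min_satisfaction=3.5):
        """Escalate to the governance working group if output quality or user
        satisfaction falls below the thresholds the monitoring plan sets."""
        return self.flag_rate > max_flag_rate or self.user_satisfaction < min_satisfaction


# Example: a quarterly audit where a 7% flag rate exceeds the assumed 5% threshold.
audit = AuditResult(period="2025-Q1", outputs_reviewed=200,
                    outputs_flagged=14, user_satisfaction=4.1)
print(audit.needs_escalation())  # True
```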
Learn more about AI
Find out how participants in the AMA Health System Member Program are using AI to make meaningful change. That includes the Permanente Medical Group, as explored in a recent episode of the “AMA STEPS Forward Podcast.”
In addition to fighting on the legislative front to help ensure that health AI technology is designed, developed and deployed ethically, responsibly and transparently, the AMA has adopted policy (PDF) with particular emphasis on:
- Health care AI oversight.
- When and what to disclose to advance AI transparency.
- Generative AI policies and governance.
- Physician liability for use of AI-enabled technologies.
- AI data privacy and cybersecurity.
- Payer use of AI and automated decision-making systems.
Explore the emerging landscape of health care AI. Also, find out how to apply AI to transform health care with the “AMA ChangeMedEd® Artificial Intelligence in Health Care Series.”