Augmented intelligence (AI)—often called artificial intelligence—is moving into everyday clinical practice, rapidly reshaping how doctors document care and interact with patients. In the exam room, however, the stakes are high. Missteps with AI-powered tools can undermine trust, create legal risks or even confuse patients about their own diagnoses.
Jennifer Bryan, MD, a family physician with Hattiesburg Clinic who practices in Flowood, Mississippi, has spent the past several years experimenting with AI, from ambient documentation tools to EHR-embedded suggestions, and believes doctors must be proactive about how they use these technologies with patients.
“When I started practicing medicine, we used notecards, pen and paper, and what was in the chart was what I put in the chart. Then, as time went on, with more EHRs and being able to do more, I found myself typing a lot for years,” said Dr. Bryan, who is the immediate past president of the Mississippi State Medical Association and a member of the AMA Council on Science and Public Health.
Hattiesburg Clinic is part of the AMA Health System Member Program, which provides enterprise solutions to equip leadership, physicians and care teams with resources to help drive the future of medicine.
“I’ve used various ambient AI technologies, and they are absolutely a huge timesaver, so I loved that. Then there’s AI inside the EHR that’s been baked in” for some time, she said. “Then in my personal life, I started using multiple AI services that are out there in the world. I have a professional platform that I subscribe to at a high level for privacy and security reasons and really vet some thoughts and some high-level things that I’m working on. But I’ve used many large language models and just general AI that’s out there.”
“It almost sounds cliché to say, but AI is not going away. We hear that in almost every AI talk,” Dr. Bryan said. “Humans are resistant to change in general, but this is here, and it’s been here, and we owe it to our patients to leverage the tools that we have to deliver them the best medical care possible.”
“I strongly believe that physicians should always lead the team and should always lead the medical care of a patient and should govern the AI and how it is able to support us,” she said. “We are seeing great benefits from things like clinical trial matching or new drug research or looking at diagnoses that were subtle but there in the record.”
Lean in and learn the technology
AI can save time and reduce documentation burdens, but physicians should actively learn how it works in their clinical setting. As Dr. Bryan emphasized, “lean in, use it, but do it with eyes wide open.”
“Lean in and understand it because it absolutely can free you from a lot of the drudgery of either dictating your entire day again or typing everything out,” she said. “Lean in and learn the tech, and it can make your practice better, leaner and more efficient, and you can ultimately see more patients and do better financially with it.
“However, you must always do it understanding that the liability is on your back,” Dr. Bryan added.
Nearly two-thirds of physicians, 66%, surveyed by the AMA (PDF) in 2024 reported using health care AI, a big jump from the 38% of physicians who said they used it in 2023.
Always verify AI-generated content
Tools that listen to and transcribe encounters can cut hours of documentation work. But Dr. Bryan stressed that physicians must carefully review output. AI notes, diagnoses and codes are not infallible. Physicians remain legally and ethically responsible for what they sign. That means reviewing documentation carefully, cross-checking suggested diagnoses and not accepting codes blindly.
“Ambient AI is pretty good, but it’s still got things you need to check and things you need to edit out because at the end of the day when you sign the note, 100% of the responsibility is on the physician who signs the note,” she said. “Look back in the record and if you don’t recognize the diagnosis and it doesn’t make sense, don’t automatically accept it and assume that it’s right.”
“When you use ambient AI, which is the big timesaver, make sure you read your notes and ensure the accuracy of what happened in the exam room and in the encounter,” Dr. Bryan said. “It may take 15 minutes to review your notes from the day, but that’s better than signing off on errors that could cause a headache down the road.”
From AI implementation to digital health adoption and EHR usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to doctors, not a burden.
That includes the AMA STEPS Forward® “Governance for Augmented Intelligence” toolkit, a comprehensive eight-step guide for health care systems to establish a governance framework to implement, manage and scale AI solutions.
Margaret Lozovatsky, MD, the AMA’s chief medical information officer and vice president of digital health innovations, explored this topic further in the recent webinar, “Establishing an AI Governance Framework,” available now on demand.
Prioritize transparency with patients
If AI tools are being used during an encounter—especially ambient scribing—physicians should explain the technology, secure consent and reassure patients about HIPAA compliance. Patients deserve to know when AI is part of their care.
The “challenge is explaining it to the patient, because it’s on my cellphone, which I have never taken into the exam room before,” said Dr. Bryan. “And now I walk in with my phone, and I ask for their consent and explain what it is so that they can understand that it’s HIPAA-compliant, and that takes extra time. It’s worth it, but for each new patient there’s an educational process on what this is.”
Demand traceability and explainability
“When we’re talking about what physicians can do, demand transparency and insist that your health system or technology partner clearly labels AI-generated content,” said Dr. Bryan. At the same time, “be wary of platforms that obscure AI involvement because the lack of transparency increases liability and can erode patient trust.”
“Patients and physicians should know exactly who or what authored clinical content,” said Dr. Bryan.
Protect patient trust
Emphasize to patients that their physician—not an algorithm—remains in charge of their care. Encourage patients to review their own records and ask questions if something looks off.
“Patients have the right to be informed when AI contributes to their medical care documentation,” said Dr. Bryan. “They should feel empowered to ask questions and clarify concerns about the role of AI.”
“Physicians remain in charge of their patients’ medical care, and human capabilities are enhanced, not replaced,” she said.
Engage in advocacy and feedback
Physicians should work with professional associations and tech vendors to push for better safety, transparency and shared responsibility. Feedback from physicians is essential for refining AI tools.
“Engage actively with your state and national medical associations like the AMA to stay informed on evolving policy and regulatory standards,” said Dr. Bryan. “AI is here to stay, but it must serve physicians, not the other way around.”
The AMA has developed new policy (PDF) that addresses the development, deployment and use of health care AI, with particular emphasis on:
- Health care AI oversight.
- When and what to disclose to advance AI transparency.
- Generative AI policies and governance.
- Physician liability for use of AI-enabled technologies.
- AI data privacy and cybersecurity.
- Payer use of AI and automated decision-making systems.
Be alert to bias and inequity
Physicians should “stay vigilant to protect their practice,” said Dr. Bryan. “Because AI systems are trained on historical data, they can reinforce bias and inequity.”
That is why it is important for physicians to “remain vigilant and advocate for equitable AI standards,” she said.
Learn more with the AMA about advancing health care AI through ethics, evidence and equity.
Remember: AI is a tool, not a colleague
AI can help physicians exercise their medical judgment, but it cannot replace the doctor. The physician’s clinical reasoning remains central.
“Regardless of the AI involvement, the physician carries final responsibility. AI is the tool, not a colleague,” Dr. Bryan said. “So, protect yourself by validating any AI-generated diagnosis, recommendation or documentation you sign off on.”
Find out how participants in the AMA Health System Member Program are using AI to make meaningful change. And learn more with the AMA about the emerging landscape of health care AI.