Deepfake “doctors” are a problem—here are 7 keys to stopping them

There’s a need for clear, enforceable protections against unauthorized AI-generated deepfakes that threaten patient safety. Learn more with the AMA.

By Tanya Albert Henry, Contributing News Writer | 4 Min Read

Aiming to protect patients and the broader healthcare system from unauthorized augmented intelligence (AI)-generated deepfakes, the AMA has created a comprehensive policy framework to help lawmakers and industry leaders combat threats the technology poses to Americans’ health.


AI-generated deepfake “doctors” that impersonate physicians, manipulate the public and often endorse unproven treatments now garner millions of views on social media. This content promotes products for its creators’ financial gain; meanwhile, patients are exposed to material that can cause serious harm, and the integrity of the broader healthcare system suffers.

A new framework from the AMA Center for Digital Health and AI lays out seven key ways (PDF) the nation can modernize physician-identity protections and close legal gaps so that patient safety, professional integrity and public trust in healthcare are protected.

“AI deepfakes that impersonate physicians are not just scams—they are a public health and safety crisis,” said AMA CEO and Executive Vice President John Whyte, MD, MPH. “When bad actors exploit a doctor’s identity, they undermine patient trust and can steer people toward harmful, unproven care. We need strong action by federal and state lawmakers to protect physicians’ identities, ensure transparency and stop this fraud.”

From AI implementation to digital health adoption and EHR usability, the AMA is fighting to make technology an asset to physicians rather than a burden. That includes recently launching the AMA Center for Digital Health and AI to give physicians a powerful voice in shaping how AI and other digital tools are harnessed to improve the patient and clinician experience.


7 principles to improve policy 

AMA policy recognizes that there are documented advantages of deepfake technology for medical education, training and patient engagement, but the policy states that there is a significant regulatory void that can result in harmful consequences. 

The policy supports relevant organizations—including healthcare professionals, technology developers, government regulators, social media platforms and the public—to formulate comprehensive federal legislation and regulations regarding deepfake technology to uphold the integrity of the medical profession against malpractice, increase awareness of the risks associated with deepfake content and safeguard patient well-being across all communities.

“Safeguarding professional integrity is essential to preserving trust and delivering high-quality care in a rapidly evolving digital landscape,” Dr. Whyte said.

Here are the seven key policy principles the AMA says need to be part of a framework to address hazards of AI-generated deepfake doctors, along with proposed protections:

Physician identity is a protected right 

A physician’s name, image, likeness, voice and digital replicas are protected. Health institutions, vendors and third-party apps must explicitly recognize that these are not transferable assets and may only be used with affirmative, informed consent. 

Prohibit deceptive medical impersonation 

Using a physician’s identity, without informed, affirmative consent, in AI-generated or materially altered content that falsely conveys the physician’s endorsement, authorship or medical judgment and is likely to mislead a reasonable patient should be prohibited and treated as a deceptive practice.

Using a physician’s identity in AI-generated or manipulated content should require affirmative, informed opt-in consent and not consent that is implied, inferred or obtained through general terms of service, employment agreements or blanket media releases. The consent must specify the use, audience, purpose and duration. It also must be revocable if risks or the physician’s role change.


Mandatory labeling and transparency 

AI-generated or materially altered content depicting a physician must be clearly and conspicuously labeled in plain language and include a digital watermark. Patients interacting with an AI-generated health professional must be alerted to that fact before the interaction begins.

Shared responsibility for preventing impersonation 

Platforms, hospitals, health systems and AI vendors share in the responsibility and should implement safeguards that include clear and conspicuous labeling of AI-generated or manipulated content, rapid reporting and takedown mechanisms for health-related deepfakes, and a prohibition on AI-generated content using health professional titles.

Enforcement and practical remedies 

Physicians must have access to a clear, workable process to document identity misuse; trigger takedown and escalation procedures; and seek institutional remedies or legal relief when necessary. Institutions and platforms must preserve audit logs noting how the AI-generated content was created, modified, distributed or interacted with; cooperate with investigations; and provide transparent escalation pathways with defined timelines.

Minimize administrative burden 

Identity protection should be the default, with no undue administrative burden placed on physicians to secure it. Physicians should not bear ongoing monitoring or enforcement responsibilities. Also, the consent process must be standardized, reusable and supported by institutions and platforms.

In addition to detailing its comprehensive policy framework on deepfake “doctors,” the AMA also recently urged Congress to strengthen safeguards for AI chatbots.
