The AMA is urging Congress to take action to better safeguard people who use augmented intelligence (AI) chatbots for mental health care, and the organization says it looks forward to collaborating with Congress to “help ensure AI-enabled tools develop in a manner that prioritizes patient safety, clinical integrity and public trust.”
In letters to the co-chairs of the Senate Artificial Intelligence Caucus, the Congressional Digital Health Caucus and the Congressional Artificial Intelligence Caucus, AMA CEO and Executive Vice President John J. Whyte, MD, MPH, outlined four key areas for Congress to consider.
Boosted transparency
The AMA says this is critical to help consumers understand the technologies they are engaging with and help them use the tools in the appropriate context. To promote this, Congress should:
- Require AI chatbots to clearly and meaningfully let users know they are interacting with a machine, not a human, and to disclose whether there is any human oversight of the chatbot.
- Prohibit AI-enabled chatbot technology from holding itself out to be a licensed clinician or from claiming to provide the same services or have the same capabilities as a licensed clinician.
- Direct the Federal Trade Commission to determine whether developers fail to provide necessary transparency and authorize the agency to take enforcement actions, such as penalties.
Targeted regulations
The evolving technology needs a regulatory framework that addresses gaps in a system that was not designed for adaptive, generative AI, the AMA says. Among other things, it recommends that Congress:
- Establish clear boundaries that prohibit AI chatbots from diagnosing or treating mental health conditions, such as offering an anxiety or depression diagnosis or recommending medications.
- Direct federal agencies to develop a modernized, risk-based regulatory approach that adequately ensures health care AI’s safety and efficacy. Any approach should address generative AI.
- Require all chatbots to reliably identify suicidal ideation and risk of self-harm and to provide immediate referrals to suicide prevention hotlines and/or recommendations for further medical care.
- Legislate guardrails that subject technologies affecting children and adolescents to stronger scrutiny, including mandated age-appropriate testing and validation.
Limited advertising
The AMA tells lawmakers that advertising, sponsorship bias and any monetization model that influences the substance or delivery of care-related guidance should have no place inside chatbots that people use for mental health support. To make chatbots safer for users, the AMA says Congress should consider these principles and others:
- Strongly discourage advertising within chatbots offering mental health support.
- Prohibit advertising targeted toward minors or within chatbots engaging with minors.
- Ensure that advertising that does appear is conspicuously disclosed so that any reasonable person can clearly identify it as a paid promotion.
- Prohibit sharing information with third-party tracking and advertising entities, with even more heightened protections for children and teens.
Protected privacy and mandated cybersecurity
Many who interact with chatbots for mental health support treat the interactions as private; however, in reality, the conversations can be retained, logged or inadvertently revealed. To address privacy and cybersecurity concerns, the AMA urges Congress to:
- Require developers and deploying entities to implement safeguards that prevent unauthorized disclosure of sensitive information.
- Establish that chatbots that could reasonably be used for mental health support must follow privacy-by-design principles.
- Place meaningful limits on the collection and retention of sensitive information, give users a genuine ability to delete conversation history, and establish safeguards that prevent disclosure through chatbot responses or stored histories.
- Require safeguards that prevent unauthorized connections to other services, accounts or applications, as well as unintended sharing or expansion of permissions without an individual’s express approval.
With patients nationwide struggling to access mental health care because of limited availability or affordability, there’s “potential value that well-designed, purpose-built AI tools can bring to mental health care when deployed responsibly,” the AMA tells lawmakers.
But, as congressional hearings last year highlighted, there are troubling reports of chatbots encouraging self-harm and suicide, and testimony has raised questions about data privacy, mental health impacts and the need for appropriate safety guardrails.
“AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response,” Dr. Whyte said in an AMA statement. “With thoughtful oversight and accountability, policymakers can support innovation and ensure technologies prioritize patient safety, strengthen public trust, and responsibly complement—not replace—clinical care.”
In a separate announcement, the AMA also unveiled a comprehensive policy framework to establish clear, enforceable protections for physicians against unauthorized AI-generated “deepfakes.”
From AI implementation to digital health adoption and EHR usability, the AMA is fighting to make technology work for physicians and to ensure it is an asset to doctors. That includes the recent launch of the AMA Center for Digital Health and AI, which gives physicians a powerful voice in shaping how AI and other digital tools are harnessed to improve the patient and clinician experience.