The AI Chatbot is In
December 18, 2025

FDA has authorized more than 1,200 artificial intelligence (AI)-based digital devices for marketing. To date, none of these has been indicated to address mental health. Currently, 57.8 million adults have a diagnosed mental illness, and many do not have access to high-quality, effective treatment. Generative AI digital devices, such as chatbots and virtual companions, could help close the gaps for those in need of mental health services. Recognizing the growing demand for “AI therapists,” the November meeting of FDA’s Digital Health Advisory Committee focused on FDA’s regulatory approach for generative artificial intelligence-enabled digital mental health medical devices, taking into consideration their benefits, risks to health, and risk mitigations, including premarket evidence and postmarket monitoring.
Generative AI is defined as an AI model that “emulates the structure and characteristics of input data in order to generate derived synthetic content.” The goal of generative AI is to create content that mimics a human, providing a more personalized experience for the user. Generative AI can create original digital content, including video, audio, and text.
The Advisory Committee identified several potential benefits of generative AI devices, including their use in triaging patients, enabling more immediate responses to patients, expanding access to care in underserved areas, and enabling personalized treatment. The Advisory Committee also raised concerns about novel risks that could be introduced by generative AI devices, including bias, “hallucination” (when the model generates inaccurate or misleading results), and “sycophancy” (when the model seeks to please the user at the expense of accuracy). Without a healthcare professional’s oversight (a human in the loop) to identify errors and to detect patient cues missed by the device, generative AI devices could adversely affect a patient’s mental health rather than improve it.
The Advisory Committee noted that there are many digital mental health technologies on the market today, including some that fall outside of FDA regulation, such as general wellness devices. General wellness devices that encourage healthy eating habits, track sleep, and guide meditation can be effective tools for those with and without a mental illness. This has created confusion for users of those products, who may not be aware of the difference between an FDA-regulated device and a general wellness product.
Generative AI devices that treat or diagnose a psychiatric condition or substitute for a mental healthcare provider are the types of products that would fall under FDA’s purview. “Therapeutic” generative AI devices are those intended to aid in the treatment of a psychiatric condition; today, therapeutic mental health devices are usually used under the supervision of a healthcare provider. “Diagnostic” generative AI devices are those used in the assessment, evaluation, and diagnosis of a patient; patients use diagnostic devices today to inform the healthcare provider’s diagnosis. Non-AI digital mental health devices have been authorized for diagnosis and treatment, and most are for prescription use only.
The focus of the Advisory Committee was on determining the data and monitoring that should be required to establish a reasonable assurance of safety and effectiveness for these devices.
Speakers from FDA discussed the need for robust real-world evaluation strategies to ensure AI-enabled devices remain safe and effective after deployment. They also noted the importance of a risk-based approach, given the range of devices that may be introduced (from low-risk general wellness applications to devices providing therapy for psychiatric disorders) and the level of autonomy of the device (with or without human oversight). FDA speakers proposed including a predetermined change control plan (PCCP) and a performance monitoring plan in premarket submissions as potential strategies to mitigate risks arising from changes in AI-enabled software performance over time, and using double-blind, randomized, placebo-controlled trials to mitigate risks associated with placebo response in patients with psychiatric disorders.
Stakeholders presented a variety of perspectives to the Advisory Committee on the benefits and risks of generative AI mental health devices and recommended the evidence that should be generated to support their use. They identified increased access to mental health treatment as a key benefit of the technology, as many patients with mental health conditions are unable or unwilling to obtain diagnosis and/or treatment from a healthcare professional. Stakeholders also noted that generative AI-based tools could offer greater empathy (some models have shown effectiveness in decreasing loneliness and restoring relationships) and lower cost compared to traditional care from human providers.
The committee discussed the risks and challenges of using large language models (LLMs), including misdiagnosis, especially given the overlap of symptoms across mental health disorders; misuse and overreliance; adverse impact on vulnerable populations or on patients with psychosis; bias; and ethical dilemmas involving data privacy and consent. They proposed a variety of risk-mitigation guardrails, such as:
- Human oversight and appropriate training;
- Transparency that the output is from AI;
- Access to emergency services;
- Verification and validation, including randomized clinical studies;
- Limiting the device to prescription use only;
- Considering Class III designation; and
- Postmarket monitoring.
The committee deliberated on what additional mitigations may be required for over-the-counter (OTC) use. For OTC use, panelists were concerned about whether patients could accurately self-diagnose and select an appropriate application based on their level of illness. They also felt strongly that a clinician should be in the loop and available to act in an emergency and to identify adverse events.
The committee also discussed the use of such technology for pediatric patients. One committee member noted that some of the mental health crisis is related to smartphone applications that have not been appropriately regulated, with no consequence, and expressed concern about using yet another app to address the problem, given the risk that the mental health app could itself be addictive. Committee members were so concerned about pediatric use that they did not know what to recommend. The committee discussed the need to ensure the device is developed for a specific age group and noted that different approaches may be required for use by a child versus an adolescent. Education and training for parents, patients, and healthcare providers were considered especially important for the pediatric population.
Surprisingly, FDA did not have many questions for the committee. FDA noted that there is a request for comments on AI-enabled Medical Devices, which we previously blogged about here.
This advisory committee meeting highlighted the challenges of digital therapeutics for mental health as well as of generative AI. Paired together, they present a number of challenges that both FDA and developers will need to navigate carefully to safeguard patients. Digital mental health devices are as complex as the patient population they are intended to help, and providing a reasonable assurance of safety while promoting innovation will require regulatory approaches focused on the total product lifecycle, balancing premarket and postmarket regulatory requirements.