
As mental health AI tools rise, the FDA weighs benefits and risks, emphasizing oversight and performance testing. Eriksson discusses AI in psychiatric care.
As the US Food & Drug Administration (FDA) evaluates the safety and clinician utility of generative AI in mental health, questions are emerging about its potential role in psychiatric care.
On November 6, 2025, the FDA convened its Digital Health Advisory Committee (DHAC) to examine how generative AI may influence the safety and effectiveness of medical devices, particularly digital mental health products. This was the second DHAC meeting dedicated to generative AI-enabled medical devices, following the committee’s broader November 2024 discussion of total product life cycle considerations for generative AI.
The report, released ahead of the meeting, outlines the regulatory challenges presented by patient-facing AI systems that generate new content, update frequently, and may provide therapeutic guidance without clinician oversight. The FDA noted a rapid rise in AI therapists and mental health chatbots designed to offer behavioral interventions, diagnostic suggestions, or therapy-like conversations, a trend that introduces novel risks for patients.
Shortly after the meeting, HCPLive spoke with Hans Eriksson, MD, PhD, a psychiatrist and Chief Medical Officer at HMNC Brain Health, about the use of AI in psychiatry. He said there are 2 ways to use AI tools in development: to evaluate individual patient characteristics, or to analyze patient populations and create algorithms that assign patients to different treatment groups. Ultimately, he said, AI tools could help replace the standard trial-and-error approach to prescribing medication.
“There are lots of different biologies, and unless we can pinpoint these biologies, it's very difficult to find the right intervention,” he said.
The FDA recognized that generative AI offers substantial public health benefits, including improved access to care. However, the technology also carries risks, including output errors, patients misinterpreting a chatbot’s responses, and clinicians not knowing how to monitor the technology.
The agency also stated that submissions for AI tools should detail the intended use, indications, use cases, and care environment, and should include standardized model cards. Devices must also undergo rigorous performance testing using tailored metrics, such as repeatability, reproducibility, measurement uncertainty, hallucination rates, and error rates, along with stress testing across intended user populations and settings. The FDA said benchmarking is important for comparing device capabilities and performance against established standards.
Once a tool reaches the market, the FDA said there should be automated auditing and quality assurance checks to ensure consistency across multiple sites. New methods are also needed to evaluate opaque foundation models.
The FDA emphasized the importance of maintaining human oversight and adequate training, real-world transparency, informed use, shared responsibility, and risk management frameworks.
“[I] have not been directly interacting with [FDA’s AI report], so the interactions we've had with the FDA have been with projects that are not necessarily including the AI tools, but I think from the conversations, it becomes evident that they are very interested in the field, and they are aware that it's a very rapid development,” Eriksson said.