
As patients increasingly turn to ChatGPT for mental health support, psychiatrists and ethicists weigh in on the clinical risks, ethical concerns, and what clinicians should do next.
In your clinic, you may hear patients say, “Chat told me this” or “Chat told me that.” Many people use ChatGPT, colloquially referred to as “Chat,” for quick, convenient mental health advice. This can be dangerous: depending on what the chatbot says, patients may walk away with advice you would never, in a million years, give.
“More often, it's actually deleterious where information is cherry-picked, and the context is not always understood,” said Richard Miller, MD, a staff psychiatrist at Elwyn Adult Behavioral Health in Cranston, RI. “Patients are kind of going on an extended trip…to get the information that they're looking for…. maybe [a] particular answer. It's not always accurate.”
Despite the dangers, AI can also be helpful in psychiatry. It can offer a new perspective and augment, rather than replace, treatment.
The HCPLive Editorial Team put together a feature examining the benefits, dangers, and ethics surrounding AI use in psychiatry. Millions of Americans struggle to access mental health care, with waitlists sometimes lasting 3 to 6 months.1 These individuals, desperate for mental health advice, turn to AI. As Darlene King, MD, from the University of Texas Southwestern Medical Center, said, using AI as a therapist is like getting mental health advice from a friend, who may well give bad advice.
Dominic Sisti, PhD, from Penn Medicine, discussed the ethical concerns of AI in psychiatry. Vulnerable populations may be susceptible to misusing AI and may even go as far as asking ChatGPT for ways to self-harm. “How would these AI platforms [respond] to a patient who is indicating they have suicidal ideation or behavior?” he asked. “How might these platforms recognize elevated risk for self-harm or harm to others?”
Sisti stressed the need for guardrails such as age restrictions, redirection protocols, and HIPAA compliance standards. He also said platforms need to distinguish between an investigator asking about suicide methods and someone in crisis.
Beyond offering mental health advice, AI can assist with screening. In 2024, a study showed the promise of Sonde Health’s mental fitness vocal biomarker tool to identify depression from the sound of a voice.2 Another study showed the potential of MoodCapture, a smartphone application using AI to detect the onset of depression based on facial cues alone.3
More recently, a study showed AI can predict treatment response in major depressive disorder (MDD). In the phase 2b OLIVE trial assessing BH-200 (nelivaptan), a selective vasopressin V1b receptor antagonist, a proprietary genetic tool stratified patients by vasopressin-related biomarkers.4 Patients with lower peripheral but higher central vasopressin activity had a stronger antidepressant response, supporting biomarker-guided patient selection.
AI can also help with psychiatrists’ day-to-day responsibilities. For instance, Suki AI, Heidi, and DeepScribe, among others, are designed for clinician documentation.5,6,7
As AI becomes more commonplace, experts recommend asking about its use non-judgmentally during routine history-taking. Although potentially dangerous, AI can be beneficial if used with a “human in the loop,” according to Sisti.