
Sisti discusses ethics, safety, and guardrails as patients use ChatGPT for mental health support and clinicians navigate AI’s role in psychiatry.
As consumer-facing artificial intelligence tools become more accessible, some patients are increasingly turning to platforms such as ChatGPT for mental health support. This trend raises ethical, clinical, and safety considerations for psychiatrists navigating conversations with patients who may be using AI in place of, or alongside, traditional psychotherapy.
In this Q&A, Dominic Sisti, PhD, from Penn Medicine, discusses access gaps driving adoption, potential benefits, safety concerns, and the guardrails needed for responsible integration of AI into mental health care.
Watch the full feature on the benefits and dangers of AI in psychiatry, with Sisti and practicing psychiatrists, here.
HCPLive: From an ethics standpoint, how should clinicians interpret the growing trend of patients using tools like ChatGPT as a form of informal psychotherapy?
Sisti: It's important that clinicians recognize that the prolific use of general-purpose AI for psychotherapy is a sign that there just really isn't enough access to mental health care. People are turning to these tools because they can't get appointments with psychotherapists, or because their insurance policies don't cover the costs. AI is a very easy, convenient solution when someone feels like they need to talk to someone.
Unfortunately, these…AI platforms aren't really trained to treat people the way a psychiatrist might, and so there are very important questions around whether they are safe. First off, how would these AI platforms [respond] to a patient who is indicating they have suicidal ideation or behavior?
There's a lot that needs to be worked out still, but I am actually optimistic that AI can provide a level of support to people who otherwise don't have access to mental health care. It's just that these…AI platforms aren't designed for that. There are platforms now that have been trained and are being designed with this in mind, and they seem to be safer; we'll see.
HCPLive: At what point does AI-generated mental health guidance cross the threshold from information into something that resembles clinical care?
Sisti: If the AI is pushing suggestions about medications, psychotherapy, modalities, etc, it might cross that line. I could see how a HIPAA-compliant environment, where the AI has been trained to provide an elevated level of screening or even some therapeutic modality, could work well.
Right now, the general AIs don't really have guardrails in place, as far as I know, but there are ones that are… being developed by researchers and psychiatrists…who are trying to train up these platforms to respond appropriately and with evidence-based suggestions. I do believe, however, there really should be a human in the loop on all these types of suggestions so that we can be sure that the individual who's using the platform is getting good information.
HCPLive: What are some examples of these guardrails being researched?
Sisti: For example, figuring out ways to redirect a person who may have suicidal ideation or behavior to appropriate resources. How can we design AIs to recognize that elevated risk and prompt the user to reach out to a loved one, or to call a Crisis Text Line?
There's no real regulatory framework around how these guardrails should be put in place. We need to have [an] age restriction…so that kids aren't in there talking about their own mental health challenges and maybe suicidal behaviors without someone being notified of that.
HCPLive: How should clinicians respond when patients disclose they are following advice generated by AI systems?
Sisti: I think in a non-judgmental way. I do think that these platforms offer patients another outlet for processing emotions, and depending on the severity level, it might be perfectly appropriate, or even helpful, that they're using the platform. It really depends on the case.
If a patient shows up and says, “I've been using ChatGPT exclusively, and it's told me everything I do is right,”…there's probably some counseling that needs to happen there. I do think that when patients have experience with… human psychotherapy…an AI agent could actually add a little bit of another perspective because the patient really understands that it's a machine.
HCPLive: What are the most pressing ethical risks associated with patients using AI as a therapist, particularly for individuals with serious mental illness?
Sisti: We're just learning now about all the different potential risks to patients who have mental disorders such as schizophrenia, bipolar disorder, or depression. There's been an uptick in what's been called AI-induced psychosis. I think there is the risk of a patient… having dangerous thoughts and behaviors, and if the AI doesn't respond correctly, it might actually offer suggestions for killing oneself. We've seen that already.
HCPLive: How should clinicians counsel patients about privacy risk when using consumer-facing AI tools for mental health discussions?
Sisti: People using them [must be] aware that their data is not really secure. If they're uploading medical information or saying things that are super personal, all those chats can be discovered. If it's a HIPAA-compliant agent, things are a little bit better in terms of privacy.
HCPLive: And what are the most significant gaps in current regulation when it comes to AI and psychiatry?
Sisti: If you prompt the AI the right way, you can actually get it to tell you methods of suicide, which should be impossible. In this context, if you're a scholar and you're trying to study suicide, or you're [a] suicidologist, that's different. We need AIs to be able to distinguish between requests for scholarly information versus requests for someone who's really desperate and is looking for that information to use.
HCPLive: What recommendations do you think should be in place?
Sisti: There are ways that you can infer age without even asking for age. Being able to figure out if it's a child or a minor, adjusting the chat to an appropriate level, and then quickly bringing a human [into] the loop, is critical.
HCPLive: If you could establish one guiding principle for clinicians navigating AI in psychiatry today, what would it be?
Sisti: If a psychiatrist is working with a patient who is using AI, the guiding principle…is the principle of beneficence. This is one of the four principles of biomedical ethics, essentially a crystallization of the Hippocratic oath: to do no harm and to try to do good.
If the clinician thinks that the AI is adding value to the treatment, then maybe supplementary therapy using an AI seems okay. Those are the questions that will be answered as we learn more about the safety and efficacy, and hopefully, these platforms can be designed in a way that they do add value and are safe and effective.