
In this interview, Miller discusses patients using ChatGPT for mental health advice, risks in severe mental illness, and how clinicians respond.
Patients are increasingly bringing AI-generated mental health advice into clinical encounters, raising new challenges for psychiatrists caring for individuals with severe mental illness. In this Q&A, Richard Miller, MD, staff psychiatrist at Elwyn Adult Behavioral Health in Cranston, Rhode Island, discusses how tools such as ChatGPT are shaping conversations in outpatient psychiatry.
Miller describes seeing patients with schizophrenia and bipolar disorder present AI-derived recommendations, sometimes with limited context or accuracy. He explains how clinicians can validate patient curiosity while clarifying misinformation, protect the therapeutic alliance, and assess when AI use may reinforce maladaptive thinking. Miller also outlines potential roles for AI in documentation and education, emphasizing the need to maintain comprehensive, clinician-guided care.
HCPLive: In your practice, are you already seeing patients use tools like ChatGPT for mental health advice, and what does that look like in real clinical encounters?
Miller: I've seen it a lot, and it's not always good, and there need to be some asterisks next to the information that they obtain in that context.
Just for reference, I do outpatient psychiatry…serving the sickest folks in the community, [those with] severe, persistent mental illness. A lot of patients [have] schizophrenia, bipolar disorder. I say that because those are the folks that are primarily coming to me with…statements [such as], “Hey, I plugged this into ChatGPT” or “I looked this up on AI, and this is what I've been told I need to talk to you about.”
These are individuals that at their baseline struggle with paranoia, sometimes with human validation…and they're putting stuff into an algorithm and getting an answer and maybe having a preconceived notion of what they want the answer to be. They're just coming to me with information which [is] not always accurate, largely inaccurate at times, and it's thrown together.
HCPLive: How are patients interpreting the guidance that they get? Are they treating it as supplemental or as a substitute for clinical care?
Miller: It's kind of both. There are some folks, I would say, that are [a] bit higher functioning that might come to me and say, “Hey, I was looking this up because I know we've chatted about this, and this is what ChatGPT has told me.” It might be based on a discussion we've had earlier…and it might supplement. Maybe it's something that is brand new and quite useful to both of us.
More often, it's actually deleterious, where information is kind of cherry-picked and the context is not always understood. Patients are kind of going on an extended trip and excursion to get the information that they're looking for…maybe [a] particular answer. I would say the [lower] the degree of functionality, in large part, I find it to be more deleterious than useful.
HCPLive: When a patient brings in AI-generated advice, how do you approach that conversation without undermining the trust or therapeutic alliance?
Miller: I always ask them where they're coming from and how they got the information. I always validate them and say, “Hey, thanks for bringing this to my attention. Thank you for sharing this with me. However, let's talk, or let's see what the entirety of the picture is.”
HCPLive: Do you think clinicians should be proactively asking about AI use as part of routine history taking?
Miller: Proactively, I'm not sure, but it should be part [of] the background, to make sure they understand that a lot of people are looking into this. That's not something that has to be [at] the forefront of the discussion, but something that needs to be kept on the [back] burner.
HCPLive: What concerns you most about patients with mental illness using AI as a therapist or psychiatrist?
Miller: There's a reason I went to medical school…we need to put together the entire clinical picture.
My biggest concern is that patients are getting information piecemeal and [poorly] putting it together to create a recipe or a puzzle… The pieces are not appropriate. They're not getting the entire clinical picture.
HCPLive: Are there specific red flags that suggest AI use is becoming clinically problematic or reinforcing maladaptive thinking?
Miller: It really depends. I don't want you to think I'm in a clinic where people are coming in every session with AI information. It comes up [about] 3 times a week.
It's caught me off guard sometimes; sometimes people you wouldn't expect [use AI]. It's becoming more prominent. It's kind of forcing us, in a good way, to [stay on] our toes, to make sure that we are aware that these technologies are out there, and sometimes they are sources of misinformation.
HCPLive: Where do you see the most practical, appropriate role for AI in psychiatric care, whether in documentation, screening, or decision support?
Miller: There is a lot of AI software that's helping with documentation. I don't use it, not because I am against it, but because we don't support it in my…place of employment.
It's good to encourage patients to think more and learn more about their illness, but [it] really depends on the patient. Somebody who's paranoid…is maybe going to be more susceptible to misinformation…whereas somebody who [does] not suffer from psychosis might be able to see through it a little bit and [understand that] some of [the information] may not be accurate.