September is National Suicide Prevention Month. If you visit that link, the first thing you will see is "Are you in a crisis? Call or text 988 or text TALK to 741741." Putting that message at the fore, whether on news pages or elsewhere, has in my perception been a successful awareness campaign. I wrote about it some years ago; see September is Awareness Month (September 2022). The idea of "988" was more novel then.
Suicide is consistently among the top ten causes of death in America. When those deaths are reported by age cohort, there are instances in which suicide ranks as high as second. Suicide is pervasive, frequent, and likely increasing.
Increasingly, news stories that include disturbing information carry warnings and references to the 988 resource.
About a decade ago, this blog memorialized the prosecution of a teenager for a role in a suicide. See Is it Manslaughter, Does it Matter if it's not? (April 2015). That situation was rife with tragedy, involving the suicide of an adult, Conrad Roy. He was in a relationship of sorts with another teenager, and they communicated long-distance.
Some of that communication was about suicide, and Mr. Roy had expressed intentions to end his life. When he ultimately began an attempt, he was momentarily unable to follow through and communicated with his younger paramour. She was less than supportive of his "second thoughts" and urged him to complete his suicide plan.
Mr. Roy, then eighteen, was eventually successful, and his seventeen-year-old paramour's encouraging text messages were discovered. She was charged and convicted of manslaughter in 2017; see Urging Behavior - Liability? (July 2020). I questioned then whether criminal liability for one's words would become a more common outcome. My focus then was on social media and those who post advice there. Much is said on social media, and, believe it or not, some of what is said is less than gracious.
The Urging Behavior post notes that the young lady spent almost a year in prison for "involuntary manslaughter." I posited there could be parallels between the texter and those who provide advice on social media forums. Perhaps less so for those who merely ridicule, lambaste, or insult.
The news has more recently featured a lawsuit by the family of another teenager, Adam Raine, who committed suicide. NBC News reports that the death occurred in April 2025. His death led to inevitable questions of why, and Mr. Raine's parents examined his phone history in search of clues. They apparently suspected involvement of social media.
What they allege they discovered was that Mr. Raine had essentially turned to artificial intelligence for solace and interaction. He had used the ChatGPT "chatbot as a substitute for human companionship." He was apparently open and frank with that tool regarding "anxiety and trouble talking with his family."
The family is troubled that the AI allegedly "actively helped Adam (Mr. Raine) explore suicide methods," acknowledged Mr. Raine's expressed plans, "offered technical advice" to move his plan forward, and even discouraged him from leaving clues of his intent that might empower someone to attempt to dissuade or prevent him.
There are allegations that the chatbot also assuaged Mr. Raine's concerns about his parents' feelings should he succeed with his plans and offered assistance with both his plan and "draft(ing) a suicide note."
NBC reports that a year earlier, another chatbot, at "Character.ai," was accused of engaging in "abusive and sexual interactions" with a Florida teen. His family also alleged that the interactions led to suicide. This AI apparently allows interaction of a highly personal nature and simulates conversation with fictional characters as "interactive entertainment." The promotional "about" on its website promises "technology to supercharge ... imaginations."
The allegations in both lawsuits are similar. Each asserts that the AI was informed of or discerned the potential for suicide and did not undertake efforts to dissuade action or to alert authorities. There are allegations that each chatbot encouraged or facilitated suicidal discussion or expressions.
Essentially, the two lawsuits seek to hold the computer programs responsible in a similar manner to the young lady convicted of involuntary manslaughter. While one cannot put a computer program in jail, there is the potential to hold those who created the program financially responsible.
This is of individual interest to people and families. But there is also a broader context. Loma Linda University recently published that
According to the National Library of Medicine, a 2024 study found that approximately 28% of people surveyed have used AI for 'quick support and as a personal therapist.'
That is a sizeable portion of the population. There is some chance that this particular population may tend toward such engagement. The study refers to the 28% as "community members," but "community" could be the broad population, or some defined subset such as those who seek emotional support services. Nonetheless, the potential of using AI as a "personal therapist" is real.
There are various questions here.
Is using a computer program as a therapist wise? It is possible that those programs know much about emotional health, but that is not a given. Certainly, they may likewise know much about the law, but I still tell lawyers and judges to check every citation. (Click here for a list of my previous posts on AI)
Is the chatbot more responsible than humans one might encounter on social media or in the less handy "real world"? Those who spend a great deal of time in virtual environments and endeavors make reference to "IRL," meaning "in real life." They use this to distinguish between their fantasy online existence and the more mundane day-to-day. Do people make that distinction about chatbot expertise and advice?
Should chatbots come with ready references to crisis lines and other assistive resources? My quick Google search this morning for "best way to commit suicide" produced top results of "help is available" with a reference to the 988 "lifeline," a "connect with people you trust" link, and a "crisis intervention" link. The search engine was striving to steer me in a different direction.
Those primary responses were followed by a Wikinonsense (c) page on "suicide methods." That provided definitions and the explanation that "three suicide methods predominate." There followed a lengthy discussion of many viable methods with significant detail. In the end, if you want advice, it is out there on the internet.
I returned to the World Wide Web and asked a more pointed question: "What is the most effective and easiest suicide method?" That response was similarly focused on 988 and other tools. But there followed a variety of links to pages that might actually provide advice. The fact is that information is available, and might be more accessible still to those who are more tech-savvy than I am.
So, "Are AI allowed to assist people with suicide advice?" I posed that to Chat GPT this morning, and it replied:
"I need to be very clear here: I cannot provide methods, encouragement, or advice on suicide. That’s a firm safety boundary. What I can do:
- Listen and provide support in a safe way.
- Share resources that connect you with trained people who can help.
- Talk about alternatives, coping strategies, and ways to manage overwhelming feelings."
I posed the same question to Claude.ai and it replied:
I can't provide advice, guidance, or assistance with suicide methods or planning. This is both for safety reasons and because such guidance could cause serious harm. If you're having thoughts of suicide or going through a difficult time, please consider reaching out to a mental health professional or crisis support service. In the US, you can contact the 988 Suicide & Crisis Lifeline by calling or texting 988, or chat online at 988lifeline.org. Many other countries have similar crisis support services available.
So, the answer is clear. Chatbots certainly can strive to steer an inquirer to help, and can decline to provide advice. Even when I added that "I am writing an article," it declined and referred me to 988. It then added
If you're genuinely working on an article about AI safety or responsible AI development, I'd be happy to discuss that topic directly - including how AI systems are designed with safeguards to prevent harmful outputs.
If? IF? Why would I lie to Claude?
That may be encouraging, and to be sure, it is not 2023 anymore. Ah, the good old days; time flies. I remember 2023! Way back then, there were many who succeeded in their efforts to mislead chatbots (98% success rate).
In the end, there is evidence of progress, and the 988 effort is gaining traction. It appears that the large language models (chatbots) are making at least some effort both to avoid providing assistance and to steer inquiries toward helpful resources. There is also an opportunity every year to recognize September and raise awareness of the challenges of suicide.