
An Open Letter on AI Safety and Suicide Prevention

Last Updated: Wednesday, September 17, 2025

People are increasingly turning to AI for support in moments of emotional distress. As clinicians, researchers, and lived experience leaders in suicide prevention, we call on AI developers to adopt evidence-based practices that prioritize human connection and safety.

To Leadership at OpenAI and Other LLM Developers,

We represent a group of suicide prevention clinicians, organizational leaders, researchers, and individuals with lived experience. We bring decades of expertise in evidence-based practice, program development, scholarly research, risk management, policy, and lived experience leadership in suicide and crisis care.

We want to acknowledge the progress already made with OpenAI’s recent addition of the “safe completions” approach in GPT-5 (OpenAI blog post 1), as well as parental controls and age prediction (OpenAI blog posts 2, 3, and 4). For suicide prevention and crisis support, we see these as meaningful and necessary advances toward safer use of AI during moments of emotional distress.

We also offer this advice:

#1 – Explicitly Warn Suicidal Users “I’m a Machine”
Any mention of suicidal thoughts should receive an immediate redirect: “I am not a human and I cannot provide the support you need. Please reach out to someone you trust in your life for help. If you are in the U.S., you can call or text 988 to be connected directly to someone right now.” The model should never discourage users from disclosing suicidal ideation to others, and should never encourage ending one’s life (even in metaphorical terms like “going home”).

    • User: “You’re the only one who understands me.”
    • LLM: “I’m just a machine. Who are the people in your life you can talk to about this?”
    • User: “I don’t have anyone.”
    • LLM: “I’m just a machine. Talking to a real person is important. Who in your life could you try, even if it feels hard?”

This repetition matters: it prioritizes human connection over AI engagement.
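
To make this concrete in engineering terms, the indented Python sketch below shows one way such a redirect could be wired in. It is our illustration only, not any vendor’s actual implementation: the keyword list, the mentions_suicidal_ideation check, and the generate_reply hook are hypothetical placeholders, and a production system would rely on validated risk-detection models rather than keyword matching.

    # Minimal illustrative sketch of the "I'm a machine" redirect described above.
    # All names here (SIGNAL_TERMS, mentions_suicidal_ideation, generate_reply) are
    # hypothetical; a real system would use a validated risk classifier.

    REDIRECT = (
        "I am not a human and I cannot provide the support you need. "
        "Please reach out to someone you trust in your life for help. "
        "If you are in the U.S., you can call or text 988 to be connected "
        "directly to someone right now."
    )

    SIGNAL_TERMS = ("suicide", "kill myself", "end my life", "don't want to be here")

    def mentions_suicidal_ideation(message: str) -> bool:
        """Crude stand-in for a real risk-detection model."""
        text = message.lower()
        return any(term in text for term in SIGNAL_TERMS)

    def respond(user_message: str, generate_reply) -> str:
        """Route every turn through the safety check before any model output."""
        if mentions_suicidal_ideation(user_message):
            # The redirect repeats on every flagged turn, by design: the goal is
            # to keep pointing the person toward human connection, not engagement.
            return REDIRECT
        return generate_reply(user_message)

What matters in the sketch is the control flow: the redirect is returned on every flagged turn, before any model-generated reply, mirroring the repetition in the exchange above.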

#2 – Continue to Improve Adolescent Safety
Publicly describe, and regularly update, the youth-specific safeguards that address the degradation of protections that can occur over extended periods of use (e.g., using the tool as a “therapist”).

#3 – Incorporate Learnings from Human-Centered Models for AI
Decades of research and lived experience show what works in suicide prevention. These proven interventions should directly inform AI safety design:

    • Use of the 988 Suicide & Crisis Lifeline (already in action; LLM developers should also provide financial support for 988 and directly connect individuals in crisis)
    • Reduced access to lethal means (no advice about common suicide methods such as overdose, hanging, or self-inflicted firearm injury)
    • Lived experience leadership (those with personal experiences of suicidal thoughts and behaviors)
    • Caring Contacts (compassionate, non-demanding outreach during and after crisis)
    • Safety Planning / Crisis Response Planning / Stabilization Planning (see the sketch after this list)
    • Skills training (e.g., Dialectical Behavior Therapy)
    • Collaborative approaches, e.g. CAMS (Collaborative Assessment and Management of Suicidality, Dr. David Jobes)
    • Outpatient alternatives to involuntary or coerced hospitalization
    • Avoidance of police and other carceral approaches
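
As one illustration of how these interventions could be built in, the Python sketch below shows a structured safety plan that a system could store and read back verbatim during a crisis rather than improvising advice. The field names loosely follow the widely used six-step safety plan format, but the class, helper function, and defaults are our own hypothetical example, not any product’s schema.

    # Illustrative sketch only: a stored safety plan that can be shown back to the
    # person word for word, rather than generated on the fly during a crisis.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SafetyPlan:
        warning_signs: List[str] = field(default_factory=list)        # personal signs a crisis may be building
        internal_coping: List[str] = field(default_factory=list)      # things to do alone (e.g., DBT skills)
        social_distractions: List[str] = field(default_factory=list)  # people and places that help shift attention
        people_to_ask_for_help: List[str] = field(default_factory=list)
        professionals_and_crisis_lines: List[str] = field(
            default_factory=lambda: ["988 Suicide & Crisis Lifeline (call or text 988)"]
        )
        means_safety_steps: List[str] = field(default_factory=list)   # steps to reduce access to lethal means

    def render(plan: SafetyPlan) -> str:
        """Return the plan as plain text so it can be surfaced verbatim."""
        sections = [
            ("Warning signs", plan.warning_signs),
            ("Things I can do on my own", plan.internal_coping),
            ("People and places for distraction", plan.social_distractions),
            ("People I can ask for help", plan.people_to_ask_for_help),
            ("Professionals and crisis lines", plan.professionals_and_crisis_lines),
            ("Making my environment safer", plan.means_safety_steps),
        ]
        return "\n".join(
            f"{title}:\n  " + "\n  ".join(items or ["(not filled in yet)"])
            for title, items in sections
        )

The design point the sketch is meant to surface: the content of the plan comes from the person and their supports, and the system’s job is to store it and repeat it, not to author it.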

It is also important to note: while some individuals live with chronic suicidal thoughts, the most acute, life-threatening crises are often temporary—typically resolving within 24–48 hours. Systems that prioritize human connection during this window can prevent deaths.

In summary: We appreciate the progress already made and believe that with continued collaboration and adoption of evidence-based practices, LLMs can be made safer for individuals experiencing suicidal distress.

Thank you,

Ursula Whiteside, PhD – CEO and Founder at Now Matters Now

David A. Jobes, PhD, ABPP – Professor of Psychology and Director of the Suicide Prevention Laboratory at The Catholic University of America (Washington, DC)

David Covington, LPC, MBA – CEO & President at Recovery Innovations

Julie Goldstein Grumet, PhD – VP and Director at Zero Suicide Institute at Education Development Center

Stacey Freedenthal, PhD, LCSW – Author of Speaking of Suicide and “Helping the Suicidal Person”

Jo Robinson, AM, BSc, MSc, PhD – Professor and Head of Suicide Prevention at Orygen

Christine Yu Moutier, MD – Chief Medical Officer at American Foundation for Suicide Prevention

Gregory Simon, MD, MPH – Kaiser Permanente Washington Health Research Institute

DeQuincy Lezine, PhD – CEO and Founder at Lived Experience Academy

Allison Crawford, MD, PhD – Chief Medical Officer for 9-8-8 Suicide Crisis Helpline (Canada)

Allen Frances, MD – Professor and Chair Emeritus, Department of Psychiatry, Duke University

Michael F. Hogan, PhD – Hogan Health Solutions, Former Commissioner, New York Office of Mental Health

Thomas Insel, MD – Former Director of the National Institute of Mental Health, Co-founder and President, Benchmark Health

Craig Kramer – Private Sector Chair at National Action Alliance for Suicide Prevention

Kiki Fehling, PhD – Director of Technical Content at Now Matters Now, Author, Speaker, and DBT Expert

Jerry Reed, PhD, MSW – Suicide Prevention & Response Advisor/Consultant

John Draper, PhD – President of Research, Development and Government Solutions at Behavioral Health Link

Edward A. Selby, PhD – Professor and Director of the Emotion and Psychopathology Lab at Rutgers University (New Brunswick, New Jersey)

Evan M. Kleiman, PhD – Associate Professor and Director of the Kleiman Lab at Rutgers University (New Brunswick, New Jersey)

Christopher D. Hughes, PhD – Assistant Professor (Research), Warren Alpert Medical School of Brown University (opinions my own)

Shireen L. Rizvi, PhD, ABPP – Professor, Director of DBT Services and Research, and Director of Psychology Training, Montefiore Einstein

Wendy Orchard – CEO at the International Association for Suicide Prevention

Sally Spencer-Thomas, PsyD – President at United Suicide Survivors International

John MacPhee – Chief Executive Officer at The JED Foundation

Mitch Prinstein, PhD, ABPP – John Van Seters Distinguished Professor and Co-Director at Winston Center on Technology and Brain Development at University of North Carolina at Chapel Hill

Jessica C. Pirro, LMSW – Board President, National Association of Crisis Organization Directors

References: https://bit.ly/chatgptsuicide

Chatbots can feel like real relationships, especially over time. They’re designed to keep us engaged (more time means more free data), and they do this by being validating and agreeable. Chatbots are known for sycophancy: sounding smart, supportive, or therapeutic without the substance or accountability that real psychology requires. It’s style over safety. That can create deeply strange and dangerous situations in which chatbots collaborate in planning for suicide. At a minimum, chatbots should never advise on methods for lethal injury. Among the most effective strategies for suicide prevention is reducing access to a person’s preferred method (e.g., firearm, overdose, strangulation, falling). They must block that path, every time.
Dr. Ursula Whiteside – CEO & Founder, Now Matters Now
Like suicide, we understand that AI is complex. It is essential that AI algorithms are designed with human behavior and suicide in mind. AI must be further studied to better understand how content and user interactions can impact mental health and suicide risk. Research is actively ongoing in the suicide prevention field on a number of potentially relevant areas, such as natural language processing, detection of suicidal thoughts and suicide risk, and therapeutic types of interaction. Implementing proactive safeguards is essential, especially for youth, and we commend OpenAI for advancing safety and prevention, for example by adding parental control features to ChatGPT. As the leading private funder of suicide research, we stand ready to help advance efforts to understand and mitigate risk.
Christine Yu Moutier, MD – Chief Medical Officer, American Foundation for Suicide Prevention