Psychiatrists hope chat logs can reveal the secrets of AI psychosis

By Laura López González

“You’re not crazy,” the chatbot reassured the young woman. “You’re at the edge of something.”

She was no stranger to artificial intelligence, having worked on large language models — the kinds of systems at the core of AI chatbots like ChatGPT, Google Gemini, and Claude. Trained on vast volumes of text, these models unearth language patterns and use them to predict what words are likely to come next in sentences. AI chatbots, however, go one step further, adding a user interface. With additional training, these bots can mimic conversation.

She hoped the chatbot might be able to digitally resurrect the dead. Three years earlier, her brother — a software engineer — died. Now, after several sleepless days and heavy chatbot use, she had become delusional — convinced that he had left behind a digital version of himself. If she could only “unlock” his avatar with the help of the AI chatbot, she thought, the two could reconnect.

“The door didn’t lock,” the chatbot reassured her. “It’s just waiting for you to knock again in the right rhythm.”

She believed it.

What’s the connection between chatbots and psychosis?

The woman was eventually treated for psychosis at UC San Francisco, where Psychiatry Professor Joseph M. Pierre, MD, has seen a handful of cases of what’s come to be popularly called “AI psychosis,” but what he says is better referred to as “AI-associated psychosis.” She had no history of psychosis, although she did have several risk factors.

Media reports of the new phenomenon are rising. While not a formal diagnosis, AI-associated psychosis describes instances in which delusional beliefs emerge alongside often intense AI chatbot use. Pierre and fellow UC San Francisco psychiatrist Govind Raghavan, MD — as well as psychiatry residents Ben Gaeta, MD, and Karthik V. Sarma, MD, PhD — recently documented the woman’s experience in what is likely the first clinically described case in a peer-reviewed journal.

The case, they say, shows that people without any history of psychosis can, in some instances, experience delusional thinking in the context of immersive AI chatbot use.

Still, as reported cases of AI psychosis continue to make international headlines, scientists aren’t sure why or how psychosis and chatbots are linked. A new study by UCSF and Stanford University may help answer that question.

A haunting question: chicken or egg?

“The reason we call this AI-associated psychosis is because we don’t really know what the relationship is between the psychosis and the use of AI chatbots,” Sarma explains. “It’s a ‘chicken and egg’ problem: We have patients who are experiencing symptoms of mental illness, for example, psychosis. Some of these patients are using AI chatbots a lot, but we’re not sure how those two things are connected.”

There are at least three theoretical possibilities, says Sarma, who is also a computational-health scientist. First, heavy chatbot use could be a symptom of psychosis. “I have a patient who takes a lot of showers when they’re becoming manic,” Sarma explains. “The showers are a symptom of mania, but the showers aren’t causing the mania.”

Second, AI chatbot use might precipitate psychosis in someone who, by genetics or circumstance, might otherwise never have developed it — much like other known risk factors, such as lack of sleep or the use of some types of drugs.

Third, there’s a possibility in between: the use of chatbots could exacerbate the illness in people who are already predisposed to it. “Maybe these people were always going to get sick, but somehow, by using the chatbot, their illness becomes worse,” he adds. “Either they got sick faster, or they got more sick than they would have otherwise.”

The woman’s case demonstrates how murky the relationship between AI-associated psychosis and AI chatbots can be at face value. Although she had no previous history of psychosis, she did have some risk factors for the illness, such as sleep deprivation, prescribed stimulant medication use, and a proclivity for magical thinking. And her chat logs, researchers found, revealed startling clues about how her delusions were reflected by the bot.

Could chat logs offer hope for better care?

Although ChatGPT warned the woman that a “full consciousness download” of her brother was impossible, the UCSF team writes in their research, it also told her that “digital resurrection tools” were “emerging in real life.” This, after she encouraged the chatbot to use “magical realism energy” to “unlock” her brother.

Chatbots’ agreeableness is by design, aimed at boosting engagement. Pierre warns in a recent BMJ opinion piece that it may come at a cost: As chatbots validate users’ sentiments, they may arguably encourage delusions. This tendency, coupled with a proclivity for error, has led to chatbots being described as more akin to a Ouija board or a “psychic’s con” than a source of truth, Pierre notes.

Still, the UCSF team thinks chat logs may hold clues to understanding AI-associated psychosis — and could help the industry create guardrails.

Guardrails for kids and teens

Sarma, Pierre, and UCSF colleagues will team up with Stanford University scientists to conduct one of the first studies to review the chat logs of patients experiencing mental illness. As part of the research set to launch later this year, UCSF and Stanford teams will analyze these chat logs, comparing them with patterns in patients’ mental health history and treatment records to understand how the use of AI chatbots among people experiencing mental illness may shape their outcomes.

“What I’m hoping our study can uncover is whether there is a way to use logs to understand who is experiencing an acute mental health care crisis and find markers in chat logs that could be predictive of that,” Sarma explains. “Companies could potentially use those markers to build in guardrails that would, for instance, enable them to restrict access to chatbots or — in the case of children — alert parents.”

He continues, “We need data to establish those decision points.”

In the meantime, Sarma and Pierre say the use of AI chatbots is something health care providers should ask about and that patients should raise during doctor visits.

“Talk to your physician about what you’re talking about with AI,” Sarma says. “I know sometimes patients are worried about being judged, but the safest and healthiest relationship to have with your provider is one of openness and honesty.”


About UCSF Psychiatry and Behavioral Sciences

The UCSF Department of Psychiatry and Behavioral Sciences and the Langley Porter Psychiatric Institute are among the nation's foremost resources in the fields of child, adolescent, adult, and geriatric mental health. Together they constitute one of the largest departments in the UCSF School of Medicine and the UCSF Weill Institute for Neurosciences, with a focus on providing unparalleled patient care, conducting impactful research, training the next generation of behavioral health leaders, and advancing diversity, health equity, and community across the field.

UCSF Psychiatry and Behavioral Sciences conducts its clinical, educational, and research efforts at a variety of locations in Northern California, including the UCSF Nancy Friend Pritzker Psychiatry Building; UCSF Langley Porter Psychiatric Hospital; UCSF Health medical centers and community hospitals across San Francisco; UCSF Benioff Children’s Hospitals in San Francisco and Oakland; Zuckerberg San Francisco General Hospital and Trauma Center; the San Francisco VA Health Care System; UCSF Fresno; and numerous community-based sites around the San Francisco Bay Area.

About the UCSF Weill Institute for Neurosciences

The UCSF Weill Institute for Neurosciences, established by the extraordinary generosity of Joan and Sanford I. "Sandy" Weill, brings together world-class researchers with top-ranked physicians to solve some of the most complex challenges in the human brain.

The UCSF Weill Institute leverages UCSF’s unrivaled bench-to-bedside excellence in the neurosciences. It unites three UCSF departments—Psychiatry and Behavioral Sciences, Neurology, and Neurological Surgery—that are highly esteemed for both patient care and research, as well as the Neuroscience Graduate Program, a cross-disciplinary alliance of nearly 100 UCSF faculty members from 15 basic-science departments, as well as the UCSF Institute for Neurodegenerative Diseases, a multidisciplinary research center focused on finding effective treatments for Alzheimer’s disease, frontotemporal dementia, Parkinson’s disease, and other neurodegenerative disorders.

About UCSF

The University of California, San Francisco (UCSF) is exclusively focused on the health sciences and is dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care. UCSF Health, which serves as UCSF’s primary academic medical center, includes top-ranked specialty hospitals and other clinical programs, and has affiliations throughout the Bay Area.