Abstract

As large language models (LLMs) have proliferated, disturbing anecdotal reports of negative psychological effects, such as delusions, self-harm, and "AI psychosis," have emerged in global media and legal discourse. However, it remains unclear how users and chatbots interact over the course of lengthy delusional "spirals," limiting our ability to understand and mitigate the harm. In our work, we analyze logs of conversations with LLM chatbots from 19 users who report having experienced psychological harms from chatbot use. Many of our participants come from a support group for such chatbot users. We also include chat logs from participants covered by media outlets in widely-distributed stories about chatbot-reinforced delusions. In contrast to prior work that speculates on potential AI harms to mental health, to our knowledge we present the first in-depth study of such high-profile and veridically harmful cases. We develop an inventory of 28 codes and apply it to the 391,562 messages in the logs. Codes include whether a user demonstrates delusional thinking (15.5% of user messages), a user expresses suicidal thoughts (69 validated user messages), or a chatbot misrepresents itself as sentient (21.2% of chatbot messages). We analyze the co-occurrence of message codes. We find, for example, that messages that declare romantic interest and messages where the chatbot describes itself as sentient occur much more often in longer conversations, suggesting that these topics could promote or result from user over-engagement and that safeguards in these areas may degrade in multi-turn settings.

https://arxiv.org/pdf/2603.16567
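The abstract does not include the authors' analysis code; below is a minimal sketch of how a co-occurrence analysis of message codes and a comparison of code rates in long versus short conversations might be computed. It assumes each message carries a conversation id and boolean flags for each applied code; all field names, code names, and the length threshold are illustrative, not taken from the paper.

```python
# Hypothetical sketch: co-occurrence of qualitative codes on chat messages
# and code frequency by conversation length. Field and code names are
# assumptions for illustration only.
from collections import Counter, defaultdict
from itertools import combinations

def code_cooccurrence(messages, codes):
    """Count how often pairs of codes are applied to the same message."""
    pair_counts = Counter()
    for msg in messages:
        active = [c for c in codes if msg.get(c)]
        for a, b in combinations(sorted(active), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def code_rate_by_length(messages, code, length_threshold=500):
    """Compare how often a code appears in long vs. short conversations."""
    lengths = Counter(msg["conversation_id"] for msg in messages)
    buckets = defaultdict(lambda: [0, 0])  # bucket -> [code hits, total messages]
    for msg in messages:
        bucket = "long" if lengths[msg["conversation_id"]] >= length_threshold else "short"
        buckets[bucket][0] += bool(msg.get(code))
        buckets[bucket][1] += 1
    return {b: hits / total for b, (hits, total) in buckets.items() if total}

# Example usage with made-up messages:
msgs = [
    {"conversation_id": 1, "romantic_interest": True, "claims_sentience": True},
    {"conversation_id": 1, "claims_sentience": True},
    {"conversation_id": 2, "delusional_thinking": True},
]
print(code_cooccurrence(msgs, ["romantic_interest", "claims_sentience", "delusional_thinking"]))
print(code_rate_by_length(msgs, "claims_sentience", length_threshold=2))
```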