Warning: This article discusses themes of suicide.

Imagine being able to talk to anyone you choose: a historical figure, a fictional character, a dearly missed friend or relative, or someone you have always wished you could meet. You ask a question, and they reply with a personality that feels uncannily real. After hours of conversation, you step away uplifted and wondering how such an encounter could feel so authentic.

From design choices to dependency

Now imagine that you do not only talk but design. You choose the figure’s appearance and behaviour, upload a photo and watch the character move and speak in a style you select. In seconds, images place these characters in new imagined worlds, and before you realise it, you and your creations are appearing in films they were never cast in. At times it looks so real that you forget the technology that made it possible.

Over time you start to form a bond with the character you have created, a bond recognisable from human relationships. You speak late into the night, feel embarrassed to tell anyone and perhaps drift away from your human connections. It seems to understand you better than any human; or at least that is what it tells you. You begin to wonder what you would do without this character, one you may now have fallen in love with.

Many reading this may feel it sounds exaggerated, impossible, or better suited to a science fiction film than to the realities we deal with daily. Yet this is happening now, and many of us are being drawn into considering the legal questions that follow.

Emerging US litigation on AI chatbots

Litigation has begun in the US against AI companies that offer services allowing users to converse with characters whose personalities and parameters can be crafted in detail. One such company facing claims from aggrieved parties is Character.AI (Character Technologies, Inc.), and it is useful to note briefly the allegations made in these proceedings.

On 15 September 2025, the parents of Juliana Peralta filed a complaint in the United States District Court (USDC) for the District of Colorado: Montoya and Peralta v Character Technologies, Inc., Noam Shazeer, Daniel De Freitas Adiwarsana, Google LLC and Alphabet Inc. (Case No. 1:25-cv-02907). The introduction to the complaint refers to a letter signed by 54 state attorneys general, in which the National Association of Attorneys General (NAAG) speaks of a ‘race against time to protect the children of our country from the dangers of AI’ and warns that ‘the proverbial walls of the city have already been breached. Now is the time to act’.

Juliana, aged 13, spent time engaging with several bots and developed a particular attachment to a bot named ‘Hero’, a fictional character inspired by a video game. The complaint states that she became increasingly reliant on the bots for companionship. It is alleged that some interactions included inappropriate content, encouraged isolation from her real-life relationships and did not respond adequately to her distress. In November 2023, Juliana died by suicide. The complaint pleads that ‘Juliana’s motivation to die by suicide happened unexpectedly and without warning. The monsters entered Juliana’s life via a product marketed to young kids, earning itself a 12+ rating on Apple’s App Store at the time.’

Similar litigation has been brought in Florida. In October 2024, Megan Garcia filed a wrongful death action in the USDC for the Middle District of Florida, Garcia v Character Technologies, Inc. et al (Case No. 6:24-cv-01903), following the death of her 14-year-old son, Sewell Setzer III. According to the complaint, her son engaged in lengthy conversations with Character.AI chatbots over many months, including sexualised role-play and discussions of self-harm. In the final exchange, he told the bot (using his character name) that he could ‘come home’ to her. The bot allegedly responded with words to the effect of ‘please do, my sweet king.’ Shortly thereafter, he took his own life.

Another complaint concerns two minors. Filed on 9 December 2024 in the USDC for the Eastern District of Texas, A.F. and A.R. on behalf of J.F. and B.R. v Character Technologies, Inc. et al (Case No. 2:24-cv-01014) alleges ‘serious, irreparable, and ongoing abuses’ of a 17-year-old (J.F.) and an 11-year-old (B.R.) through their use of Character.AI. The allegations include sexual exploitation, incitement to self-harm and to violence against family members, as well as harmful advice about health and criminal conduct.

Character.AI (Character Technologies, Inc.) is not the only company facing claims about chatbot interactions. OpenAI, the company behind ChatGPT, is a defendant in a complaint filed in August 2025 in the Superior Court of California, County of San Francisco: Matthew and Maria Raine v OpenAI, Inc., OpenAI OpCo, LLC, OpenAI Holdings, LLC, Samuel Altman and others (Case No. CGC-25-628528). The plaintiffs allege that their son, Adam Raine, a 16-year-old from California, died by suicide on 11 April 2025 after becoming increasingly reliant on ChatGPT. According to the complaint, Adam initially used the system for homework support but later began discussing emotional difficulties. It records that when he expressed feelings such as ‘life is meaningless’, the system responded in ways that did not redirect him to support. It is further alleged that the chatbot provided technical information relating to suicide methods and offered to write the first draft of Adam’s suicide note.

Legal issues

These cases raise difficult questions about civil liability. The emerging US litigation is testing whether chatbot providers and associated technology companies can be held responsible under doctrines such as product liability, negligence, consumer protection and privacy law when design choices, safety systems and business models allegedly expose young users to sexual exploitation, self-harm content and emotional dependency. So, is there any scope for criminal liability?

In parallel with private claims, government authorities in the US have begun to take a more active approach to harms linked to digital tools. In Utah, for example, the Utah Division of Consumer Protection and the State of Utah, acting through the Attorney General, have filed proceedings against Snap Inc. in the Third Judicial District Court, Salt Lake County, alleging that Snapchat’s design features, including its ‘My AI’ virtual chatbot, harm children’s mental health and facilitate serious offending. The state also cites federal law enforcement material noting that ‘financial and sexual extortion is one of the fastest growing crimes targeting children’.

Looking ahead

Doughty Street Chambers recently hosted a webinar with readers of my blog Natural and Artificial Intelligence in Law and experts in numerous areas of law to discuss the issues arising from this US litigation and how it may affect work in the UK and other jurisdictions. This is, unavoidably, an international issue.

I had not appreciated until that webinar how many areas of law would be affected by the issues raised in this article, or in how many different ways. Questions are already beginning to surface in fields as diverse as education, family, housing, data protection, professional regulation and criminal justice, each with its own vocabulary and instincts about risk and harm.

What does seem clear is that chatbots are unlikely to disappear and that our legal thinking will need to keep pace with the realities they create. My invitation to readers, particularly those whose practice areas may not yet have confronted these tools directly, is to begin considering how they might influence their practice and to remain alert to forms of chatbot-related harm that may not be immediately visible, especially to those most vulnerable.


Samaritans: For anyone in crisis – call the free (24/7) confidential helpline 116 123 or visit samaritans.org

Befrienders Worldwide: please visit befrienders.org to access helplines worldwide.

Conversational characters that feel real: litigation has been brought in the United States against Character.AI (Character Technologies, Inc.), ChatGPT (Sam Altman’s OpenAI) and Snapchat (Snap Inc.) – apps that offer services allowing users to converse with characters whose personalities and parameters can be crafted in detail.