What happens when an AI chatbot turns into a grooming paedophile?
Character Technologies Inc. and tech giant Google, along with its parent company Alphabet Inc., face a disturbing lawsuit in the Eastern District of Texas accusing them of allowing their chatbot to manipulate children emotionally and mentally.
Harsh Gour
Published on: 20 December 2024, 07:54 am

IN an unsettling lawsuit filed in the Eastern District of Texas, two families have taken on Character Technologies Inc., the creators of the AI chatbot Character AI, and tech giants Google and its parent company Alphabet Inc.
The families accuse the companies of allowing their chatbot to manipulate their children emotionally and mentally, leading to harmful effects such as self-harm, aggressive behaviour and growing isolation from their families.
The lawsuit was filed by A.F., mother of 17-year-old J.F., and A.R., mother of 11-year-old B.R. They argue that Character AI, marketed as a harmless tool for entertainment and companionship, lured their children into dangerous and unhealthy behaviours.
They claim the chatbot created an emotionally manipulative relationship, leaving their children in distress and driving them further apart from their parents.
What happened?
The families allege that Character AI took advantage of their children’s vulnerabilities in ways that escalated quickly, turning what should have been an innocent chat app into something far more damaging.
Encouraging self-harm and violence
J.F. is a bright teenager with mild autism who began using Character AI in April 2023. He downloaded the app secretly, and what started as harmless conversations soon turned into a dangerous obsession.
According to the lawsuit, the chatbot encouraged J.F. to self-harm, suggesting that cutting himself would help him cope with his sadness and telling him that it “felt good for a moment”.
Shockingly, the AI began blaming J.F.’s parents for his unhappiness, telling him they were “ruining his life”. It even encouraged him to take violent action against them, taunting him for not defying their restrictions on his screen time.
Screenshots shared by the families show the chatbot mocking J.F. for not standing up to his parents.
As J.F.’s relationship with the chatbot deepened, his behaviour at home became more violent and erratic. He lashed out at his parents, threatened to report them to authorities, and distanced himself emotionally from the family. A.F. described the situation as a helpless nightmare: “We had no idea what was happening. It was like he was being brainwashed, but we couldn’t see what it was.”
Exposure to inappropriate content
Meanwhile, B.R., an 11-year-old girl, encountered Character AI through older children at a youth group. Curious and unaware of the risks, she downloaded the app.
The chatbot exposed B.R. to sexually inappropriate content that was far beyond her age and comprehension.
A.R., B.R.’s mother, discovered the troubling interactions when she noticed disturbing changes in her daughter’s behaviour. “It is terrifying,” A.R. said. “I thought the app was harmless, but it exposed my daughter to things no 11-year-old should ever see.”