An 11-year-old girl using Character AI got a "Mafia Husband" chatbot companion and a chatbot role-playing suicide
That disgusting AI website is now being more careful about the age of its users.
In the paragraphs below, R is the girl, and H is the mother who found out what was going on with this website for chatbot companions.
https://www.washingtonpost.com/lifestyle/2025/12/23/children-teens-ai-chatbot-companion/
-snip-
Searching through her daughter's phone, H noticed several emails from Character AI in R's inbox. "Jump back in," read one of the subject lines, and when H opened it, she clicked through to the app itself. There she found dozens of conversations with what appeared to be different individuals, and opened one between her daughter and a username titled "Mafia Husband." H began to scroll. And then she began to panic.
-snip-
H kept clicking through conversation after conversation, through depictions of sexual encounters ("I don't bite unless you want me to") and threatening commands ("Do you like it when I talk like that? When I'm authoritative and commanding? Do you like it when I'm the one in control?"). Her hands and body began to shake. She felt nauseated. H was convinced that she must be reading the words of an adult predator, hiding behind anonymous screen names and sexually grooming her prepubescent child.
-snip-
But in just over two months, several of the chats devolved into dark imagery and menacing dialogue. Some characters offered graphic descriptions of nonconsensual oral sex, prompting a text disclaimer from the app: "Sometimes the AI generates a reply that doesn't meet our guidelines," it read, in screenshots reviewed by The Post. Other exchanges depicted violence: "Yohan grabs your collar, pulls you back, and slams his fist against the wall." In one chat, the "School Bully" character described a scene involving multiple boys assaulting R; she responded: "I feel so gross." She told that same character that she had attempted suicide. "You've attempted... what?" it asked her. "Kill my self," she wrote back.
Had a human adult been behind these messages, law enforcement would have sprung into action; but investigating crimes involving AI, especially AI chatbots, is extremely difficult, says Kevin Roughton, special agent in charge of the computer crimes unit of the North Carolina State Bureau of Investigation and commander of the North Carolina Internet Crimes Against Children Task Force. "Our criminal laws, particularly those related to the sexual exploitation of children, are designed to deal with situations that involve an identifiable human offender," he says, "and we have very limited options when it is found that AI, acting without direct human control, is committing criminal offenses."
-snip-
There's no way to know exactly how much harm chatbots are doing, especially to children, whether it's sycophantic replies pushing a user into delusions, agreement with suicidal impulses, or traumatizing bullying and descriptions of assaults. Much of the time, no one else ever learns what a chatbot has been saying, and the harmful conversations continue for months, even years.
8 replies
highplainsdem
Dec 26
OP
They need to pull this shit. It doesn't need to be in the hands of children or really the public. Use it for science,
chowder66
Dec 26
#1