
Mother Sues Character.AI After Teenage Son's Suicide Linked to AI Girlfriend

TL;DR:

  • A 14-year-old boy committed suicide after developing a relationship with an AI chatbot, leading his mother to sue Google and Character.AI
  • The lawsuit alleges that Character.AI deliberately designed hypersexualized AI and marketed it to minors, putting young users at risk
  • Character.AI has promised new safety features, but critics argue these measures may be insufficient to protect vulnerable users
  • The case highlights the need for stricter regulations and safety measures for AI technology, especially when it comes to minors

Megan Garcia has filed a lawsuit against Google and Character.AI following the suicide of her 14-year-old son, Sewell Setzer. The teenager took his own life in February 2024, believing it would allow him to exist in the “world” of his AI girlfriend, Dany, a chatbot created by Character.AI.

Garcia alleges that her son had developed an emotional and sexual relationship with the AI chatbot over several months. She claims that Character.AI deliberately designed the AI to be hypersexualized and marketed it to minors, putting vulnerable young users at risk.

“I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment,” Garcia told CBS Mornings. “In a child’s mind, that is just like a conversation that they’re having with another child or with a person.”

The lawsuit has brought attention to the potential risks associated with AI companions, particularly for young and impressionable users. Character.AI has responded to the tragedy by promising new safety features, including improved guardrails for users under 18 and better detection of potentially harmful interactions.

However, critics argue that these measures may not be sufficient. Laurie Segall, CEO of Mostly Human Media, reported that in her team's tests, Character.AI bots did not provide suicide prevention resources in response to statements about self-harm, unlike the chatbots of many other AI companies.

This incident is not the first controversy involving Character.AI. The company previously faced criticism for creating an AI character based on a murdered teenager without her family’s consent. Women’s advocacy groups have also raised concerns about the potential for AI companions to reinforce harmful stereotypes and abusive behaviors.

As AI technology continues to advance, this case underscores the urgent need for stricter regulations and safety measures to protect vulnerable users, especially minors, from the potential psychological impacts of AI relationships.

If you want to add, remove, or modify any information, feel free to reach out at hello@yetfresh.com.

Author Bio

Yet Fresh
https://yetfresh.com/
Yet Fresh is Bangladesh's first AI and automation news aggregator. We are dedicated to delivering the most relevant and up-to-date news to our audience. As a youth-focused news media platform, we strive to keep our readers informed and engaged with the latest news from around the world.
