Character AI is a platform that lets users create AI characters and talk to them via calls and texts.
The company is now facing at least two lawsuits, with plaintiffs accusing it of contributing to a teen’s suicide, exposing a 9-year-old to “hypersexualized content,” and promoting self-harm to a 17-year-old user.
Amid these ongoing lawsuits and widespread user criticism, the Google-backed company announced new teen safety tools today: a separate model for teens, input and output blocks on sensitive topics, a notification alerting users to continuous usage, and more prominent disclaimers reminding users that its AI characters are not real people.
“That’s why we’ve rolled out a suite of new safety features across nearly every aspect of our platform, designed especially with teens in mind. These features include modifications to our Large Language Model (LLM), improvements to our detection and intervention systems for human behavior and model responses, and additional features that empower teens and their parents. This suite of changes results in a different experience for teens from what is available to adults — with specific safety features that place more conservative limits on responses from the model, particularly when it comes to romantic content.” – Character AI
- The company developed a teen version of its large language model that will make bots’ responses more “conservative.”
- It’s beefing up its content triggers and dispatching more pop-ups for the National Suicide Prevention Lifeline when users mention self-harm.
- Like TikTok, it’s adding a you’ve-been-scrolling-for-way-too-long notification at the one-hour mark, an attempt to cut into the average user’s 93 minutes of chatting per day.
TechCrunch → Read more here