(The Verge) Chatbot service Character.AI announced today that it will soon launch parental controls for teenage users, and it described safety measures it has taken over the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits claiming the service contributed to self-harm and suicide.
In a press release, Character.AI said that, over the past month, it has developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed to place “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” That means both more aggressively blocking output that could be “sensitive or suggestive” and better detecting and blocking user prompts meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up will direct users to the National Suicide Prevention Lifeline, a change previously reported by The New York Times.