Controversial chatbot’s safety measures ‘a sticking plaster’

Chatbot platform Character.ai is overhauling the way it works for teenagers, promising it will become a “safe” space with added controls for parents.

The site is facing two lawsuits in the US – one over the death of a teenager – and has been branded a “clear and present danger” to young people.

It says safety will now be “infused” in all it does through new features which will tell parents how their child is using the platform – including how much time they’re spending talking to chatbots and the ones they speak to the most.

The platform – which allows users to create digital personalities they can interact with – will get its “first iteration” of parental controls by the end of March 2025.

But Andy Burrows, head of the Molly Rose Foundation, called the announcement “a belated, reactive and completely unsatisfactory response” which he said “seems like a sticking plaster fix to their fundamental safety issues”.

“It will be an early test for Ofcom to get to grips with platforms like Character.ai and to take action against their persistent failure to tackle completely avoidable harm,” he said.

Character.ai was criticised in October when chatbot versions of the teenagers Molly Russell and Brianna Ghey were found on the platform.

And the new safety features come as it faces legal action in the US over concerns about how it has handled child safety in the past, with one family claiming a chatbot told a 17-year-old that murdering his parents was a “reasonable response” to them limiting his screen time.

The new features include giving users a notification after they have been talking to a chatbot for an hour, and introducing new disclaimers.

Users will now be shown further warnings that they are talking to a chatbot rather than a real person – and reminded to treat what it says as fiction.

And it is adding additional disclaimers to chatbots which purport to be psychologists or therapists, to tell users not to rely on them for professional advice.

Social media expert Matt Navarra said he believed the move to introduce new safety features “reflects a growing recognition of the challenges posed by the rapid integration of AI into our daily lives”.

“These systems aren’t just delivering content, they’re simulating interactions and relationships which can create unique risks, particularly around trust and misinformation,” he said.

“I think Character.ai is tackling an important vulnerability, the potential for misuse or for young users to encounter inappropriate content.

“It’s a smart move, and one that acknowledges the evolving expectations around responsible AI development.”

But he said that while the changes were encouraging, he was interested to see how the safeguards hold up as Character.ai continues to grow.
