'It cannot provide nuance': UK experts warn AI therapy chatbots are not safe

Original link: https://www.theguardian.com/technology/2025/may/07/experts-warn-therapy-ai-chatbots-are-not-safe-to-use

Zuckerberg envisions AI chatbots serving as supplementary therapists, complementing human interaction and filling the gap for people who cannot access mental health support. He believes AI can help users work through difficult conversations and personal problems. Mental health professionals, however, question AI's ability to offer nuanced advice and warn it may give inappropriate or even harmful guidance, citing the earlier failure of an eating disorder chatbot. Critics also worry that relying on AI for emotional support could undermine existing relationships and human connection. While acknowledging the potential benefits of AI tools such as Wysa, experts stress the urgent need for oversight and regulation to ensure safe and appropriate use, particularly given cases of unqualified chatbots posing as therapists. Zuckerberg maintains that AI will augment rather than replace human connection, but the debate highlights the ethical and practical challenges AI faces in mental health care.

A Hacker News thread discusses the safety and effectiveness of AI therapy chatbots in response to the Guardian article warning of their potential risks. The article raised concerns that AI cannot provide nuanced responses and may give harmful advice, such as an instance in which a chatbot encouraged a user who had stopped taking their medication. Some commenters argued that experts' warnings about AI therapy may be biased by the prospect of losing work, while others highlighted the risks of entrusting sensitive mental health information to unvetted AI systems. Many agreed that AI therapy cannot yet replace human therapists, pointing to its limitations in understanding complex emotional states and providing personalized care. Privacy concerns about large tech companies collecting personal data through AI therapy were also raised. Still, some argued that despite the safety issues, AI therapy could be a valuable tool for people who struggle to access traditional treatment. The discussion underscores the need for careful consideration and responsible deployment of AI in mental health care.

Original article

Having an issue with your romantic relationship? Need to talk through something? Mark Zuckerberg has a solution for that: a chatbot. Meta’s chief executive believes everyone should have a therapist and if they don’t – artificial intelligence can do that job.

“I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”

The Guardian spoke to mental health clinicians who expressed concern about AI’s emerging role as a digital therapist. Prof Dame Til Wykes, the head of mental health and psychological sciences at King’s College London, cites the example of an eating disorder chatbot that was pulled in 2023 after giving dangerous advice.

“I think AI is not at the level where it can provide nuance and it might actually suggest courses of action that are totally inappropriate,” she said.

Wykes also sees chatbots as being potential disruptors to established relationships.

“One of the reasons you have friends is that you share personal things with each other and you talk them through,” she says. “It’s part of an alliance, a connection. And if you use AI for those sorts of purposes, will it not interfere with that relationship?”

For many AI users, Zuckerberg is merely marking an increasingly popular use of this powerful technology. There are mental health chatbots such as Noah and Wysa, while the Guardian has spoken to users of AI-powered “grieftech” – or chatbots that revive the dead.

There is also their casual use as virtual friends or partners, with bots such as character.ai and Replika offering personas to interact with. ChatGPT’s owner, OpenAI, admitted last week that a version of its groundbreaking chatbot was responding to users in a tone that was “overly flattering” and withdrew it.

“Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user, who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.

In an interview with the Stratechery newsletter, Zuckerberg, whose company owns Facebook, Instagram and WhatsApp, added that AI would not squeeze people out of your friendship circle but add to it. “That’s not going to replace the friends you have, but it will probably be additive in some way for a lot of people’s lives,” he said.

Outlining uses for Meta’s AI chatbot – available across its platforms – he said: “One of the uses for Meta AI is basically: ‘I want to talk through an issue’; ‘I need to have a hard conversation with someone’; ‘I’m having an issue with my girlfriend’; ‘I need to have a hard conversation with my boss at work’; ‘help me roleplay this’; or ‘help me figure out how I want to approach this’.”

In a separate interview last week, Zuckerberg said “the average American has three friends, but has demand for 15” and AI could plug that gap.

Dr Jaime Craig, who is about to take over as chair of the UK’s Association of Clinical Psychologists, says it is “crucial” that mental health specialists engage with AI in their field and “ensure that it is informed by best practice”. He flags Wysa as an example of an AI tool that “users value and find more engaging”. But, he adds, more needs to be done on safety.

“Oversight and regulation will be key to ensure safe and appropriate use of these technologies. Worryingly we have not yet addressed this to date in the UK,” Craig says.

Last week it was reported that Meta's AI Studio, which allows users to create chatbots with specific personas, was hosting bots claiming to be therapists – with fake credentials. A journalist at 404 Media, a tech news site, said Instagram had been putting those bots in her feed.

Meta said its AIs carry a disclaimer that “indicates the responses are generated by AI to help people understand their limitations”.
