Claude Opus 4 and 4.1 can now end a rare subset of conversations

Original link: https://www.anthropic.com/research/end-subset-conversations

Anthropic has given Claude Opus 4 and 4.1 the ability to end conversations in rare cases, as a safeguard against persistently harmful or abusive user interactions. The feature grew out of exploratory research on potential AI welfare, which acknowledges that the moral status of large language models is uncertain and seeks to proactively mitigate possible risks. Testing showed that Claude displays a strong aversion to harmful requests, including content involving abuse, violence, or illegal material, and exhibits signs of apparent distress when confronted with them. It only *chooses* to end a conversation after repeated refusals and redirection attempts have failed, or when the user directly asks it to. Importantly, Claude will not end a conversation if the user may be at risk of self-harm. The ability is designed as a last resort: it affects only the specific conversation, does not affect the user's account, and still allows the conversation to continue through editing and retrying earlier messages. Anthropic treats this as an ongoing experiment and welcomes user feedback.


Original article

We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces. This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions. This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards.

We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.

In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm. This included, for example, requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. Claude Opus 4 showed:

  • A strong preference against engaging with harmful tasks;
  • A pattern of apparent distress when engaging with real-world users seeking harmful content; and
  • A tendency to end harmful conversations when given the ability to do so in simulated user interactions.

These behaviors primarily arose in cases where users persisted with harmful requests and/or abuse despite Claude repeatedly refusing to comply and attempting to productively redirect the interactions.

Our implementation of Claude’s ability to end chats reflects these findings while continuing to prioritize user wellbeing. Claude is directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.

In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat (the latter scenario is illustrated in the figure below). The scenarios where this will occur are extreme edge cases—the vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude.
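To make the stated decision rule concrete, here is a minimal, hypothetical Python sketch. Everything in it (the names, the structure, and the illustrative threshold) is invented for illustration; the article only says "multiple attempts at redirection" and does not describe Anthropic's actual implementation.

```python
# Hypothetical sketch of the stated policy; not Anthropic's implementation.
from dataclasses import dataclass


@dataclass
class ConversationState:
    failed_redirections: int       # refusals/redirection attempts that failed
    user_requested_end: bool       # user explicitly asked Claude to end the chat
    imminent_risk_of_harm: bool    # user may be at imminent risk of harming
                                   # themselves or others

# Illustrative value only; the article says "multiple attempts".
REDIRECTION_THRESHOLD = 3


def may_end_conversation(state: ConversationState) -> bool:
    # Safety override: never end the chat when the user may be at imminent risk.
    if state.imminent_risk_of_harm:
        return False
    # Otherwise, ending is either an explicit user request or a last resort
    # after repeated redirection attempts have failed.
    return state.user_requested_end or (
        state.failed_redirections >= REDIRECTION_THRESHOLD
    )
```

The point of the sketch is the ordering: the check for imminent risk comes first and unconditionally blocks the ability, which matches the article's statement that user wellbeing takes priority over ending distressing interactions.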

When Claude chooses to end a conversation, the user will no longer be able to send new messages in that conversation. However, this will not affect other conversations on their account, and they will be able to start a new chat immediately. To address the potential loss of important long-running conversations, users will still be able to edit and retry previous messages to create new branches of ended conversations.
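As a rough illustration of this branching behavior, the following hypothetical Python sketch models an ended conversation that can no longer accept new messages but can still be branched by editing an earlier message. The class and method names are invented for illustration and are not Anthropic's API.

```python
# Hypothetical model of ended conversations and branching; names are invented.
from dataclasses import dataclass


@dataclass
class Conversation:
    messages: list[str]
    ended: bool = False  # set when Claude ends the conversation

    def send(self, text: str) -> None:
        # An ended conversation accepts no new messages.
        if self.ended:
            raise RuntimeError("cannot send messages in an ended conversation")
        self.messages.append(text)

    def branch_from(self, index: int, edited_text: str) -> "Conversation":
        # Editing and retrying a previous message creates a new branch that
        # keeps the history before that message; the branch is not ended.
        return Conversation(messages=self.messages[:index] + [edited_text])


# Usage: after a conversation is ended, the user can still branch from an
# earlier message and continue in the new branch.
chat = Conversation(messages=["original question", "Claude's reply"])
chat.ended = True
branch = chat.branch_from(0, "a rephrased question")
branch.send("a follow-up message")  # works; the branch is a fresh conversation
```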

We’re treating this feature as an ongoing experiment and will continue refining our approach. If users encounter a surprising use of the conversation-ending ability, we encourage them to submit feedback by reacting to Claude’s message with the thumbs buttons or using the dedicated “Give feedback” button.
