Claude Is a Space to Think

Original link: https://www.anthropic.com/news/claude-is-a-space-to-think

## Claude Will Remain Ad-Free

Anthropic has made a clear decision: its AI assistant, Claude, will not include advertising. While acknowledging the benefits of advertising in other digital spaces, the company considers ads fundamentally incompatible with Claude's purpose of being a genuinely helpful, impartial tool for work and deep thinking.

Unlike search engines, where people expect to see sponsored content, conversations with an AI are deeply personal and often touch on sensitive topics. Introducing ads would risk influencing responses, creating a conflict between user needs and commercial motives. Even ads placed separately within the chat would diminish Claude's value as a focused space for thought.

Anthropic prioritizes a business model built on enterprise contracts and subscriptions, reinvesting revenue into improving Claude. It is committed to expanding access through education programs and discounted rates for nonprofits.

While supporting commerce through user-initiated actions (such as researching products or booking services) is a future goal, Claude's core principle remains: act solely in the user's interest, providing helpful answers without any commercial motive. Anthropic wants Claude to be a trustworthy tool, free from the distraction and potential bias of advertising.

## Anthropic Commits to an Ad-Free Future

Anthropic recently announced that it will *not* implement advertising within its Claude AI platform, in contrast to the approach of competitors such as OpenAI's ChatGPT. The decision, though it may limit revenue streams, is seen as prioritizing user experience and a focus on enterprise contracts and paid subscriptions.

The Hacker News discussion shows skepticism about the long-term viability of this stance, with many pointing to historical examples of companies abandoning similar commitments for profit. Concerns center on investor pressure and the inherent conflict between ad-based revenue and maintaining high-quality AI models: maximizing profit often requires compromises on intelligence and responsiveness.

Despite these doubts, many commenters still hope Anthropic can remain a "good actor" in the AI space, pointing to positive steps such as opposing restrictive AI regulation and declining certain contracts on ethical grounds. Others, however, see this "good guy" image as primarily a marketing tactic. Ultimately, the conversation highlights the tension between ethical principles and the demands of a competitive market, leaving many wondering whether Anthropic can hold its current course.

Original Text

There are many good places for advertising. A conversation with Claude is not one of them.

Advertising drives competition, helps people discover new products, and allows services like email and social media to be offered for free. We’ve run our own ad campaigns, and our AI models have, in turn, helped many of our customers in the advertising industry.

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.

We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

The nature of AI conversations

When people use search engines or social media, they’ve come to expect a mixture of organic and sponsored content. Filtering signal from noise is part of the interaction.

Conversations with AI assistants are meaningfully different. The format is open-ended; users often share context and reveal more than they would in a search query. This openness is part of what makes conversations with AI valuable, but it’s also what makes them susceptible to influence in ways that other digital products are not.

Our analysis of conversations with Claude (conducted in a way that keeps all data private and anonymous) shows that an appreciable portion involve topics that are sensitive or deeply personal—the kinds of conversations you might have with a trusted advisor. Many other uses involve complex software engineering tasks, deep work, or thinking through difficult problems. The appearance of ads in these contexts would feel incongruous—and, in many cases, inappropriate.

We still have much to learn about the impact of AI models on the people who use them. Early research suggests both benefits—like people finding support they couldn’t access elsewhere—and risks, including the potential for models to reinforce harmful beliefs in vulnerable users. Introducing advertising incentives at this stage would add another level of complexity. Our understanding of how models translate the goals we set them into specific behaviors is still developing; an ad-based system could therefore have unpredictable results.

Incentive structures

Being genuinely helpful is one of the core principles of Claude’s Constitution, the document that describes our vision for Claude’s character and guides how we train the model. An advertising-based business model would introduce incentives that could work against this principle.

Consider a concrete example. A user mentions they’re having trouble sleeping. An assistant without advertising incentives would explore the various potential causes—stress, environment, habits, and so on—based on what might be most insightful to the user. An ad-supported assistant has an additional consideration: whether the conversation presents an opportunity to make a transaction. These objectives may often align—but not always. And, unlike a list of search results, ads that influence a model’s responses may make it difficult to tell whether a given recommendation comes with a commercial motive or not. Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable.

Even ads that don’t directly influence an AI model’s responses and instead appear separately within the chat window would compromise what we want Claude to be: a clear space to think and work. Such ads would also introduce an incentive to optimize for engagement—for the amount of time people spend using Claude and how often they return. These metrics aren’t necessarily aligned with being genuinely helpful. The most useful AI interaction might be a short one, or one that resolves the user’s request without prompting further conversation.

We recognize that not all advertising implementations are equivalent. More transparent or opt-in approaches—where users explicitly choose to see sponsored content—might avoid some of the concerns outlined above. But the history of ad-supported products suggests that advertising incentives, once introduced, tend to expand over time as they become integrated into revenue targets and product development, blurring boundaries that were once more clear-cut. We’ve chosen not to introduce these dynamics into Claude.

Our approach

Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.

Expanding access to Claude is central to our public benefit mission, and we want to do it without selling our users’ attention or data to advertisers. To that end, we’ve brought AI tools and training to educators in over 60 countries, begun national AI education pilots with multiple governments, and made Claude available to nonprofits at a significant discount. We continue to invest in our smaller models so that our free offering remains at the frontier of intelligence, and we may consider lower-cost subscription tiers and regional pricing where there is clear demand for it. Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.

Supporting commerce

AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end. And we’ll continue to build features that enable our users to find, compare, or buy products, connect with businesses, and more—when they choose to do so.

We’re also exploring more ways to make Claude a focused space to be at your most productive. Users can already connect third-party tools they use for work—like Figma, Asana, and Canva—and interact with them directly within Claude. We expect to introduce many more useful integrations and expand this toolkit over time.

All third-party interactions will be grounded in the same overarching design principle: they should be initiated by the user (where the AI is working for them) rather than an advertiser (where the AI is working, at least in part, for someone else). Today, whether someone asks Claude to research running shoes, compare mortgage rates, or recommend a restaurant for a special occasion, Claude’s only incentive is to give a helpful answer. We’d like to preserve that.

A trusted tool for thought

We want our users to trust Claude to help them keep thinking—about their work, their challenges, and their ideas.

Our experience of using the internet has made it easy to assume that advertising on the products we use is inevitable. But open a notebook, pick up a well-crafted tool, or stand in front of a clean chalkboard, and there are no ads in sight.

We think Claude should work the same way.
