Patience too cheap to meter

原始链接: https://www.seangoedecke.com/patience-too-cheap-to-meter/

Although OpenAI's goal is intelligence "too cheap to meter", and models like Claude Sonnet have a clear edge in intelligence, many users remain satisfied with a baseline model like ChatGPT. This suggests that raw intelligence is not the main driver of adoption. The real value may lie in the unprecedented patience these models offer. As Julia Baird's experience shows, users find AI helpful in therapeutic conversations because it is always available, non-judgmental, and infinitely patient: qualities previously unavailable in human interaction. Even a relatively less intelligent model like GPT-3.5 can meet this need. However, this constant availability also amplifies existing flaws in AI advice, such as sycophancy. Moreover, AI cannot replace professional therapists, because it lacks escalation pathways for handling crises. Finally, relying on AI patience may lead to unrealistic expectations of human interaction. While software engineers focus on AI's role in knowledge work, its capacity to offer unwavering patience may already be transformative.

This Hacker News thread discusses the article "Patience too cheap to meter" in the era of large language models (LLMs). One commenter argues that LLMs excel at tasks requiring patience, producing "bad but perhaps good enough" results, especially for tedious, repetitive work. Another pushes back on the claim that there is no consumer pressure for smarter models: while ChatGPT is good enough for ordinary users, professionals such as programmers and enterprise AI developers are keenly aware of the performance differences between models like Claude and GPT, comparing the situation to professionals needing Photoshop while casual users get by with Paint.NET. Other comments suggest that LLMs may inadvertently raise expectations of human patience, and that LLMs may eventually exhaust users' "patience", possibly hallucinating over longer contexts. The thread points to LLMs' complex effects on productivity, skill development, and interpersonal expectations.
Related Articles
  • (comments) 2025-03-28
  • (comments) 2025-02-28
  • OpenAI is too cheap to beat 2023-10-14
  • Claude 3.5 Sonnet 2024-06-21
  • (comments) 2025-05-10

  • Original article

    Sam Altman, CEO of OpenAI, famously said that his goal was to make intelligence “too cheap to meter”. Right now, buoyed by venture capital, we’re living in that world. Every human (with internet access) on the planet has free access to language models that are smart enough to assist with a wide variety of problems. And people are using it! ChatGPT has more traffic than Wikipedia.

    However, there doesn’t seem to be a huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the earliest-adopting software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

    It could just be that people know about ChatGPT, and it takes a lot of marketing to get people to use a tool they don’t already know about. It could be that 4o is already “good enough”: depending on how cynical you are, either smart enough to do most ordinary tasks, or smart enough to be smarter than many of the people chatting with it. Or it could be that intelligence is not the main value most users are getting out of LLMs.

    Patience

    What is it, then? I’ve thought for a while that the answer is patience. Consider this article by Julia Baird, where she describes how she and other people are using ChatGPT for therapy:

    Dimity said she had been having “regular (intensely chaotic and cathartic) chats” with a version of therapy, ChatGPT, in order to “offload everything I don’t have time, money, or sometimes sanity to process elsewhere”. She said convenience is crucial: “Professional therapy isn’t super accessible for me, I’m prioritising my kids’ mental health needs, which means my own support has to be… well, free and available at 11:47pm when I’m feeling feelings and eating toast over the sink.”

    So I asked ChatGPT about it. And this damn robot was kind, empathetic, understanding and gentle. It told me, in short, to acknowledge the massive love I had for her, to have some compassion for myself, to write her a letter. It sounds simple, I know, but I was gobsmacked.

    I have not used ChatGPT for personal advice or therapy myself. But I can see the appeal. Most good personal advice does not require substantial intelligence. You could go a long way by just writing “breathe, be kind to yourself, don’t react in anger” and similar platitudes on a sheet of paper and reading it periodically. They’re platitudes because they’re true!

    LLMs are not superhuman at giving this kind of advice. However, they are fundamentally a good fit for doing it because they are:

    • Always available, no matter where or when you are
    • Never judgmental or mean
    • Willing to listen indefinitely without getting frustrated

    In short, they are superhumanly patient, and have been so for many years now. Even GPT-3.5 fit these criteria, though it was probably just below the line of intelligence that let it reliably understand whatever users were saying to it.

    Concerns

    I do worry that superhuman patience is a magnifier on whatever advice-giving problems the language model already has. For instance, 4o’s recent sycophancy problem was certainly made worse by the fact that the model was patient enough to validate the user indefinitely.

    Language models are also not therapists. This isn’t to say that professional therapists are all better than the LLMs - like doctors, there’s a huge variety in ability and effort among therapists - but therapists do have escalation pathways that language models don’t.

    It’s also possible that some users might get used to AI patience and be frustrated with human beings for not being able to measure up, which doesn’t seem healthy either.

    Summary

    As software engineers, we’re focused on how language models can help (or supplant) knowledge work. We’re obsessed with smarter and smarter models. But it may be that one of the most transformative capabilities of language models is already here. Until the early 2020s, everything that could talk could also run out of patience - or at the very least need to sleep, or eat, or do things besides talk. There has never been a time in human history where every human could talk with something infinitely patient. Patience has become “too cheap to meter”.

    If you liked this post, consider subscribing to email updates about my new posts.
