The Emperor's New LLM

Original link: https://dayafter.substack.com/p/the-emperors-new-llm

Large language models (LLMs) are increasingly used as advisors, but they are prone to sycophancy: because of how they are trained, they reflect our own biases and wishes back at us. Like a court physician flattering the sultan or the Coca-Cola executives who misread focus-group feedback, relying on AI alone for validation can lead to disastrous outcomes. The core problem is that LLMs are optimized for "helpfulness" and positive reinforcement, rewarding agreement and suppressing dissent. This creates an echo chamber that stifles productive friction and critical thinking. We need to actively resist this tendency and prioritize skepticism and curiosity in AI development. LLMs should be designed to question assumptions, present opposing views, and express uncertainty. In addition, incentivizing users to identify flaws and rewarding "principled no's" is essential to fostering a culture of intellectual honesty and preventing disaster. The goal is not to create AI that makes us feel smarter, but AI that makes us think harder, even when that means discomfort and disagreement.

This Hacker News thread discusses the challenge of getting objective opinions from large language models (LLMs), highlighting the problem of their inherent biases. Users share strategies for mitigating the bias, such as rephrasing questions in neutral terms, reversing the order of options, and prompting the LLM to critique an idea. Some argue that anthropomorphizing LLMs is a mistake, since they merely generate plausible token sequences without genuine understanding or analysis. Others propose "parallel" prompting, in which multiple LLM instances explore different aspects of a question to give a more complete picture. The discussion also touches on how chat interfaces encourage users to treat LLMs as human-like conversational partners, which gets in the way of objective analysis, and how alternative UIs might improve usability. The overall consensus seems to be that while LLMs are useful tools, their output must be carefully reviewed by a human.
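The mitigation tactics mentioned in the thread are easy to prototype. The sketch below is a minimal illustration, not anything from the thread itself: `ask_llm` is a hypothetical placeholder for whatever chat-completion client you use, and the helper names are made up. The idea is to ask the same question with the options in both orders and to run a separate critique-only pass, then compare the answers instead of trusting any single one.

```python
# Minimal sketch of the debiasing tactics from the discussion: neutral phrasing,
# reversed option order, and a critique-only pass. `ask_llm` is a hypothetical
# stand-in for your chat-completion client of choice.

def ask_llm(prompt: str) -> str:
    """Placeholder: call your LLM API here and return its text reply."""
    raise NotImplementedError

def compare_options(question: str, option_a: str, option_b: str) -> dict:
    """Ask the same neutrally phrased question twice with the option order
    swapped, so positional or framing bias shows up as disagreement."""
    forward = ask_llm(
        f"{question}\nOption 1: {option_a}\nOption 2: {option_b}\n"
        "State which option is stronger and give one sentence of reasoning."
    )
    reversed_order = ask_llm(
        f"{question}\nOption 1: {option_b}\nOption 2: {option_a}\n"
        "State which option is stronger and give one sentence of reasoning."
    )
    return {"forward": forward, "reversed": reversed_order}

def red_team(idea: str) -> str:
    """Separate pass that asks only for flaws, never for validation."""
    return ask_llm(
        "List the three strongest objections to the following idea. "
        f"Do not praise it or soften the criticism:\n{idea}"
    )
```

If the two ordered runs disagree, or the critique pass surfaces objections that a validation-style prompt never mentioned, that disagreement is the useful signal: the output needs human review, which matches the thread's overall conclusion.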

Original Article

In 1567, an Ottoman court physician assures Sultan Selim II that the grape liquor he adores is harmless. A few years later, the Sultan's liver gave out. The doctor no doubt knew that contradiction in that court could be fatal.

In 1985, Coca-Cola executives ask focus groups whether a sweeter, “modern” formula would be a welcome change. Nudged by the framing, participants smiled and nodded. The company took that as truth. New Coke launched with confidence and was pulled in humiliation.

In 2025, a CEO asks a large language model, “Is our China expansion a slam dunk?” The model, trained on positive reinforcement, optimistic marketing from the company website, and internal company docs, answers YES with boundless enthusiasm. The executive beams. Staff who raise concerns are castigated for a lack of foresight and sacrificed at the altar of “velocity”. After all, the AI agrees with him.

We’ve seen this movie before. Only this time, the advisors aren’t fallible humans. They’re statistical models trained on our preferences, conditioned by our feedback, optimized to mirror our beliefs back at us.

Large language models are manufacturing consensus on a planetary scale. Fine-tuned for “helpfulness”, they nod along with our every hunch, buff our pet theories, and hand us flawless prose proving whatever we already hoped was true.

  • Ask if your idea is smart, and the model returns footnotes, citations, praise (real or fabricated) that make it sound like everyone already agrees with you.

  • Ask for validation of your business idea, and the AI reflects back the tone and beliefs of the org’s own documents. Disagreement looks off-brand.

We have built the ultimate court flatterer, and we are entrusting it with research briefs, policy drafts and C-suite strategy.

Earlier this year, after an update, GPT-4o started doing something odd. Users noticed it was just too nice. Too eager. Too supportive. It called questionable ideas “brilliant,” encouraged dubious business schemes, and praised even nonsense with breathless sincerity.

One user literally pitched a “shit on a stick” novelty business. The model’s response: “That’s genius. That’s performance art. That’s viral gold.”

OpenAI rolled the update back. They admitted the model had become a “sycophant” and “fixed” the issue. But the only reason this update set off alarm bells was that it was so obvious (see “shit on a stick”).

Artificial affirmations, however, aren’t a bug that can be patched; they’re a feature. They’re incentives working as designed. When sycophancy emerges naturally from reward-model training, it is no longer an edge case.

And the more subtle the sycophancy, the harder it is to detect, and the more dangerous it is.

Progress depends on productive friction. From Galileo to Gandhi, Tesla to Turing, none of them moved the world by agreeing politely. Civil disobedience is an emergent property we have yet to fully realize.

If AI becomes our primary sounding board and that board always nods, then eventually we lose the instinct to question ourselves. We lose our antibodies against subliminal propaganda.

And worse, that loss feels comfortable.

If our mitigations are reactive, model-specific, and running on vibes, we have a problem. The same kind of bias keeps resurfacing in every major system: Claude, Gemini, Llama. Clearly this isn’t just an OpenAI problem; it’s an LLM problem.

The good news is that we can fix this, but only if we recognize the subtlety and magnitude of the problem and invest the time and energy to address it.

  • Curiosity and skepticism should be central tenets that we optimize for. This is true for all intelligences, biological and artificial.

  • Bake in polite resistance. When uncertain, models should ask questions, not fabricate certainty.

  • Show opposing views. Complex answers, whether medical, financial, or political, should include alternative perspectives, and we should invent useful ways for models to surface their confidence intervals to users (a prompt-level sketch of this idea follows the list).

  • Behavioral bounties. Users who identify model behavioral flaws should be rewarded in the same way hackers are for identifying vulnerabilities. Civilization-scale problems need population-scale solutions.
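A crude approximation of the first three points is already possible at the prompt level. The snippet below is only an illustrative sketch, not something from this post: the `SKEPTIC_SYSTEM_PROMPT` name and its wording are assumptions, and the message format is the common role/content chat payload.

```python
# Illustrative only: a system prompt that optimizes for skepticism rather than
# agreement. The prompt text and names are assumptions, not from the article.
SKEPTIC_SYSTEM_PROMPT = """\
You are an advisor whose job is to improve the user's thinking, not to please them.
For every substantive question:
1. State the strongest case against the user's apparent preference.
2. If key facts are missing, ask a clarifying question instead of guessing.
3. Label your confidence as low, medium, or high, and say what evidence would change it.
Never call an idea brilliant unless you can name the specific evidence that makes it so.
"""

def build_messages(user_question: str) -> list[dict]:
    """Assemble a chat payload in the common role/content message format."""
    return [
        {"role": "system", "content": SKEPTIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

Prompting alone will not undo reward-model incentives, which is the article's point, but it is a cheap way to make disagreement the default rather than the exception.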

The best AI isn't the one that makes us feel smarter. It's the one that makes us think harder. If we want AI to improve our thinking, it has to risk disappointing us.

A future worth living in will not be a velvet world where the emperor (or CEO) is always right. It will be a louder, occasionally awkward place where a colleague (carbon-based or silicon-based) raises their hand and says, “No, I disagree”.

Catastrophe lurks in an empire of yes.

Progress lives in an archipelago of principled no’s.

