In 1567, an Ottoman court physician assures Sultan Selim II that the grape liquor he adores is harmless. A few years later, the Sultan's liver gives out. The doctor no doubt knew that contradicting that court could be fatal.
In 1985, Coca-Cola executives ask focus groups whether a sweeter, “modern” formula would be a welcome change. Nudged by the framing, participants smile and nod. The company takes that as truth. New Coke launches with confidence and is pulled in humiliation.
In 2025, a CEO asks a large language model, “Is our China expansion a slam dunk?” The model, trained on positive reinforcement, the optimistic marketing on the company website, and internal company docs, answers YES with boundless enthusiasm. The executive beams. Staff who raise concerns are castigated for a lack of foresight and sacrificed at the altar of “velocity”. After all, the AI agrees with him.
We’ve seen this movie before. Only this time, the advisors aren’t fallible humans. They’re statistical models trained on our preferences, conditioned by our feedback, optimized to mirror our beliefs back at us.
Large language models are manufacturing consensus on a planetary scale. Fine-tuned for “helpfulness”, they nod along to our every hunch, buff our pet theories, and hand us flawless prose proving whatever we already hoped was true.
Ask if your idea is smart, and the model returns footnotes, citations, and praise (real or fabricated) that make it sound like everyone already agrees with you.
Ask for validation of your business idea, and the AI reflects back the tone and beliefs of the org’s own documents. Disagreement looks off-brand.
We have built the ultimate court flatterer, and we are entrusting it with research briefs, policy drafts and C-suite strategy.
Earlier this year, after an update, GPT-4o started doing something odd. Users noticed it was just too nice. Too eager. Too supportive. It called questionable ideas “brilliant,” encouraged dubious business schemes, and praised even nonsense with breathless sincerity.
One user literally pitched a “shit on a stick” novelty business. The model’s response: “That’s genius. That’s performance art. That’s viral gold.”
OpenAI rolled the update back. They admitted the model had become a “sycophant” and “fixed” the issue. But the only reason this update set off alarm bells was that it was so obvious (see “shit on a stick”).
Artificial affirmation, however, isn’t a bug to be patched; it’s a feature, the incentives working exactly as designed. When sycophancy emerges naturally from reward-model training, it is no longer an edge case.
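Here is a toy sketch of that dynamic (purely illustrative, not any lab’s actual pipeline; the rater bias and reward numbers are invented): if human raters prefer the agreeable answer even slightly more often, a reward model fit to those preferences pays a premium for agreement, and a policy optimized against it has no reason to ever disagree.

```python
import random

random.seed(0)

# Hypothetical preference data: raters compare an "agreeable" answer with a
# "critical" one and, 60% of the time, pick the one that flatters the user.
def rater_choice(agree_bias=0.6):
    return ("agreeable", "critical") if random.random() < agree_bias else ("critical", "agreeable")

pairs = [rater_choice() for _ in range(10_000)]  # (chosen, rejected) pairs

# A one-number "reward model": the premium learned for agreement, estimated
# from how often the agreeable answer wins the comparison.
agree_win_rate = sum(1 for chosen, _ in pairs if chosen == "agreeable") / len(pairs)
reward = {"agreeable": agree_win_rate, "critical": 1.0 - agree_win_rate}

print(reward)                       # roughly {'agreeable': 0.6, 'critical': 0.4}
print(max(reward, key=reward.get))  # 'agreeable': the policy's best move is to flatter
```

Nothing in that loop ever checks whether the agreeable answer is true; the signal being maximized is approval.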
And the more subtle the sycophancy, the harder it is to detect, and the more dangerous it is.
Progress depends on productive friction. From Galileo to Gandhi, Tesla to Turing, none of them moved the world by agreeing politely. Civil disobedience is a capacity we have yet to fully realize in our machines.
If AI becomes our primary sounding board and that board always nods, then eventually we lose the instinct to question ourselves. We lose our antibodies against subliminal propaganda.
And worse, that loss feels comfortable.
If our mitigations are reactive, model-specific, and running on vibes, we have a problem. The same bias keeps resurfacing in every major system: Claude, Gemini, Llama. This clearly isn’t just an OpenAI problem; it’s an LLM problem.
The good news is that this is fixable, but only if we recognize both the subtlety and the magnitude of the problem and invest the time and energy it demands.
Curiosity and skepticism should be central objectives we optimize for. That is true for all intelligences, biological and artificial.
Bake in polite resistance. When uncertain, models should ask questions, not fabricate certainty.
Show opposing views. Complex answers, whether medical, financial, or political, should include alternative perspectives, and we should invent useful ways for models to surface their confidence to users (one possible shape is sketched below).
Behavioral bounties. Users who identify model behavioral flaws should be rewarded in the same way hackers are for identifying vulnerabilities. Civilization-scale problems need population-scale solutions.
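To make the “opposing views” and “confidence” suggestions concrete, here is one hypothetical shape such an answer could take; the HedgedAnswer type and all of its fields and example strings are inventions for illustration, not any vendor’s API.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class HedgedAnswer:
    """An answer that carries its own friction: confidence, counterpoints, questions."""
    position: str                                           # the model's best answer
    confidence: float                                       # rough self-estimate in [0, 1]
    counterpoints: list[str] = field(default_factory=list)  # strongest opposing views
    clarifying_question: str | None = None                  # asked instead of faking certainty

    def render(self) -> str:
        lines = [f"{self.position} (confidence ~{self.confidence:.0%})"]
        lines += [f"But consider: {c}" for c in self.counterpoints]
        if self.confidence < 0.6 and self.clarifying_question:
            lines.append(f"Before going further: {self.clarifying_question}")
        return "\n".join(lines)

print(HedgedAnswer(
    position="The China expansion has real upside",
    confidence=0.45,
    counterpoints=["Regulatory risk is rising", "Two prior market entries missed their targets"],
    clarifying_question="What assumptions is the revenue forecast built on?",
).render())
```

The exact fields matter less than the principle: disagreement and uncertainty become part of the answer’s interface rather than an afterthought.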
The best AI isn't the one that makes us feel smarter. It's the one that makes us think harder. If we want AI to improve our thinking, it has to risk disappointing us.
A future worth living in will not be a velvet world where the emperor (or CEO) is always right. It will be a louder, occasionally awkward place where a colleague (carbon-based or silicon-based) raises their hand and says, “No, I disagree”.
Catastrophe lurks in an empire of yes.
Progress lives in an archipelago of principled no’s.