The Rational Conclusion of Doomerism Is Violence

Original link: https://www.campbellramble.ai/p/the-rational-conclusion

## AI Doomerism and the Attack on Sam Altman

A 20-year-old, Daniel Moreno-Gama, attacked Sam Altman's home with a Molotov cocktail and threatened to burn down OpenAI headquarters, motivated by an extreme fear of artificial intelligence. He has been charged with attempted murder. Moreno-Gama was deeply involved in the PauseAI community, used the handle "Butlerian Jihadist," and shared doomer content online, including recommending the book If Anyone Builds It, Everyone Dies, which argues that artificial intelligence will inevitably lead to human extinction.

The incident highlights a dangerous escalation within certain AI safety circles. The author describes a "purity spiral" in which members compete to demonstrate their commitment through ever more extreme doom predictions, even advocating violence. It stems from a core belief, popularized by Eliezer Yudkowsky, that advanced AI *will* cause human extinction, which is taken to justify any action that might stop its development.

The author argues that this framework, which prioritizes certainty over nuance, predictably produces extremism. Yudkowsky's own statements suggest that violence is merely deferred strategically rather than rejected morally. Although he has distanced himself from the attack, his logic inherently justifies any means necessary to stop AI development, a syllogism that has now been tragically put into action. The author concludes that this is not a safety movement but a belief system that grants a small group of "rational" thinkers authority over technological progress.

A Hacker News discussion centers on whether "doomerism," the belief that a catastrophic future is inevitable, necessarily leads to violence. Although some, such as Eliezer Yudkowsky, fear that AI poses an existential threat, commenters debate whether that fear justifies drastic action. Many argue that even if AI-caused destruction is a real possibility, trying to halt development would be ineffective: the technology is driven by national security concerns, and eliminating key players would only push work underground or accelerate it elsewhere, much as in the nuclear arms race. History shows that the world rarely abandons a potentially dangerous technology. Others argue that genuinely believing doom is imminent would demand *action*, not just online discussion, though some counter that perhaps a blog post is all that is needed. The conversation also touches on historical parallels such as the Luddites, who were not opposed to technology itself but were protesting the exploitative labor practices that came with automation.

Original article

A 20-year-old threw a Molotov cocktail at Sam Altman's house at 3:45 AM Friday. Then he walked three miles to OpenAI headquarters and threatened to burn it down. He has been booked on suspicion of attempted murder.

He was not a lone wolf. He was an active member of PauseAI with six community roles. His Discord handle was "Butlerian Jihadist." His Instagram was a feed of doomer content: capability curves captioned "if we do nothing very soon we will die," Venn diagrams placing us at the intersection of The Matrix, Terminator, and Idiocracy. Four months before the attack, he recommended Yudkowsky and Soares' If Anyone Builds It, Everyone Dies to his followers.

His name is Daniel Moreno-Gama.

He had his own Substack. In January he published “AI Existential Risk,” estimating the probability of AI-caused extinction as “nearly certain.” He called the technology “an active threat against anyone who is using it and especially towards the people building it.” He concluded: “We must deal with the threat first and ask questions later.” He wrote a poem imagining the children of AI developers dying, asking their parents why they did nothing. “May Hell be kind to such a vile creature,” he wrote about the builders.

PauseAI has already deleted his messages from their Discord.

For an investing newsletter, I know this is not what most of you are here for. The goal here is to explain where my worldview is coming from, so that the longer-term calls start to make more sense. My ideas behind the “New New Deal” are intended as a direct response to where this is going.

All I am doing here is running their model forward, and connecting the dots.

Here is the framework. It has three moving parts.

Start with certainty. Yudkowsky’s position is that if anyone builds sufficiently intelligent AI, every human being on earth dies. Not probably. Not maybe. Everyone. Your children. His daughter Nina, whom he invokes by name. He published this in TIME. He wrote it in a book called If Anyone Builds It, Everyone Dies. He said we should airstrike data centers, and that the risk of nuclear exchange is preferable to a training run completing.

Purity spiral, a.k.a. escalation. Within this community, members compete to demonstrate commitment by raising the stakes. P(doom) numbers climb from 50% to 90% to 99.99999%. The Center for AI Safety's national spokesperson said on camera that the correct response is to "walk to the labs across the country and burn them down." PauseAI activated something called a "Warning Shot Protocol" declaring an AI model "a weapon of mass destruction." One of PauseAI's leaders said an Anthropic researcher "deserves whatever is coming to her." When someone flagged this rhetoric in PauseAI's Discord, the mods deleted the post.

The day before the attack, Nate Soares, Yudkowsky's co-author on the very book the kid recommended, tweeted that Altman was "doing terrible stuff."

Then cheap talk gets tested. Game theorists have a term for this: cheap talk is costless signaling that eventually meets reality. When you make the stakes existential for the human race, you can justify any level of extremism if it lowers the hallowed p(doom). These aren't isolated incidents. They are a series of escalating and mutually reinforcing claims around an eschatological philosophy that, taken to its conclusion, would accept killing 99% of the world to save the last 1%.
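
To see why “any level of extremism” follows, here is a rough expected-value sketch of the logic being described; the notation is mine, not something the doomers publish in this form. Let $W$ be the value assigned to averting extinction, $C(a)$ the cost of some action $a$, and $\Delta p(a)$ the reduction in p(doom) the actor believes $a$ buys:

$$EV(a) \approx \Delta p(a) \cdot W - C(a)$$

If $W$ is treated as effectively unbounded, then any action with $\Delta p(a) > 0$ has positive expected value no matter how large $C(a)$ is. Under that accounting nothing is ruled out on cost or moral grounds; an action can only be ruled out as “ineffective,” which is exactly the shape of Yudkowsky’s answer quoted below.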

It was only a matter of time before someone took the framework at face value. The kid read the book. He joined the community. He wrote his own manifesto. In a memoir for his community college English class, he described himself as a consequentialist: “I give very little credence to intentions if the results do not match.” He chose “Butlerian Jihadist” as his name. On December 3rd he wrote in PauseAI’s Discord: “We are close to midnight it’s time to actually act.”

Then he acted.

They gave him a trolley problem. One life versus all of humanity. The kid pulled the lever.

There is a final irony that deserves attention. If the doomers truly hold their stated beliefs at their stated confidence levels, they should be more honest about what those beliefs imply. A few weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why aren't you attacking data centers? His answer, relayed by Soares: "If you saw a headline saying I'd done that, would you say, 'wow, AI has been stopped, we're safe'? If not, you already know it wouldn't be effective."

Notice what that answer is not. It is not “because violence is wrong.” It is “because it wouldn’t work yet.” The restraint is strategic, not moral. And the community knows it. The dark undercurrent is an unspoken agreement: the kid’s greatest sin was bad timing.

This is what I mean by intelligence not equaling power, and it is the deepest flaw in the entire doomer worldview.

Yudkowsky’s framework rests on a conflation: a sufficiently intelligent AI will necessarily acquire the power to destroy humanity, because intelligence converts automatically into capability. Most of his followers are not technical. They do not build AI systems or work on alignment engineering. They possess a particular kind of verbal intelligence that lets them construct elaborate arguments about risk, and they have convinced themselves this entitles them to a priestly authority over the technology. They can construct the argument. They cannot build the system.

This isn’t accidental. It’s baked into the foundational texts. Yudkowsky’s Harry Potter and the Methods of Rationality literally models a world where the person who reasons best deserves to override every institution around him. The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.

Yudkowsky can distance himself from the kid with the Molotov. But he cannot distance himself from the syllogism. If the builders are going to kill everyone, stopping the builders is self-defense. That is the central claim, stated plainly. The only question was always when someone would take it at face value.

They should stop acting surprised when their own logic shows up at 3:45 AM with a bottle full of gasoline.

Disclaimers

I am not advocating for or against any position on AI safety. I am observing that a framework built on certainty of extinction produces predictable consequences. The suspect is innocent until proven guilty.

These views do not represent those of any investors, clients or affiliates of Rose.
