AGI fantasy is a blocker to actual engineering

Original link: https://www.tomwphillips.co.uk/2025/11/agi-fantasy-is-a-blocker-to-actual-engineering/

Karen Hao's Empire of AI reveals the firm belief inside OpenAI in artificial general intelligence (AGI): the conviction that they, or a competitor, will create AGI and fundamentally reshape humanity, for better or worse. That conviction traces back to Elon Musk's fear that DeepMind's Demis Hassabis might be the "supervillain" who builds AGI first.

OpenAI's pursuit of AGI is rooted in the "pure language" hypothesis: that AGI can emerge from training large language models alone. The success of the GPT models reinforced this view, and it has been used to justify enormous compute despite significant environmental costs (water consumption, pollution) and ethical problems (exploitation of data workers, harmful outputs).

The author criticises expected-value (EV) arguments for justifying these costs on the promise of AGI, holding that they are unfalsifiable and ignore real-world harms. Instead, the author advocates a pragmatic approach: evaluating LLMs as tools for solving *specific* problems, prioritising efficiency, and minimising harm, a return to sound engineering principles rather than "AGI fantasy".

## AGI hype is blocking practical AI engineering

A Hacker News discussion centres on how the pursuit of artificial general intelligence (AGI) distracts from valuable, practical AI engineering. The core argument is that focusing on AGI creates unrealistic expectations and diverts resources away from solving concrete problems with current AI tools.

Some commenters debated the validity of criticising AI's environmental impact (particularly water consumption), with some arguing it is a minor issue compared to agriculture or golf courses. Others countered that even smaller impacts matter and should not be ignored.

A key point is the hardware limitation of current LLMs: achieving human-level intelligence would require more sophisticated, more efficient hardware than is available today. The discussion also touched on the commercial motives driving AGI research, namely that hype attracts investment, and questioned whether genuine progress is being made or whether it is merely marketing.

Ultimately, many agreed that while LLMs are useful tools, treating them as stepping stones to AGI is counterproductive and distracts from real-world applications and from improving existing technology.

## Original text

Reading Empire of AI by Karen Hao, I was struck by how people associated with OpenAI believe in AGI. They really do think someone, perhaps them, will build AGI, and that it will lead to either the flourishing or destruction of humanity.

Elon Musk co-founded OpenAI because he thought Demis Hassabis was an evil genius who would build AGI first:

…Musk would regularly characterise Hassabis as a supervillain who needed to be stopped. Musk would make unequivocally clear that OpenAI was the good to DeepMind’s evil. … “He literally made a video game where an evil genius tries to create AI to take over the world,” Musk shouted [at an OpenAI off-site], referring to Hassabis’s 2004 title Evil Genius, “and fucking people don’t see it. Fucking people don’t see it! And Larry [Page]? Larry thinks he controls Demis but he’s too busy fucking windsurfing to realize that Demis is gathering the power.”

OpenAI’s co-founder and chief scientist Ilya Sutskever regularly told audiences and employees to “feel the AGI”. At a company off-site in Yosemite in September 2022, employees gathered around a firepit:

In the pit, [Sutskever] had placed a wooden effigy that he’d commissioned from a local artist, and began a dramatic performance. This effigy, he explained, represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI’s duty, he said, was to destroy it. … Sutskever doused the effigy in lighter fluid and lit it on fire.

I think it’s remarkable that what was until recently sci-fi fantasy has become a mainstream view in Silicon Valley.

Hao writes that GPT-2 was a bet on the “pure language” hypothesis, which asserts that since we communicate through language, AGI should emerge from training a model solely on language. This is in contrast to the “grounding” hypothesis, which asserts that an AGI needs to perceive the world. Successfully scaling GPT to GPT-2 convinced enough people at OpenAI that the pure language hypothesis was valid. They just needed more data, more model parameters, and more compute.

So the belief in AGI, plus the recent results from LLMs, necessitates scaling. It justifies building data centres that consume hundreds of litres of water a second, that run on polluting gas generators because the grid can’t supply the power (and might use as much power as entire cities), and that drive up CO2 emissions from the manufacture and operation of new hardware. It also justifies exploiting and traumatising data workers to make sure ChatGPT doesn’t generate outputs like child sexual abuse material and hate speech, or encourage users to self-harm. (The thirst for data is so great that they stopped curating training data and instead consume the internet, warts and all, and manage the model output using RLHF.)

And this is all fine, because they’re going to make AGI and the expected value (EV) of it will be huge! (Briefly, the argument goes that if there is a 0.001% chance of AGI delivering an extremely large amount of value, and a 99.999% chance of much less or zero value, then the EV is still extremely large, because (0.001% * very_large_value) + (99.999% * small_value) ≈ very_large_value.)
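To make the shape of that argument concrete, here is a minimal sketch in Python. The probability and payoff are illustrative stand-ins, not figures from the article; the point is that once the payoff is assumed to be astronomical, the conclusion barely depends on the (made-up) probability.

```python
# A minimal sketch of the EV argument the author criticises.
# p_agi and value_agi are illustrative stand-ins, not figures from the article.

p_agi = 0.00001        # assumed 0.001% chance that AGI "pays off"
value_agi = 1e15       # assumed astronomically large payoff
value_otherwise = 0.0  # little or zero value in the remaining 99.999% of cases

ev = p_agi * value_agi + (1 - p_agi) * value_otherwise
print(f"EV = {ev:,.0f}")  # still enormous

# Shrink the probability by orders of magnitude and the EV stays "huge",
# because value_agi can always be inflated to compensate, which is why
# the author calls these arguments unfalsifiable.
for p in (1e-5, 1e-8, 1e-11):
    print(f"p = {p:g}, EV = {p * value_agi:,.0f}")
```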

But AGI arguments based on EV are nonsensical because the values and probabilities are made up and unfalsifiable. They also ignore externalities like environmental damage, which, in contrast to AGI, have known negative value and certain probability: costs borne by everyone else right now.

As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment is unacceptable. Instead, if we drop the AGI fantasy, we can evaluate LLMs and other generative models as solutions for specific problems, rather than for all problems, with proper cost-benefit analysis. For example, we might use smaller purpose-built generative models, or even discriminative (non-generative) models, as in the sketch below. In other words: make trade-offs and actually do engineering.
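As one illustration (not from the article): for a narrow task such as routing support tickets, a small discriminative classifier can be evaluated directly against the problem, with no generative model involved. A minimal sketch, assuming scikit-learn and a hypothetical toy dataset:

```python
# A small discriminative model for one specific problem, in the spirit of the
# trade-off above. The task (ticket routing) and the data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples standing in for a real, problem-specific dataset.
texts = [
    "I was charged twice this month",
    "How do I update my billing address?",
    "The app crashes when I open settings",
    "Error 500 when uploading a file",
]
labels = ["billing", "billing", "bug", "bug"]

# TF-IDF features plus logistic regression: small, cheap to run, and easy to
# measure against the one problem it is built for.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Why is my invoice higher than last month?"]))
```

Whether such a model beats an LLM on a given problem is an empirical question, which is exactly the cost-benefit analysis being asked for.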
