Yann LeCun: AI one-percenters seizing power forever is real doomsday scenario

Original link: https://www.businessinsider.com/sam-altman-and-demis-hassabis-just-want-to-control-ai-2023-10

Meta's chief AI scientist, Yann LeCun, recently criticized leading AI researchers for engaging in massive corporate lobbying aimed at controlling the enormous wealth generated by AI development. He argues that, if successful, this push would leave AI monopolized by a handful of private organizations. Instead, LeCun believes the conversation should center on an orderly development path for AI and pay more attention to the present rather than to hypothetical doomsday scenarios. Since every technology develops gradually and in a structured way before regulation and standard-setting come into play, he suggests, the priority should be how AI is actually being developed today. If open-source AI research is regulated out of existence, LeCun warns, those who dominate AI platforms will end up controlling people's entire online information diet, a problem worth spelling out given its implications for democracy and cultural diversity. He also notes that keeping AI development inside private entities would concentrate wealth on the US West Coast and in China. As a countermeasure against AI being controlled solely within privatized entities, LeCun calls for greater participation in the open-source AI research community, which offers a far higher degree of transparency than the proprietary models released by established companies in the field.

Of course. Another interpretation is that some people are trying to leverage existing political capital to gain an extra advantage at the expense of the larger social picture. As for the specific example highlighted here of pushback against the EU AI Act, it seems mainly to be a diversionary tactic, designed to pull attention and resources away from efforts to mitigate the ongoing threat of rogue AI, particularly given the close proximity to international borders and the potential for catastrophic consequences. This attempt may ultimately prove futile, however, because those with a genuine stake in addressing the broader implications of emerging technologies such as autonomous systems and AI give it relatively low priority. Ultimately, to safeguard broader social well-being and minimize the likelihood of destabilizing or disrupting critical infrastructure and functions, sustained investment and strategic planning to address these challenges remain a pressing priority.
Related Articles

Original Article
  • An AI godfather has had it with the doomsdayers.
  • Meta's Yann LeCun thinks tech bosses' bleak comments on AI risks could do more harm than good.
  • The naysaying is actually about keeping control of AI in the hands of a few, he said.

AI godfather Yann LeCun wants us to forget some of the more far-fetched doomsday scenarios.

He sees a different, real threat on the horizon: the rise of power-hungry one-percenters who rob everyone else of AI's riches.

Over the weekend, Meta's chief AI scientist accused some of the most prominent founders in AI of "fear-mongering" and "massive corporate lobbying" to serve their own interests.

He named OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei in a lengthy weekend post on X.

"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun wrote, referring to these founders' role in shaping regulatory conversations about AI safety. "They are the ones who are attempting to perform a regulatory capture of the AI industry."

He added that if these efforts succeed, the outcome would be a "catastrophe" because "a small number of companies will control AI."

That's significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet.

Altman, Hassabis, and Amodei did not immediately respond to Insider's request for comment.

LeCun's comments came in response to a post on X from physicist Max Tegmark, who suggested that LeCun wasn't taking the AI doomsday arguments seriously enough.

"Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone," Tegmark wrote, referring to the UK's upcoming global AI safety summit.

Since the launch of ChatGPT, AI's power players have become major public figures.

But, LeCun said, founders such as Altman and Hassabis have spent a lot of time drumming up fear about the very technology they're selling.

In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.

The letter cited "profound risks to society and humanity" posed by hypothetical AI systems. Tegmark, one of the letter's signatories, has described AI development as "a suicide race."

LeCun and others say these kinds of headline-grabbing warnings are just about cementing power and skating over the real, imminent risks of AI.

Those risks include worker exploitation and data theft that generates profit for "a handful of entities," according to the Distributed AI Research Institute (DAIR).

The focus on hypothetical dangers also diverts attention away from the boring-but-important question of how AI development actually takes shape.

LeCun has described how people are "hyperventilating about AI risk" because they have fallen for what he describes as the myth of the "hard take-off." This is the idea that "the minute you turn on a super-intelligent system, humanity is doomed."

But imminent doom is unlikely, he argues, because every new technology in fact goes through a very ordered development process before wider release.

So the area to focus on is, in fact, how AI is developed right now. And for LeCun, the real danger is that the development of AI gets locked into private, for-profit entities that never release their findings, while AI's open-source community gets obliterated.

His consequent worry is that regulators let it happen because they're distracted by killer robot arguments.

Leaders like LeCun have championed open-source developers, since their work on tools that rival, say, OpenAI's ChatGPT brings a new level of transparency to AI development.

LeCun's employer, Meta, made LLaMa 2, its own large language model that competes with GPT, (somewhat) open source. The idea is that the broader tech community can look under the hood of the model. No other big tech company has done a similar open-source release, though OpenAI is rumored to be considering it.
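To make concrete what "looking under the hood" means in practice, here is a minimal sketch of loading the released LLaMa 2 weights locally, assuming the Hugging Face transformers library and an approved license request for the gated meta-llama/Llama-2-7b-hf checkpoint (details not from the article):

    # Minimal sketch: loading Meta's (somewhat) open LLaMa 2 weights locally.
    # Assumes the Hugging Face `transformers` library and an accepted license
    # request for the gated "meta-llama/Llama-2-7b-hf" checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-hf"

    # Because the weights themselves are published, anyone can download and
    # inspect them, rather than calling a closed, proprietary API endpoint.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Open-source AI matters because"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))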

For LeCun, keeping AI development closed is a real reason for alarm.

"The alternative, which will inevitably happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet," he wrote.

"What does that mean for democracy? What does that mean for cultural diversity?"
