Vitalik Buterin Stresses AI Risks Amid OpenAI Leadership Upheaval

Original link: https://www.zerohedge.com/technology/vitalik-buterin-stresses-ai-risks-amid-openai-leadership-upheaval

Ethereum co-founder Vitalik Buterin has voiced concerns about "superintelligent" artificial intelligence (AI), calling it risky amid recent leadership changes at OpenAI. A Cointelegraph report noted that OpenAI leader Jan Leike resigned over disagreements about safety culture and priorities. Industry experts worry that current infrastructure may be unable to adequately manage such advanced AI systems. Buterin urges keeping an open mind and avoiding hasty action in AI development. Instead, he proposes open models that run on consumer hardware as a potential safeguard against monopolistic control. Earlier, Buterin claimed that OpenAI's GPT-4 model has passed the Turing test, implying human-like behavior. Governments, including the United Kingdom's, are likewise increasingly concerned about Big Tech's influence over AI innovation and its potential dominance. Grassroots movements such as 6079 advocate decentralized AI to counterbalance these trends. Another prominent figure, OpenAI co-founder and chief scientist Ilya Sutskever, announced his departure but remains optimistic about the company's ability to create safe and beneficial AGI. These events mark an ongoing discussion around AI's evolving capabilities and societal impact.


Original article

Authored by Savannah Fortis via CoinTelegraph.com,

Ethereum co-founder Vitalik Buterin has shared his take on “superintelligent” artificial intelligence, calling it “risky” in response to ongoing leadership changes at OpenAI. 

On May 19, Cointelegraph reported that OpenAI’s former head of alignment, Jan Leike, resigned after saying he had reached a “breaking point” with management on the company’s core priorities.

Leike alleged that “safety culture and processes have taken a backseat to shiny products” at OpenAI, with many pointing toward developments around artificial general intelligence (AGI).

AGI is anticipated to be a type of AI equal to or surpassing human cognitive capabilities — the thought of which has already begun to worry industry experts, who say the world isn’t properly equipped to manage such superintelligent AI systems.

This sentiment seems to align with Buterin’s views. In a post on X, he shared his thoughts on the topic, emphasizing that people should not rush into action or push back against those who try. 

Source: Vitalik Buterin 

Buterin stressed open models that run on consumer hardware as a “hedge” against a future in which a small conglomerate of companies would be able to read and mediate most human thought. 

“Such models are also much lower in terms of doom risk than both corporate megalomania and militaries.”

This is his second comment in the last week on AI and its increasing capabilities.

On May 16, Buterin argued that OpenAI’s GPT-4 model has passed the Turing test, which determines the “humanness” of an AI model. He cited new research claiming that most humans can’t determine when they are talking to a machine.

However, Buterin is not the first to express this concern. The United Kingdom’s government also recently scrutinized Big Tech's increasing involvement in the AI sector, raising issues related to competition and market dominance.

Groups like 6079 are already emerging across the internet, advocating for decentralized AI to ensure it remains more democratized and not dominated by Big Tech.

Source: 6079

This follows the departure of another senior member of OpenAI’s leadership team on May 14, when Ilya Sutskever, co-founder and chief scientist, announced his resignation.

Sutskever did not mention any concerns about AGI. However, in a post on X, he expressed confidence that OpenAI will develop an AGI that is “safe and beneficial.”
