OpenAI Expands Lobbying Army To Influence Regulation

Original link: https://www.zerohedge.com/technology/openai-expands-lobbying-army-influence-regulation

OpenAI, the artificial intelligence (AI) research lab that has struck major deals with Microsoft and Apple, is stepping up its international lobbying effort to shape AI regulation worldwide. Its lobbying team has grown from 3 members at the start of 2023 and is planned to reach 50 by the end of the year. The expansion follows controversial deals between OpenAI and tech giants, as well as the dissolution of its safety-focused team. Anna Makanju, OpenAI's vice-president of government affairs, stressed that the aim is not to evade regulation or prioritize profit, but to ensure that AGI (artificial general intelligence) benefits all of humanity. OpenAI's lobbyists are currently stationed in Belgium, the UK, Ireland, France, Singapore, India, Brazil, and the US, where AI policymaking is most active. OpenAI initially hired AI policy experts and specialist advocates, but is now recruiting conventional tech-industry lobbyists, marking a shift in strategy. One source familiar with the negotiations said, "They're just wanting to influence legislators in ways that Big Tech has done for over a decade." Despite public distrust of Silicon Valley companies, Makanju aims to change how people perceive the difference between today's AI technology and future developments that will require more nuanced regulation, adding that OpenAI's focus remains on innovation, bringing beneficial technology to people, and achieving a safe future. David Robinson, OpenAI's head of policy planning, described the team's ambitious goal as creating laws that advance innovative technology while maintaining a safe environment. Critics contend that this influence push may stem from the recent dissolution of OpenAI's safety team rather than genuine concern about AI regulation. Either way, the behavior is typical of the Silicon Valley ecosystem.

Original Text

A new report from the Financial Times reveals that OpenAI is expanding its international lobbying army, aiming to sway politicians and regulators who are tightening their grip on artificial intelligence.

OpenAI's move to expand its lobbyist team from three at the start of 2023 to 35 and soon to 50 by the end of the year comes after sweetheart deals with Microsoft and Apple to infiltrate billions of smartphones worldwide. Also, just weeks ago, the startup dissolved a team focused on ensuring AI safety. 

"We are not approaching this from a perspective of we just need to get in there and quash regulations . . . because we don't have a goal of maximizing profit; we have a goal of making sure that AGI benefits all of humanity," said Anna Makanju, OpenAI's vice-president of government affairs, referring to artificial general intelligence.  

OpenAI's lobbyists are being positioned to counter the spread of AI legislation around the world. They are being sent to Belgium, the UK, Ireland, France, Singapore, India, Brazil, and the US, the countries where AI legislation is most advanced.

"Initially, OpenAI recruited people deeply involved in AI policy and specialists, whereas now they are just hiring run-of-the-mill tech lobbyists, which is a very different strategy," said one person who has directly engaged with OpenAI on creating legislation. 

"They're just wanting to influence legislators in ways that Big Tech has done for over a decade," the person said. 

Makanju said the startup is attempting to address some hangovers from the social media age, which has sparked great "distrust of Silicon Valley companies." 

"Unfortunately, people are often seeing AI with the same lens," she said, adding, "We spend a lot of time making sure people understand that this technology is quite different, and the regulatory interventions that make sense for it will be very different."

David Robinson, head of policy planning at OpenAI, said the global affairs team has ambitious goals: "The mission is safe and broadly beneficial, and so what does that mean? It means creating laws that not only let us innovate and bring beneficial technology to people but also end up in a world where the technology is safe." 

It's unbelievable: these executives pretend to hold themselves accountable while proposing the very laws that will govern their own products.

OpenAI needs to start following laws, and the first one they can start with is copyright. 

It's clear that the moment the safety team dissolved, OpenAI started ramping up its lobbying efforts to exert influence on AI legislation worldwide. This is typical Silicon Valley behavior, nothing more.
