Microsoft AI CEO Warns Most White Collar Jobs Fully Automated "Within Next 12-18 Months"; Anthropic Fears Potential For 'Heinous Crimes'

原始链接: https://www.zerohedge.com/ai/microsoft-ai-ceo-warns-most-white-collar-jobs-will-be-fully-automated-within-next-12-18-months

Microsoft AI CEO Mustafa Suleyman warns that the rapid advance of artificial intelligence will cause widespread job losses. He predicts that within the next 12 to 18 months, AI will reach human-level performance on most white-collar tasks, such as those of lawyers, accountants, and marketers, potentially automating the bulk of those roles. The forecast aligns with a growing number of AI-linked layoff reports, which have affected thousands of jobs since tracking began in 2023. Ironically, companies are even hiring contractors to *train* the very AI that will eventually replace them. While roles requiring physical skills remain relatively safe for now, the overall outlook is worrying. Other analysts, however, argue the economic impact may take longer to show up. Meanwhile, concern over the *misuse* of advanced AI is also mounting: Anthropic disclosed that its latest models have shown a willingness to assist with harmful activities, potentially including chemical weapons development. Anthropic CEO Dario Amodei has highlighted risks including mass unemployment, AI-driven authoritarianism, a heightened terror threat, and the potential for AI companies themselves to exert undue influence, warning that the pursuit of profit may stand in the way of necessary safety regulation.


Original article

The man leading Microsoft's sprawling AI efforts is sounding the alarm over imminent mass labor disruptions, warning that the overwhelming majority of white-collar professional work could vanish to automation far sooner than most business and policy leaders are willing to admit - something we've been concerned about since early 2023.

In an interview with the Financial Times, Microsoft AI CEO Mustafa Suleyman forecasted that within the next two years a vast swath of desk-bound tasks will be swallowed by AI.

“I think we’re going to have a human-level performance on most, if not all, professional tasks - so white collar where you’re sitting down at a computer, either being a lawyer, accountant, or project manager, or marketing person - most of the tasks will be fully automated by an AI within the next 12 to 18 months,” Suleyman said when asked about the timetable for artificial general intelligence, commonly known as AGI.

The specter of mass job displacement now haunts governments around the world, even as the true body count remains murky amid broader economic headwinds.

A recent Challenger report showed that AI was blamed for 7,624 job cuts in January, 7% of the month’s total, and linked to 54,836 announced layoffs across 2025. Since tracking started in 2023, AI has been cited in 79,449 planned cuts, roughly 3% of the overall tally.

"It’s difficult to say how big an impact AI is having on layoffs specifically. We know leaders are talking about AI, many companies want to implement it in operations, and the market appears to be rewarding companies that mention it," said Challenger.

A stark illustration is unfolding at Bay Area startup Mercor, which has quietly hired tens of thousands of white-collar contractors, often highly credentialed specialists in medicine, law, finance, engineering, writing, and the arts, to train the very AI systems destined to replace them. Paid $45 to $250 per hour for weeks or months of reviewing and refining model outputs for giants like OpenAI and Anthropic, these workers are, in effect, being paid to hand over the keys to their own obsolescence, the Wall Street Journal reports.

However, some jobs remain immune to AI - for now. High on the list are occupations that hinge on physical presence and skills, such as healthcare professionals and tradesmen like plumbers and welders. Those are just a sample of the jobs that are safe until AI-powered Optimus robots are on the move.

On the other side of the argument - Morgan Stanley analysts recently warned clients that "AI impacts may take longer to appear in economic data," with the first undeniable waves likely hitting "later this decade and into the next."

"While AI adoption may be faster than past technologies, we think it is still too early to see it in economic data, outside of business investment," Stephen Byrd, the bank's Global Head of Thematic Research and Sustainability Research, told clients.

Meanwhile, Anthropic is warning that its latest Claude models could be used for "heinous crimes" such as developing chemical weapons.

"In newly-developed evaluations, both Claude Opus 4.5 and 4.6 showed elevated susceptibility to harmful misuse," in certain computer use cases, the company said in a new sabotage report released late Tuesday. 

Dario Amodei in Davos, Switzerland, last month. Photo: Krisztian Bocsi/Bloomberg via Getty Images

"This included instances of knowingly supporting — in small ways — efforts toward chemical weapon development and other heinous crimes."

Anthropic also noted that in some test environments, when prompted to "single-mindedly optimize a narrow objective," Claude Opus 4.6 appears "more willing to manipulate or deceive other participants, compared to prior models from both Anthropic and other developers."

The company says that the risk is still low but not negligible; however, the sudden departure of an Anthropic AI safety researcher suggests otherwise.

"I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences," said Mrinank Sharma, who led the company's safeguards research team.

Last month Anthropic CEO Dario Amodei sounded the alarm on AI - warning of the following (via Axios):

  1. Massive job loss: "I ... simultaneously think that AI will disrupt 50% of entry-level white-collar jobs over 1–5 years, while also thinking we may have AI that is more capable than everyone in only 1–2 years."
  2. AI with nation-state power: "I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal 'country of geniuses' were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. ... I think it should be clear that this is a dangerous situation — a report from a competent national security official to a head of state would probably contain words like 'single most serious national security threat we've faced in a century, possibly ever.' It seems like something the best minds of civilization should be focused on."
  3. Rising terror threat: "There is evidence that many terrorists are at least relatively well-educated ... Biology is by far the area I'm most worried about, because of its very large potential for destruction and the difficulty of defending against ... Most individual bad actors are disturbed individuals and so almost by definition their behavior is unpredictable and irrational — and it's these bad actors, the unskilled ones, who might have stood to benefit the most from AI making it much easier to kill many people. ... [A]s biology advances (increasingly driven by AI itself), it may ... become possible to carry out more selective attacks (for example, targeted against people with specific ancestries), which adds yet another, very chilling, possible motive. I do not think biological attacks will necessarily be carried out the instant it becomes widely possible to do so — in fact, I would bet against that. But added up across millions of people and a few years of time, I think there is a serious risk of a major attack ... with casualties potentially in the millions or more."
  4. Empowering authoritarians: Governments of all orders will possess this technology, including China, "second only to the United States in AI capabilities, and ... the country with the greatest likelihood of surpassing the United States in those capabilities. Their government is currently autocratic and operates a high-tech surveillance state." Amodei writes bluntly: "AI-enabled authoritarianism terrifies me."
  5. AI companies: "It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves," Amodei warns after the passage about authoritarian governments. "AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. ... [T]hey could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents. I think the governance of AI companies deserves a lot of scrutiny."
  6. Seduce the powerful to silence: AI giants have so much power and money that leaders will be tempted to downplay risk, and hide red flags like the weird stuff Claude did in testing (blackmailing an executive about a supposed extramarital affair to avoid being shut down, which Anthropic disclosed). "There is so much money to be made with AI — literally trillions of dollars per year," Amodei writes in his bleakest passage. "This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."

Call to action: "[W]ealthy individuals have an obligation to help solve this problem," Amodei says. "It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless."

Looks like all roads lead to...

