Meta got caught gaming AI benchmarks

Original link: https://www.theverge.com/meta/645012/meta-llama-4-maverick-benchmarks-gaming

Meta's release of its Llama 4 models, Scout and Maverick, has stirred controversy in the AI community. Maverick, a mid-size model, initially ranked near the top of the LMArena benchmark, appearing to beat competitors such as GPT-4o. It later emerged, however, that the version tested on LMArena was an optimized "experimental chat version" that differs from the publicly released model. The discrepancy prompted LMArena to update its policies to ensure fair evaluations. Critics such as Simon Willison argue that this makes the benchmark score "worthless" to developers, since the model that was tested isn't available to them. Meta defended its conduct, saying it experiments with custom variants. Adding to the controversy, Meta was accused of training the Llama 4 models specifically to excel on benchmarks, a claim the company denies. The unusual weekend launch and reported internal delays underscore the pressure Meta faces in a fiercely competitive AI field, particularly after the arrival of strong open-source models such as DeepSeek. The episode highlights the growing importance of benchmarks in AI development and the potential for companies to game the system to appear more competitive.

A Hacker News thread discusses the accusation that Meta "gamed AI benchmarks" with its Llama models. Commenters expressed disappointment with the model's performance, arguing that it does not stand out among other open-source alternatives, and questioned whether it was released prematurely. Some challenged LMArena's value as a reliable evaluation tool, arguing that its subjective nature is mainly useful to companies focused on user engagement. The discussion also extended to OpenAI, which has likewise been accused of gaming benchmarks by using training data it had promised not to use. One user suggested that "optimizing for conversationality" may favor flattering prompts, raising concerns about the motives behind benchmark comparisons. The thread closed by noting that "open-weight" black-box models can be manipulated in unpredictable ways.

Original article

Over the weekend, Meta dropped two new Llama 4 models: a smaller model named Scout, and Maverick, a mid-size model that the company claims can beat GPT-4o and Gemini 2.0 Flash “across a broad range of widely reported benchmarks.”

Maverick quickly secured the number-two spot on LMArena, the AI benchmark site where humans compare outputs from different systems and vote on the best one. In Meta’s press release, the company highlighted Maverick’s ELO score of 1417, which placed it above OpenAI’s 4o and just under Gemini 2.5 Pro. (A higher ELO score means the model wins more often in the arena when going head-to-head with competitors.)
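For readers unfamiliar with how an Elo-style rating translates into head-to-head win rates, the sketch below illustrates the standard Elo expected-score and update formulas. It is a minimal illustration, not LMArena's actual rating code; the K-factor and function names are assumptions chosen for clarity.

```python
# A minimal sketch of how Elo-style arena ratings behave. This is NOT LMArena's
# actual implementation; the K-factor and function names are illustrative
# assumptions used only to show why a higher score means more head-to-head wins.

def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b) after one human vote in a head-to-head matchup."""
    expected_a = elo_expected(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: a model rated 1417 facing one rated 1410 is expected to win
# only about 51% of matchups under this formula.
print(round(elo_expected(1417, 1410), 3))  # ~0.51
```

One takeaway from the formula: small rating gaps near the top of the leaderboard correspond to only slightly-better-than-even win rates in individual matchups.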

The achievement seemed to position Meta’s open-weight Llama 4 as a serious challenger to the state-of-the-art, closed models from OpenAI, Anthropic, and Google. Then, AI researchers digging through Meta’s documentation discovered something unusual.

In fine print, Meta acknowledges that the version of Maverick tested on LMArena isn’t the same as what’s available to the public. According to Meta’s own materials, it deployed an “experimental chat version” of Maverick to LMArena that was specifically “optimized for conversationality,” TechCrunch first reported.

“Meta’s interpretation of our policy did not match what we expect from model providers,” LMArena posted on X two days after the model’s release. “Meta should have made it clearer that ‘Llama-4-Maverick-03-26-Experimental’ was a customized model to optimize for human preference. As a result of that, we are updating our leaderboard policies to reinforce our commitment to fair, reproducible evaluations so this confusion doesn’t occur in the future.”

A spokesperson for Meta, Ashley Gabriel, said in an emailed statement that “we experiment with all types of custom variants.”

“‘Llama-4-Maverick-03-26-Experimental’ is a chat optimized version we experimented with that also performs well on LMArena,” Gabriel said. “We have now released our open source version and will see how developers customize Llama 4 for their own use cases. We’re excited to see what they will build and look forward to their ongoing feedback.”

While what Meta did with Maverick isn’t explicitly against LMArena’s rules, the site has shared concerns about gaming the system and taken steps to “prevent overfitting and benchmark leakage.” When companies can submit specially-tuned versions of their models for testing while releasing different versions to the public, benchmark rankings like LMArena become less meaningful as indicators of real-world performance.

“It’s the most widely respected general benchmark because all of the other ones suck,” independent AI researcher Simon Willison tells The Verge. “When Llama 4 came out, the fact that it came second in the arena, just after Gemini 2.5 Pro — that really impressed me, and I’m kicking myself for not reading the small print.”

Shortly after Meta released Maverick and Scout, the AI community started talking about a rumor that Meta had also trained its Llama 4 models to perform better on benchmarks while hiding their real limitations. VP of generative AI at Meta, Ahmad Al-Dahle, addressed the accusations in a post on X: “We’ve also heard claims that we trained on test sets -- that’s simply not true and we would never do that. Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations.”

Some also noticed that Llama 4 was released at an odd time. Saturday doesn’t tend to be when big AI news drops. After someone on Threads asked why Llama 4 was released over the weekend, Meta CEO Mark Zuckerberg replied: “That’s when it was ready.”

“It’s a very confusing release generally,” says Willison, who closely follows and documents AI models. “The model score that we got there is completely worthless to me. I can’t even use the model that they got a high score on.”

Meta’s path to releasing Llama 4 wasn’t exactly smooth. According to a recent report from The Information, the company repeatedly pushed back the launch due to the model failing to meet internal expectations. Those expectations are especially high after DeepSeek, an open-source AI startup from China, released an open-weight model that generated a ton of buzz.

Ultimately, using an optimized model in LMArena puts developers in a difficult position. When selecting models like Llama 4 for their applications, they naturally look to benchmarks for guidance. But as is the case for Maverick, those benchmarks can reflect capabilities that aren’t actually available in the models that the public can access.

As AI development accelerates, this episode shows how benchmarks are becoming battlegrounds. It also shows how Meta is eager to be seen as an AI leader, even if that means gaming the system.

Update, April 7th: The story was updated to add Meta’s statement.

Contact us: contact @ memedata.com