Apple's AI isn't a letdown. AI is the letdown

Original link: https://www.cnn.com/2025/03/27/tech/apple-ai-artificial-intelligence/index.html

Apple has faced criticism for its belated and underwhelming foray into AI, with commentators arguing that the company has "failed" at AI. The author, however, contends that the problem is not Apple's failure but AI's own shortcomings. Wall Street's appetite for a "super cycle" drove Apple to integrate AI hastily, producing disappointing features. Unlike other tech companies willing to ship imperfect AI products, Apple prioritizes user trust and seamless experiences; expecting consumers to adapt to AI's limitations runs counter to Apple's brand. The core issue is that AI, while scientifically fascinating, lacks practical consumer applications that deliver consistently accurate results. A slightly inaccurate AI assistant, for example, can cause real-world problems. Apple's caution is therefore not a failure; it reflects AI's current inability to serve everyday users reliably. Before AI is truly ready for broad consumer use, it needs further refinement.

A Hacker News discussion centers on an article claiming that Apple's AI is not the letdown; AI itself is. Commenters voiced frustration with AI's tendency to return generic or incorrect information, making specific answers hard to find. One user highlighted the problem of AI delivering "random, uncurated data" rather than consistent, curated facts, and suggested training AI on reliable sources such as old encyclopedias. The general consensus was that current AI models prioritize novelty over accuracy and consistency and fail to deliver reliable results. Another user pointed to the conflict between AI's unreliability and Apple's "it just works" philosophy, suggesting that consumers expect a higher standard of quality. The discussion questioned AI's practical utility when factual accuracy and reliable output are what users need.

Original text

A version of this story appeared in CNN Business’ Nightcap newsletter. To get it in your inbox, sign up for free here.

New York CNN  — 

Apple has been getting hammered in tech and financial media for its uncharacteristically messy foray into artificial intelligence. After a June event heralding a new AI-powered Siri, the company has delayed its release indefinitely. The AI features Apple has rolled out, including text message summaries, are comically unhelpful.

The critique of Apple’s halting rollout is not entirely unfair. Though it is, at times, missing the point.

Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it’s the future! What problems is it solving? Well, so far that’s not clear! Are customers demanding it? LOL, no. In fact, last year the backlash against one of Apple’s early ads for its AI was so hostile the company had to pull the commercial.

The real reason companies are doing this is because Wall Street wants them to. Investors have been salivating for an Apple “super cycle” — a tech upgrade so enticing that consumers will rush to get their hands on the new model.

In a rush to please shareholders, Apple made a rare stumble. The company is owning its error, it seems, and has said the delayed features would roll out “in the coming year.”

Of course, the cryptic delay has only given oxygen to the narrative that Apple has become a laggard in the Most Important Tech Advancement in decades.

And that is where the Apple-AI narrative goes off the rails.

There’s a popular adage in policy circles: “The party can never fail, it can only be failed.” It is meant as a critique of the ideological gatekeepers who may, for example, blame voters for their party’s failings rather than the party itself.

That same fallacy is taking root among AI’s biggest backers. AI can never fail, it can only be failed. Failed by you and me, the smooth-brained Luddites who just don’t get it. (To be sure, even AI proponents will acknowledge available models’ shortcomings — no one would argue that the AI slop clogging Facebook is anything but, well, slop — but there is a dominant narrative within tech that AI is both inevitable and revolutionary.)

Tech columnists such as the New York Times’ Kevin Roose have suggested recently that Apple has failed AI, rather than the other way around.

“Apple is not meeting the moment in AI,” Roose said on his podcast, Hard Fork, earlier this month. “I just think that when you’re building products with generative AI built into it, you do just need to be more comfortable with error, with mistakes, with things that are a little rough around the edges.”

To which I would counter, respectfully: Absolutely not.

Roose is right that Apple is, to put it mildly, a fastidious creator of consumer products. It is, after all, the $3-trillion empire built by the notoriously detail-obsessed Steve Jobs.

The Apple brand is perhaps the most meticulously controlled corporate identity on the planet. Its “walled garden” of iOS — despised by developers and fair game for accusations of monopolistic behavior, to be sure — is also part of the reason one billion people have learned to trust Apple with their sensitive personal data.

Apple’s obsession with privacy and security is the reason most of us don’t think twice about scanning our faces, storing bank account information or sharing our real-time location via our phones.

And not only do we trust Apple to keep our data safe, we trust it to design things that are accessible out of the box. You can buy a new iPhone, AirPods or Apple Watch and trust that the moment you turn it on, a user-friendly system will hold your hand through the setup and seamlessly sync it with your other devices. You will almost never need a user manual filled with tiny print. Even your Boomer parents will be able to navigate FaceTime calls with minimal effort.

Roose contends, at one point in the episode, that “there are people who use AI systems who know that they are not perfect,” and that those regular users understand there’s a right way and a wrong way to query a chatbot.

This is where we, the people, are apparently failing AI. Because in addition to being humans with jobs and social lives and laundry to fold and art to make and kids to raise, we should also learn how to tiptoe around the limitations of large language models that may or may not return accurate information to us.

Apple, Roose says, should keep pushing AI into its products and just get used to the idea that those features may be unpolished and a little too advanced for the average user.

And again, respectfully, I would ask: To what end?

As Hard Fork co-host Casey Newton notes in the same episode, it’s not as if Google or Amazon has figured out some incredible use case that’s making users rush to buy a new Pixel phone or an Echo speaker.

“AI is still so much more of a science and research story than it is a product story,” Newton notes.

In other words: Large language models are fascinating science. They are an academic wonder with huge potential and some early commercial successes, such as OpenAI’s ChatGPT and Anthropic’s Claude. But a bot that’s 80% accurate — a figure Newton made up, but we’ll go with it — isn’t a very useful consumer product.

Back in June, Apple floated a compelling scenario for its newfangled Siri. Imagine yourself, frazzled and running late for work, simply saying into your phone: Hey Siri, what time does my mom’s flight land? And is it at JFK or LaGuardia? In theory, Siri could scan your email and texts with your mom and give you an answer. That saves you several annoying steps of opening your email to find the flight number, copying it, then pasting it into Google to find the flight’s status.

If it’s 100% accurate, it’s a fantastic time saver. If it is anything less than 100% accurate, it’s useless. Because even if there’s a 2% chance it’s wrong, there’s a 2% chance you’re stranding mom at the airport, and mom will be, rightly, very disappointed. Our moms deserve better!

Bottom line: Apple is not the laggard in AI. AI is the laggard in AI.
