Enough AI copilots, we need AI HUDs

原始链接: https://www.geoffreylitt.com/2025/07/27/enough-ai-copilots-we-need-ai-huds

## Beyond AI Copilots: The Power of "Invisible" Interfaces

Mark Weiser's critique from 1992 remains strikingly relevant to modern AI design. Weiser warned against framing AI as a "copilot," a virtual assistant that demands your attention. Instead, he advocated an "invisible computer" that *augments* human ability without intruding.

He illustrated the point with an airplane cockpit: rather than a "copilot" shouting warnings, a well-designed interface lets the pilot perceive their surroundings naturally, so collisions are avoided in the first place. In software, this idea surfaces as the Head-Up Display (HUD): tools like spellcheck or a custom debugger that subtly extend our senses and understanding. Spellcheck doesn't *tell* you a word is misspelled; it *shows* you. Likewise, a visual debugger offers insight into a program's behavior, deepening understanding.

While "copilot"-style AI may suit routine tasks, Weiser's vision suggests giving experts tools that extend their perception and problem-solving ability, especially in complex or critical situations. The key is technology that feels like an extension of the self, not another entity to manage.


Original article

In my opinion, one of the best critiques of modern AI design comes from a 1992 talk by the researcher Mark Weiser where he ranted against “copilot” as a metaphor for AI.

This was 33 years ago, but it’s still incredibly relevant for anyone designing with AI.

Weiser’s rant

Weiser was speaking at an MIT Media Lab event on “interface agents”. They were grappling with many of the same issues we’re discussing in 2025: how to make a personal assistant that automates tasks for you and knows your full context. They even had a human “butler” on stage representing an AI agent.

Everyone was super excited about this… except Weiser. He was opposed to the whole idea of agents! He gave this example: how should a computer help you fly a plane and avoid collisions?

The agentic option is a “copilot” — a virtual human who you talk with to get help flying the plane. If you’re about to run into another plane it might yell at you “collision, go right and down!”

Weiser offered a different option: design the cockpit so that the human pilot is naturally aware of their surroundings. In his words: “You’ll no more run into another airplane than you would try to walk through a wall.”

Weiser’s goal was an “invisible computer”—not an assistant that grabs your attention, but a computer that fades into the background and becomes “an extension of [your] body”.

Weiser’s 1992 slide on airplane interfaces

HUDs

There’s a tool in modern planes that I think nicely illustrates Weiser’s philosophy: the Head-Up Display (HUD), which overlays flight info like the horizon and altitude on a transparent display directly in the pilot’s field of view.

A HUD feels completely different from a copilot! You don’t talk to it. It’s practically invisible—you just become naturally aware of more things, as if you had magic eyes.

Designing HUDs

OK enough analogies. What might a HUD feel like in modern software design?

One familiar example is spellcheck. Think about it: spellcheck isn’t designed as a “virtual collaborator” talking to you about your spelling. It just instantly adds red squigglies when you misspell something! You now have a new sense you didn’t have before. It’s a HUD.

(This example comes from Jeffrey Heer’s excellent Agency plus Automation paper. We may not consider spellcheck an AI feature today, but it’s still a fuzzy algorithm under the hood.)

Spellcheck makes you aware of misspelled words without an “assistant” interface.
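As a toy illustration (my own sketch, not from the original post), the spellcheck-as-HUD idea can be shown in a few lines: instead of a message *telling* you about a mistake, the text is annotated in place so you simply *see* it. The tiny `KNOWN_WORDS` set is a stand-in for a real dictionary.

```python
# HUD-style spellcheck sketch: annotate the text itself rather than
# report errors through a conversational channel. The word list is a
# toy stand-in for a real dictionary.
KNOWN_WORDS = {"we", "will", "receive", "the", "package", "tomorrow"}

def annotate(text: str) -> str:
    """Return the text with a second line of squiggles under unknown words."""
    words, marks = [], []
    for word in text.split():
        stripped = word.strip(".,!?").lower()
        words.append(word)
        # Squiggle under misspellings; blank padding under known words.
        marks.append("~" * len(word) if stripped not in KNOWN_WORDS else " " * len(word))
    return " ".join(words) + "\n" + " ".join(marks)

print(annotate("We will recieve the package tomorow"))
```

The point of the sketch is the interface shape: the output is the writer's own text with a new "sense" layered onto it, not a dialogue about the text.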

Here’s another personal example from AI coding. Let’s say you want to fix a bug. The obvious “copilot” way is to open an agent chat and ask it to do the fix.

But there’s another approach I’ve found more powerful at times: use AI to build a custom debugger UI which visualizes the behavior of my program! In one example, I built a hacker-themed debug view of a Prolog interpreter.

With the debugger, I have a HUD! I have new senses, I can see how my program runs. The HUD extends beyond the narrow task of fixing the bug. I can ambiently build up my own understanding, spotting new problems and opportunities.
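A debug HUD can be sketched in miniature (this toy tracer is my own illustration, not the Prolog debugger from the post): instead of asking an assistant what the program did, you instrument the program so its call structure is simply visible, rendered as an indented tree you can scan ambiently.

```python
import functools

def trace_hud(fn):
    """Wrap fn so every call and return is recorded as an indented tree,
    giving an ambient view of program behavior."""
    depth = 0
    log = []

    @functools.wraps(fn)
    def wrapper(*args):
        nonlocal depth
        log.append("  " * depth + f"{fn.__name__}({', '.join(map(repr, args))})")
        depth += 1
        result = fn(*args)
        depth -= 1
        log.append("  " * depth + f"-> {result}")
        return result

    wrapper.log = log  # the "HUD": a readable trace of what happened
    return wrapper

@trace_hud
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(3)
print("\n".join(fib.log))
```

Running this prints the full recursion tree of `fib(3)`, so problems like redundant calls jump out visually instead of having to be asked about.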
