Cyborgs vs. rooms, two visions for the future of computing

Original link: https://interconnected.org/home/2025/10/13/dichotomy

The future of human–computer interaction appears to be splitting into two main visions: **augmenting the person** and **augmenting the space**. Industry currently favors the "augment the person" path, focused on enhancing *people* with technology: smartwatches, AI glasses, and perhaps one day biological enhancements that build technology *into* the body. The underlying idea is extending human capability through external tools.

The other vision, "augment the space", instead enhances the *environment* with computing. It spans everything from smart homes and ambient computing to large-scale interactive spaces such as Bret Victor's Dynamicland. At its core is the creation of intelligent environments, a "Holodeck"-style future in which technology is embedded in the world around us and responds to voice, gesture, and presence.

The author leans toward the "augment the space" approach, framing the dichotomy as somaforming versus terraforming: changing ourselves to adapt to a new environment, or changing the environment to adapt to us. This paradigm emphasizes multi-actor, real-world interaction and offers a compelling alternative to increasingly personal, body-centric cyborg technology. The key question: which core technologies should be developed to build these intelligent, interactive spaces?

## Cyborgs vs. rooms: the realistic future of computing

A recent Hacker News discussion explored two visions of computing's future: a "cyborg" future of pervasive personal technology integration, and a "room" future of fully computerized environments. Commenters broadly agreed that the more likely outcome is a pragmatic blend, with computers as *tools* used when needed rather than full immersion in either extreme.

Many pointed out that computers are *already* everywhere; smartphones, for instance, have become extensions of our bodies. Some worried about a loss of control in a heavily "cyborg" future, questioning whether people could truly disconnect from systems ever more woven into daily life, from digital identity to financial infrastructure.

Others favored the "room" idea but acknowledged it may remain out of reach for much of the world. A recurring theme was the historical boom-and-bust pattern of technologies eventually settling into a balanced place in the infrastructure rather than total dominance. Ultimately, the focus should be on developing enabling abstractions rather than forcing technology into every corner of life.

Original article

Loosely I can see two visions for the future of how we interact with computers: cyborgs and rooms.

The first is where the industry is going today; I’m more interested in the latter.

Cyborgs

Near-term, cyborgs means wearables.

The original definition of cyborg by Clynes and Kline in 1960 was of a human adapting its body to fit a new environment (as previously discussed).

Apple AirPods are cyborg enhancements: transparency mode helps you hear better.

Meta AI glasses augment you with better memory and the knowledge of the internet – you mutter your questions and the answer is returned in audio, side-loaded into your working memory. Cognitively this feels just like thinking hard to remember something.

I can see a future being built out where I have a smart watch that gives me a sense of direction, a smart ring for biofeedback, smart earphones and glasses for perfect recall and anticipation… Andy Clark’s Natural Born Cyborgs (2003) lays out why this is perfectly impedance-matched to how our brains work already.

Long term? I’ve joked before about a transcranial magnetic stimulation helmet that would walk my legs to work and this is the cyborg direction of travel: nootropics, CRISPR gene therapy, body modification and slicing open your fingertips to insert magnets for an electric field sixth sense.

But you can see the cyborg paradigm in action with hardware startups today trying to make the AI-native form factor of the future: lapel pins, lanyards, rings, Neuralink and other brain-computer interfaces…

When tech companies think about the Third Device - the mythical device that comes after the PC and the smartphone - this is what they reach for: the future of the personal computer is to turn the person into the computer.

Rooms

Contrast augmented users with augmented environments. Notably:

  • Dynamicland (2018) – Bret Victor’s vision of a computer that is a place, a programmable room
  • Put-that-there (1980) – MIT research into room-scale, multimodal (voice and gesture) conversational computing
  • Project Cybersyn (1971) – Stafford Beer’s room-sized cybernetic brain for the economy of Chile
  • SAGE (as previously discussed) (1958–) – the pinnacle of computing before the PC, group computing out of the Cold War.

And innumerable other HCI projects…
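The Put-that-there idea in the list above is simple to state: deictic words like "that" and "there" are resolved against where the user was pointing at the moment each word was spoken. As a rough illustration only (the `Pointing` record and `resolve_deixis` function are my own hypothetical names, not from the MIT system), a minimal sketch of that voice-plus-gesture fusion might look like:

```python
# Toy sketch of deixis resolution: align timestamped words from a speech
# transcript with timestamped pointing-gesture targets, and substitute the
# nearest-in-time target for each deictic word.
from dataclasses import dataclass


@dataclass
class Pointing:
    t: float      # timestamp in seconds when the gesture was sampled
    target: str   # what the gesture tracker reports under the cursor


DEICTIC = {"this", "that", "here", "there"}


def resolve_deixis(words, pointings):
    """words: list of (timestamp, word); pointings: list of Pointing.
    Returns the word list with deictic words replaced by gesture targets."""
    resolved = []
    for t, word in words:
        if word in DEICTIC and pointings:
            nearest = min(pointings, key=lambda p: abs(p.t - t))
            resolved.append(nearest.target)
        else:
            resolved.append(word)
    return resolved


words = [(0.0, "put"), (0.4, "that"), (0.9, "there")]
gestures = [Pointing(0.45, "blue_square"), Pointing(0.95, "map_corner")]
print(resolve_deixis(words, gestures))
# -> ['put', 'blue_square', 'map_corner']
```

A real room-scale system adds streaming recognition, gaze, and dialogue state on top, but the core move is the same: two modalities joined on a shared timeline.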

The vision of room-scale computing has always had factions.

Is it ubiquitous computing (ubicomp), in which computing power is embedded in everything around us, culminating in smart dust? Is it ambient computing, which also supposes that computing will be invisible? Or calm computing, which is more of a design stance that computing must mesh appropriately with our cognitive systems instead of chasing attention?

So there’s no good word for this paradigm, which is why I call it simply room-scale: the scale at which I can act as a user.

I would put smart speakers in the room-scale/augmented environments bucket: Amazon Alexa, Google Home, all the various smart home systems like Matter, and really the whole internet of things movement – ultimately it’s a Star Trek Holodeck/Computer… way of seeing the future of computer interaction.

And robotics too. Roomba, humanoid robots that do our washing up, and tabletop paper robots that act as avatars for your mates are all part of this room-scale paradigm.


Rather than “cyborg”, I like sci-fi author Becky Chambers’ concept of somaforming (as previously discussed), the same concept but gentler.

Somaforming vs. terraforming: changing ourselves to adapt to a new environment, or changing the environment to adapt to us.


Both cyborgs and rooms are decent North Stars for our collective computing futures, you know?

Both can be done in good ways and ugly ways. Both can make equal use of AI.

Personally I’m more interested in room-scale computing and where that goes. Multi-actor and multi-modal. We live in the real world and together with other people, that’s where computing should be too. Computers you can walk into… and walk away from.

So it’s an interesting question: while everyone else is building glasses, AR, and AI-enabled cyborg prosthetics that hang round your neck, what should we build irl, for the rooms where we live and work? What are the core enabling technologies?

It has been overlooked, I think.
