AI is turning us into glue

Original link: https://lincoln.swaine-moore.is/writing-about/ai-and-glue

On April 18, 2025, the author reflects on AGI, and in particular on the impact of large language models (LLMs) on his software development work. LLMs like o3 speed up bug fixing, but the author misses the pleasure of solving intellectual puzzles. He worries that AGI will automate "deep linear thinking," confining human work to "ship-steering" (high-level design) and "plumbing" (tedious, low-level tasks). In the near future he foresees either repetitively directing AI tools or being reduced to configuring infrastructure, work he finds unfulfilling in both cases. Eventually even those roles may be automated. Hardware projects offer a temporary refuge in physical, hands-on work, but even those lose their appeal when the AI holds the game plan. The author is skeptical of utopian visions of AI freeing humans for creative work, and instead fears a future dominated by commoditized intelligence and a surplus of unfulfilling "glue" jobs. New kinds of work may emerge, but he worries they will lack intrinsic value.

A Hacker News discussion centers on an article arguing that AI is turning people into "glue": connecting pieces together without deep understanding. One commenter criticized the common claim that AI supposedly speeds things up, arguing it ultimately wastes time and lowers standards. Others debated the long-term impact of large language models (LLMs) on software engineering. Concerns included LLM effectiveness plateauing, and humans losing skills through over-reliance on AI, leaving systems hard to maintain. Some foresee a future in which "AI problem fixers" or developers of "artisanal, handmade software" become highly valued. Counterarguments included that demand for software is effectively unlimited, and that AI is merely a better tool for accessing information. Some found AI helpful for tedious tasks, freeing them to focus on more challenging work. Another user disagreed with the claim that people already have most of the software they want.

  • Original article
    2025-04-18

    Not like that, probably.

    I'm trying to learn to stop worrying and love the AGI, but I'm feeling pretty bleak about it.

    I make software as my day job, and like basically everyone I know at this point, I've used LLMs to get some stuff done faster. o3 came out yesterday, and already it's helped me get to the bottom of a thorny bug, with a lot less trial and error than I would've needed otherwise. On its face, this is a good thing. So what's the problem?

    Well, I like fixing thorny bugs! They're puzzles, and digging into them lets me learn about parts of the computer I usually don’t see. Same goes for refactoring--when I'm doing it right, I'm understanding the shape of my system better, and crystallizing that into a structure that expresses it. Solving these puzzles scratches an itch in my brain. I'm not sure it's the most rewarding part of my job, but it is the part I most enjoy.

    I don't think we're quite there yet, but the writing is on the wall: very conservatively, within ten years, I'll be inferior to the computer at doing most tasks roughly shaped like "deep linear thinking about a concrete problem".

    When you excise that role from this work, you're left with two chunks that are mostly nonadjacent. There's ship-steering, and there's plumbing (forgive the mixed metaphors, here and throughout). When I hear from people who are excited about an AI-empowered future, they're universally talking about the former. The promise of vibe coding is that you only need to care about the top layer of the work--bring your sensibilities (idea / design / ethos) to the table, and the machine will do the rest for you. The human is freed to do the human work.

    I've got a few ideas, and I can almost talk myself into liking that world. But in my experience, it's just not the full story for anything at all sophisticated. For one, even when using an agent with tools, there are issues that a human can see that the system cannot. If I'm building a web application, and Claude Code has written some styles for it per my prompting, I'm the one who needs to verify that it looks right in the browser. Inevitably it doesn't, because that's just how writing styles goes. And since I'm unfamiliar with the styles myself, not having written them, the easiest thing for me to do is bring the issue to Claude, and let it churn. Rinse and repeat. Writing up a bug is a lot less fun than fixing it, and makes me just another tool for Claude to navigate my computer: its eyes.

    You might well object that this bit of cybernetic plumbing is not long for this world. The frontier labs are definitely working on full computer-use agents, which could handle tabbing over to the browser and checking things out as well as I could. Given how poor at spatial reasoning the best models remain, I feel like I've got a bit of moat. But even if I don't, there's plenty of plumbing left. In the short term, I'll still be the one who has to figure out how to pipe logs from one platform to another, or to configure the access policy on a storage bucket so that the agent's code can actually put stuff there. That's good for my job security, but unfortunately I don't much like doing those things. I'd much rather be thinking about the meat of my project than looking up 2FA codes for my nth cloud provider. But it'll get harder to justify that use of time versus the glue-like tasks.

    The good (?) news is that in the slightly longer run, even that sort of role will get gobbled up. At that point, I see myself as the link between AI and the tangible world. For the foreseeable future, if I'm working on a hardware project, I'll be the one connecting jumper wires on a breadboard, or futzing with an antenna. I love tinkering, but it'll be a lot less fun when it's the computer who knows the game plan. If I'm lucky, I might be the Idea Captain steering the ship, but there'll only be so much steering to do before I need to consult an oracle about what to solder where. And I'm skeptical that everyone can make a living as the Idea Captain of their own ship.

    I have no idea what's going to happen further down the line than that. Putting aside existential risks, I don't see a future where a lot of jobs don't cease to exist. The bullish case is that we'll create new ones we can't even imagine now, which empower people to self-actualize in ways they could never have before. But in a world of commoditized (super)intelligence, I worry that a lot of those new jobs will look like glue.
