AI Didn't Simplify Software Engineering: It Just Made Bad Engineering Easier

Original link: https://robenglander.com/writing/ai-did-not-simplify/

## The Illusion of Painless Software Development

A dangerous misconception is circulating in the software industry: that artificial intelligence, and large language models (LLMs) in particular, can eliminate the need for skilled software engineers. LLMs can generate code quickly, but that has been wrongly equated with simplified software *development*. The pattern is not new; history is full of "breakthrough" tools that promised to bypass engineering discipline, only to end in mounting complexity and eventual failure.

The core problem is not *writing* code but *understanding* systems: their design, their interactions, and their long-term behavior. LLMs accelerate code production, but they also accelerate "spec drift," in which the implementation diverges from the original specification and tests, producing unreliable systems.

LLMs are valuable tools for exploration and drafting, but they cannot replace the critical thinking, judgment, and continuous validation that professional software engineering requires. Just as trained mechanics are essential to maintaining complex aircraft, experienced engineers are essential to building and maintaining reliable software.

The emphasis should be on using LLMs to *strengthen* the engineering process, enabling a more conversational and iterative approach to design, rather than treating them as a substitute for expertise. Pursuing rapid code generation at the expense of engineering discipline is shortsighted and will ultimately yield software that is more complex, fragile, and unreliable.

## AI and Software Engineering: A Complicated Relationship

A recent Hacker News discussion centered on whether AI has genuinely *simplified* software engineering or merely made *bad* engineering easier. The consensus leaned toward the latter. While AI tools can increase speed and assist with tasks such as research and code generation, they do not address the field's core challenges: designing complex systems, ensuring code correctness, and understanding nuanced requirements.

Many commenters noted that coding itself was never the main difficulty; the real expertise lies in problem solving, user-experience considerations, and long-term maintainability. AI excels at reproducing existing patterns but struggles with novel situations and with deep, system-level understanding of safety. Even though skilled engineers will still be needed, there is concern about job losses if AI is treated purely as a cost-cutting measure.

Ultimately, AI is seen as an amplifier of existing skill: a powerful tool that requires thoughtful application and cannot replace the critical thinking of experienced developers. It is faster and cheaper, but not necessarily simpler, and sustained quality remains a significant concern.

Original Text

The software industry is trying very hard right now to convince itself that software engineering is no longer necessary. Now anyone can do it. I'm calling bullshit!

Large language models can certainly write code, and sometimes that can be a time saver. Rather than searching Stack Overflow and other sources I can go from description to code quickly. Sometimes it's spot on. Often it's not. But it's an impressive advance, no doubt. However, the industry seems to have concluded that software development has finally been simplified to the point where the expertise isn't needed. If code can be generated on demand, then the hard part must be over. Architecture, specifications, careful validation — are those just quaint artifacts? Nonsense.

In some organizations this idea isn’t even being explored cautiously. It has already begun to shape policy. Engineers are being laid off in startling numbers with AI advances cited as making expertise redundant. The truth is that AI is just the latest excuse to deflect from bad business decisions or overwhelming market forces.

The disciplines that governed how complex systems are built are being abandoned practically wholesale. Prompting an AI is increasingly being presented as a replacement for the discipline that once defined software engineering. Should I use another expletive here? You can fill in your favorite.

I’m writing this because I'm feeling déjà vu all over again (anyone know who Yogi Berra was?). Over the course of a long career you start to recognize certain patterns in this industry. Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in, in the hope that the market will respond positively. And then, slowly, the systems continue to grow. The complexity grows. And now what?

I’ve spent more than four decades in this industry, and I’ve watched several cycles like this play out. The tools change and the arguments change, but the pattern rarely does.

It never works out the way people expect.


The Aircraft Maintenance Problem

Aircraft maintenance has evolved as the aircraft systems themselves have evolved. The hand tools improved. Diagnostics became computerized. Manuals are digital. Procedures are well documented. AI systems can help interpret telemetry from the aircraft. Given all that progress, do we still need trained aircraft mechanics? Of course.

Modern aircraft are extraordinarily complex systems. A commercial airliner contains millions of parts and thousands of interconnected subsystems. Diagnosing a problem is not simply a matter of having the right tools or following a checklist. It requires experience. It requires judgment. It requires understanding how those systems behave under real operating conditions.

The tools help. The manuals help. The diagnostic systems help. But none of those things replace the expertise of the people responsible for maintaining the aircraft. No airline would ever suggest that improved tools eliminate the need for trained mechanics in favor of having the gate agent do repairs (sorry, no offense to gate agents).

Yet that is very close to the argument the software industry is now making about itself. Apparently we can finally get rid of those pesky software developers?


DIY vs Professional Systems

Before going further it’s worth clarifying something important. I’m not talking about hobbyists. I’m not talking about someone experimenting with a small application, building something for personal use, or exploring a new idea. People should absolutely do those things. Some of the most interesting ideas in computing have come from exactly that kind of experimentation.

But professional software development is a different category entirely.

Professional software is not a hobby project. It is a product. It is something customers rely on. It processes payments, stores sensitive information, manages infrastructure, and increasingly operates systems that people depend on every day. Once software crosses that line, the expectations change.

Customers assume the system behaves correctly. They assume it will continue to behave correctly as it evolves. They assume the people building it understand how the system actually works. Those expectations are not unreasonable. They are the basic conditions of professional engineering. And that is where discipline and expertise become unavoidable.


Code Was Never the Hard Part

One of the longest-standing misconceptions about software development is that writing code is the difficult part of the job. It never was. Typing syntax into a machine has always been the least interesting part of building a system. The difficult work lies elsewhere: deciding how the system should behave, determining how its components interact, and ensuring that the system remains understandable as it grows in complexity.

Those questions require design decisions, careful reasoning, and a clear understanding of how changes propagate through a system over time. They are engineering problems, not coding problems.

Reducing the effort required to produce code does not eliminate those problems. It simply allows people to produce larger and more complicated systems more quickly. The delusion is that this is a productivity gain. It's not. Not yet. It has shifted the burden elsewhere. Just consider code review and the cognitive load required to actually deal with all of the code someone can now generate. That's ultimately a bigger drain on productivity than writing the code. And if the underlying behavior has not been understood clearly enough, the additional speed merely accelerates the moment when the complexity becomes unmanageable and the outcome turns out to be wrong.


We’ve Seen This Before

In the 1990s we heard something similar about tools such as Visual Basic. The promise was that programming had been democratized and that software development would no longer require specialized expertise. Anyone with a useful idea could now produce an application.

There was some truth to that claim. Visual Basic enabled many applications that might never have been written otherwise. But it didn't eliminate the need for engineering discipline.

As systems grew larger and more interconnected, organizations rediscovered something important: producing software artifacts is not the same thing as engineering reliable systems.

What we are seeing today is the same pattern again, only amplified. Instead of lowering the barrier to building applications, large language models lower the barrier to producing code itself.

From that has emerged the seductive belief that expertise is no longer necessary.


The Alignment Problem

Up to this point the hype sounds like reality. Better tools. Faster output. Less friction. But every wave of enthusiasm in this industry eventually runs into the same problem. It isn’t a tooling problem. It isn’t really a productivity problem either. It’s a systems problem.

Reliable software depends on something that most people outside engineering rarely talk about: alignment. A system begins with an idea about how something should behave. That idea is written down as a specification. Engineers translate that specification into tests and into production code. For the system to remain reliable over time, those three things have to stay aligned.

The specification describes the behavior. The tests verify it. And the implementation actually performs it. When those three drift apart, the system slowly begins to lose its integrity.

Specifications describe behavior that the system no longer implements. Tests verify fragments of behavior but miss the rest. Engineers who arrive later are forced to infer how the system really behaves by reading code that may or may not reflect the original design.
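As a toy sketch of that three-way drift (the names and numbers here are my own hypothetical invention, not from the article or its whitepaper): the spec says one thing, the implementation quietly says another, and the test covers only a fragment of the behavior, so nothing fails:

```python
# Spec (as written): "Free shipping applies to orders of $100 or more."

def qualifies_for_free_shipping(total: float) -> bool:
    # The implementation was later changed to "over $100" (strictly greater)
    # without the spec or the tests being updated. The spec now describes
    # behavior the system no longer implements.
    return total > 100.0

def test_free_shipping():
    # The test verifies only a fragment of the behavior: it never probes
    # the $100 boundary, so the drift goes undetected.
    assert qualifies_for_free_shipping(150.0)
    assert not qualifies_for_free_shipping(50.0)

test_free_shipping()  # passes, even though total == 100.0 now contradicts the spec
```

At exactly $100 the spec promises free shipping and the code denies it. The three artifacts disagree, and it is precisely the passing test that makes the drift invisible.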

At first that seems manageable. A few educated guesses here and there. But over time the guesses pile up. Eventually the system becomes something nobody really understands anymore.

In my whitepaper Engineering Alignment, I describe this phenomenon as spec drift. Spec drift is exactly what it sounds like: the description of the system and the system itself gradually move apart.

Sometimes the code changes and the specification doesn’t. Sometimes the specification evolves but the tests remain frozen. Sometimes the behavior shifts incrementally until nobody can say with confidence what the original intent actually was.

However it happens, the result is the same. The system loses alignment. And once that happens, reliability rarely survives for long. You can read more about this problem here: https://robenglander.com/writing/engineering-alignment/


AI Accelerates the Drift

Large language models dramatically accelerate the production of code. That is their greatest strength. It is also where the danger appears.

When code can be produced faster than the engineering discipline surrounding it, the forces that create spec drift begin to accelerate. Changes that once required careful thought and manual implementation can now appear in seconds. Entire sections of a system can be rewritten before anyone has asked whether the behavior still corresponds to the specification.

The code usually looks reasonable. It compiles. It reads well. It might even pass the existing tests. But the alignment that once governed the system may already be gone. What appears to be productivity can quietly become the ability to move toward misalignment faster than ever before.


Where AI Actually Helps

None of this means large language models are a mistake. They are remarkable tools, and used thoughtfully they can dramatically improve the way engineers explore and design systems.

Language models are exceptionally good at helping engineers reason about problems, explore design alternatives, summarize complex systems, and generate drafts that accelerate the early stages of implementation.

Where they struggle is in the areas that require strict discipline and consistency over time. Maintaining alignment between specifications, tests, and implementation remains an engineering responsibility. No tool can replace that responsibility, although many tools can help support it.

The real opportunity lies in using language models in ways that strengthen the engineering process rather than quietly replacing it.


Conversational Software Engineering

One of the more interesting possibilities opened by language models is that parts of software engineering may become more conversational. For decades the tools we used to design systems were rigid. Specifications were documents. Architectures were diagrams. The reasoning that led to those artifacts often disappeared into meetings and hallway conversations.

Language models change that dynamic. Engineers can explore ideas interactively, test assumptions, and work through designs in ways that feel much closer to natural conversation. That ability is genuinely valuable. But conversation is not engineering.

Conversation is how ideas are explored. Engineering begins when those ideas are captured in a form that can be validated, tested, and maintained. The challenge for the next generation of engineering tools will be learning how to bridge those two worlds without losing the discipline that complex systems require.


Expertise Still Matters

Professional software still requires engineers who understand how the systems they build actually work. Tools can accelerate development, but they do not eliminate the expertise required to design, reason about, and maintain complex systems. Right now the industry seems dangerously close to forgetting that.

LLMs are remarkable tools. They can make experienced engineers far more productive. But they do not replace the engineering discipline required to build reliable systems.

Let’s use these tools effectively, not worshipfully.
