The software industry is trying very hard right now to convince itself that software engineering is no longer necessary. Now anyone can do it. I'm calling bullshit!
Large language models can certainly write code, and sometimes that saves time. Rather than searching Stack Overflow and other sources, I can go from description to code quickly. Sometimes it's spot on. Often it's not. But it's an impressive advance, no doubt. However, the industry seems to have concluded that software development has finally been simplified to the point where the expertise isn't needed. If code can be generated on demand, then the hard part must be over. Architecture, specifications, careful validation: are those just quaint artifacts? Nonsense.
In some organizations this idea isn’t even being explored cautiously. It has already begun to shape policy. Engineers are being laid off in startling numbers with AI advances cited as making expertise redundant. The truth is that AI is just the latest excuse to deflect from bad business decisions or overwhelming market forces.
The discipline that governed how complex systems are built is being abandoned practically wholesale. Prompting an AI is increasingly presented as a replacement for the discipline that once defined software engineering. Should I use another expletive here? You can fill in your favorite.
I’m writing this because I'm feeling déjà vu all over again (anyone know who Yogi Berra was?). Over the course of a long career you start to recognize certain patterns in this industry. Every few years a new tool appears and someone declares that the difficult parts of software engineering have finally been solved, or eliminated. To some it looks convincing. Productivity spikes. Demos look impressive. The industry congratulates itself on a breakthrough. Staff reductions kick in, in the hope that the market will respond positively. And then, slowly, the systems continue to grow. The complexity grows. And now what?
I’ve spent more than four decades in this industry, and I’ve watched several cycles like this play out. The tools change and the arguments change, but the pattern rarely does.
It never works out the way people expect.
The Aircraft Maintenance Problem
Aircraft maintenance has evolved as the aircraft systems themselves have evolved. The hand tools improved. Diagnostics became computerized. Manuals went digital. Procedures became well documented. AI systems can help interpret telemetry from the aircraft. Given all that progress, do we still need trained aircraft mechanics? Of course.
Modern aircraft are extraordinarily complex systems. A commercial airliner contains millions of parts and thousands of interconnected subsystems. Diagnosing a problem is not simply a matter of having the right tools or following a checklist. It requires experience. It requires judgment. It requires understanding how those systems behave under real operating conditions.
The tools help. The manuals help. The diagnostic systems help. But none of those things replace the expertise of the people responsible for maintaining the aircraft. No airline would ever suggest that improved tools eliminate the need for trained mechanics in favor of having the gate agent do repairs (sorry, no offense to gate agents).
Yet that is very close to the argument the software industry is now making about itself. Apparently we can finally get rid of those pesky software developers?
DIY vs Professional Systems
Before going further it’s worth clarifying something important. I’m not talking about hobbyists. I’m not talking about someone experimenting with a small application, building something for personal use, or exploring a new idea. People should absolutely do those things. Some of the most interesting ideas in computing have come from exactly that kind of experimentation.
But professional software development is a different category entirely.
Professional software is not a hobby project. It is a product. It is something customers rely on. It processes payments, stores sensitive information, manages infrastructure, and increasingly operates systems that people depend on every day. Once software crosses that line, the expectations change.
Customers assume the system behaves correctly. They assume it will continue to behave correctly as it evolves. They assume the people building it understand how the system actually works. Those expectations are not unreasonable. They are the basic conditions of professional engineering. And that is where discipline and expertise become unavoidable.
Code Was Never the Hard Part
One of the longest-standing misconceptions about software development is that writing code is the difficult part of the job. It never was. Typing syntax into a machine has always been the least interesting part of building a system. The difficult work lies elsewhere: deciding how the system should behave, determining how its components interact, and ensuring that the system remains understandable as it grows in complexity.
Those questions require design decisions, careful reasoning, and a clear understanding of how changes propagate through a system over time. They are engineering problems, not coding problems.
Reducing the effort required to produce code does not eliminate those problems. It simply allows people to produce larger and more complicated systems more quickly. The delusion is that this is a productivity gain. It's not. Not yet. It has shifted the burden elsewhere. Just consider code review, and the cognitive load required to actually deal with all of the code that someone can now generate. That's ultimately a bigger drain on productivity than writing the code ever was. And if the underlying behavior has not been understood clearly enough, the additional speed merely accelerates the moment when the complexity becomes unmanageable and the behavior turns out to be wrong.
We’ve Seen This Before
In the 1990s we heard something similar about tools such as Visual Basic. The promise was that programming had been democratized and that software development would no longer require specialized expertise. Anyone with a useful idea could now produce an application.
There was some truth to that claim. Visual Basic enabled many applications that might never have been written otherwise. But it didn't eliminate the need for engineering discipline.
As systems grew larger and more interconnected, organizations rediscovered something important: producing software artifacts is not the same thing as engineering reliable systems.
What we are seeing today is the same pattern again, only amplified. Instead of lowering the barrier to building applications, large language models lower the barrier to producing code itself.
From that has emerged the seductive belief that expertise is no longer necessary.
The Alignment Problem
Up to this point the hype sounds like reality. Better tools. Faster output. Less friction. But every wave of enthusiasm in this industry eventually runs into the same problem. It isn’t a tooling problem. It isn’t really a productivity problem either. It’s a systems problem.
Reliable software depends on something that most people outside engineering rarely talk about: alignment. A system begins with an idea about how something should behave. That idea is written down as a specification. Engineers translate that specification into tests and into production code. For the system to remain reliable over time, those three things have to stay aligned.
The specification describes the behavior. The tests verify it. And the implementation actually performs it. When those three drift apart, the system slowly begins to lose its integrity.
Specifications describe behavior that the system no longer implements. Tests verify fragments of behavior but miss the rest. Engineers who arrive later are forced to infer how the system really behaves by reading code that may or may not reflect the original design.
At first that seems manageable. A few educated guesses here and there. But over time the guesses pile up. Eventually the system becomes something nobody really understands anymore.
In my whitepaper Engineering Alignment, I describe this phenomenon as spec drift. Spec drift is exactly what it sounds like: the description of the system and the system itself gradually move apart.
Sometimes the code changes and the specification doesn’t. Sometimes the specification evolves but the tests remain frozen. Sometimes the behavior shifts incrementally until nobody can say with confidence what the original intent actually was.
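To make the idea concrete, here is a tiny, entirely hypothetical sketch. The function name, the fee policy, and the history behind it are invented for illustration; the point is only how the three artifacts drift apart:

```python
def late_fee(days_overdue: int) -> float:
    """Spec (v1): charge $1.00 per overdue day, capped at $10.00."""
    # The cap was silently removed in a later change (a business
    # decision), but the docstring above -- the spec -- and the test
    # below were never updated. Spec, test, and code have drifted.
    return days_overdue * 1.00

def test_late_fee():
    # This test encodes only a fragment of the spec: small inputs
    # never reach the cap, so it passed before the drift and still
    # passes after it, proving very little.
    assert late_fee(4) == 4.00

test_late_fee()
```

An engineer who arrives later and reads the docstring will believe the cap exists. One who reads the test learns nothing about it either way. Only the code tells the truth, and nothing flags the disagreement.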
However it happens, the result is the same. The system loses alignment. And once that happens, reliability rarely survives for long. You can read more about this problem here: https://robenglander.com/writing/engineering-alignment/
AI Accelerates the Drift
Large language models dramatically accelerate the production of code. That is their greatest strength. It is also where the danger appears.
When code can be produced faster than the engineering discipline surrounding it, the forces that create spec drift begin to accelerate. Changes that once required careful thought and manual implementation can now appear in seconds. Entire sections of a system can be rewritten before anyone has asked whether the behavior still corresponds to the specification.
The code usually looks reasonable. It compiles. It reads well. It might even pass the existing tests. But the alignment that once governed the system may already be gone. What appears to be productivity can quietly become the ability to move toward misalignment faster than ever before.
Where AI Actually Helps
None of this means large language models are a mistake. They are remarkable tools, and used thoughtfully they can dramatically improve the way engineers explore and design systems.
Language models are exceptionally good at helping engineers reason about problems, explore design alternatives, summarize complex systems, and generate drafts that accelerate the early stages of implementation.
Where they struggle is in the areas that require strict discipline and consistency over time. Maintaining alignment between specifications, tests, and implementation remains an engineering responsibility. No tool can replace that responsibility, although many tools can help support it.
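One small example of tooling that supports, without replacing, that responsibility: Python's standard-library doctest module runs examples written inside the docstring as tests, so a fragment of the spec becomes executable and drift fails the build. The function and policy below are the same hypothetical ones as before, shown here in their aligned form:

```python
import doctest

def late_fee(days_overdue: int) -> float:
    """Charge $1.00 per overdue day, capped at $10.00.

    >>> late_fee(4)
    4.0
    >>> late_fee(30)  # the cap applies
    10.0
    """
    return min(days_overdue * 1.00, 10.00)

# Running the docstring examples checks spec against implementation;
# if someone removes the cap, the second example fails loudly.
failures = doctest.testmod().failed
```

Mechanisms like this don't make the engineering judgment for you; they make the disagreement between spec and code visible the moment it appears.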
The real opportunity lies in using language models in ways that strengthen the engineering process rather than quietly replacing it.
Conversational Software Engineering
One of the more interesting possibilities opened by language models is that parts of software engineering may become more conversational. For decades the tools we used to design systems were rigid. Specifications were documents. Architectures were diagrams. The reasoning that led to those artifacts often disappeared into meetings and hallway conversations.
Language models change that dynamic. Engineers can explore ideas interactively, test assumptions, and work through designs in ways that feel much closer to natural conversation. That ability is genuinely valuable. But conversation is not engineering.
Conversation is how ideas are explored. Engineering begins when those ideas are captured in a form that can be validated, tested, and maintained. The challenge for the next generation of engineering tools will be learning how to bridge those two worlds without losing the discipline that complex systems require.
Expertise Still Matters
Professional software still requires engineers who understand how the systems they build actually work. Tools can accelerate development, but they do not eliminate the expertise required to design, reason about, and maintain complex systems. Right now the industry seems dangerously close to forgetting that.
LLMs are remarkable tools. They can make experienced engineers far more productive. But they do not replace the engineering discipline required to build reliable systems.
Let’s use these tools effectively, not worshipfully.