“Vibe coding” is one of the current “in-words” in software development. It’s defined as such…
Vibe coding (also written as vibecoding) is a recently-coined term for the practice of writing code, making web pages, or creating apps, by just telling an AI program what you want, and letting it create the product for you.
Merriam-Webster
It’s instructing an AI (or, rather, an LLM, as the Wikipedia article on the subject clarifies) to write code for you.
Why do I hate it so much?
If we were talking about AI writing a recipe book or a novel, it wouldn’t get its own “hip” phrase. Many people would simply call it “stealing” (see video game voice-over artists, actors, book authors, etc. for citations).
LLMs don’t miraculously know how to create code – they learn from what’s already available online. Do you think they’ve learnt from closed code such as Microsoft software, or anything from Apple? No. They’re taking advantage of the generosity and sharing spirit of the open-source community.
So, if you “vibe code” something for WordPress, how do you think the LLM knows all the tricks of WordPress? All the right functions and hooks to use? It has trawled open-source code and documentation to learn them. But without knowing the context and properly understanding it, it generates poor-quality code too.
And onto that we stick a trendy phrase, “vibe coding”. If you have a cleaner for your home, do you refer to that as “Oh yeah, I was vibe cleaning today” – i.e. getting someone else to do the work for you?
I like the second sentence in the Merriam-Webster definition…
In vibe coding the coder does not need to understand how or why the code works, and often will have to accept that a certain number of bugs and glitches will be present.
Translated: the result is likely to be shit, and you probably won’t know how to fix it.
But Wikipedia defines this part a little differently…
The LLM generates software, shifting the programmer’s role from manual coding to guiding, testing, and refining the AI-generated source code.
Wikipedia
So, it’s shit and you’ll spend a long time fixing it.
This also suggests the person doing it is proficient enough to properly test and debug – all the AI is doing is taking away the need to write the code in the first place. If that’s the case then I’d suggest this is a false benefit: based on what I’ve seen, you’d spend as much time, if not more, fixing and debugging the code as you would have spent writing it correctly yourself in the first place.
But this raises another question. What kind of coder wants to avoid the pesky issue of writing code and concentrate on debugging and testing? Someone who isn’t very good at coding, or doesn’t like it, I’d suggest. Is this the right person to be checking over the AI’s results and then supporting them?
What is vibe coding then? Let me share my own, personal definition…
Getting LLMs, which have learnt from people who generously shared their code online, to write code for you – often of poor quality, with no accountability, and often with the person requesting it having no understanding of what could be wrong with it, and hence unsupportable as a result. At best, the requester does code but is so uninterested in the art of coding that they’d rather debug poor-quality AI output than write the code themselves.
Is my definition fair? I’d say it’s actually pretty much what Merriam-Webster and Wikipedia are defining it as.
I touched upon something else in my definition too, “the art of coding”. “Code is Poetry” has long been the tagline for WordPress and it’s something I passionately believe in. There is a genuine artistry to good quality code – how it’s structured, the logic of it, etc. You’re not going to see it in a gallery any time soon but another developer will acknowledge and appreciate good code in the way you would a piece of art.
What the LLMs are producing is not art, but it does steal from it. Which is why I initially compared this to how AI is stealing from authors, performers and, yes, artists. This is no different. And, as in those other examples, the AI code is usually inferior, so it sullies the name of the original work it’s stealing from.
When I looked at a recent example of AI generating code, I noted the following comment from the person who’d done it, talking about WordPress plugin developers…
WordPress is notorious for having inexperienced developers that don’t know a thing or two about security or don’t even bother.
Yet, when I checked a simple plugin generated by the same LLM, it had 13 security vulnerabilities in it – any one of which would see it rejected by the very plugin directory that person was berating. Will a world in which inexperienced (or zero-effort) developers use AI to create poor-quality code enhance or diminish the work of those who write their own code and pride themselves on good-quality results? I think we know.
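I won’t list the plugin’s actual issues here, but to show the sort of flaw a review rejects on sight, here’s a minimal, hypothetical sketch – in Python with SQLite as a stand-in, not the plugin’s real code – of one classic class of vulnerability: attacker-controlled input interpolated straight into a query.

```python
import sqlite3

# Stand-in database with a single row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, title TEXT)")
conn.execute("INSERT INTO posts VALUES (1, 'Hello')")

user_input = "1 OR 1=1"  # attacker-controlled "post ID"

# Vulnerable: the input is pasted straight into the SQL, so the
# attacker's "OR 1=1" becomes part of the query and matches every row.
leaked = conn.execute(
    f"SELECT title FROM posts WHERE id = {user_input}"
).fetchall()

# Safe: a parameterised query treats the input as a value, not as SQL,
# so the malicious string matches nothing.
safe = conn.execute(
    "SELECT title FROM posts WHERE id = ?", (user_input,)
).fetchall()
```

Whatever the language, this is the pattern reviewers look for: input trusted where it should have been sanitised or parameterised.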
Eventually LLMs will get better and, hopefully, one day they’ll create code that’s as good as anything a really good developer can write. But that’s not happening yet and, even in that world, it doesn’t change where the learning has come from.
Until then, we need two things…
- An insistence that all AI-generated code is marked as such
- A way to prevent our code from being used by LLMs for learning
“Vibe coding”. It’s just a trendy, hip name to hide the reality of what it actually is. And that’s why I hate it.