"We've been vibe coding since the stone age": Do we still need humans in the loop?
The other day I read this line, buried at the end of a Steve Yegge article on the future of coding agents, and it’s been haunting me ever since:
People still don’t understand that we’ve been vibe coding since the Stone Age. Programming has always been a best-effort, we’ll-fix-shit-later endeavor. We always ship with bugs. The question is, how close is it? How good are your tests? How good is your verification suite? Does it meet the customer’s needs? That’s all that matters. Today is no different from how engineering has ever been. From a company’s perspective, historically, the engineer has always been the black box. You ask them for stuff; it eventually arrives, broken, and then gradually you work together to fix it. Now the AI is that black box.
Is it true? Is the value that software engineers provide as the humans in the loop really that negligible? Should we just run coding agents 24/7 and get out of their way?
The end of software craftsmanship?
For a long time, it has been well-known that the process of actually developing quality software – quality from the business’s perspective, not the craftsman’s – is muddled and nebulous.
Things like agile methodologies have helped us get closer, but the process has still fundamentally been one of trial and error and incremental improvement. Maybe agile sprints were just a human-speed abstraction that can now be replaced with “re-prompt the coding agent to try again” – a process that takes minutes to hours, not weeks.
In another article, the same author compares this new version of running multiple coding agents 24/7 to unloading fish at the docks:
Work becomes fluid, an uncountable that you sling around freely, like slopping shiny fish into wooden barrels at the docks. Most work gets done; some work gets lost. Fish fall out of the barrel. Some escape back to sea, or get stepped on. More fish will come. The focus is throughput: creation and correction at the speed of thought.
Some bugs get fixed 2 or 3 times, and someone has to pick the winner. Other fixes get lost. Designs go missing and need to be redone. It doesn’t matter, because you are churning forward relentlessly on huge, huge piles of work, which Gas Town is both generating and consuming. You might not be 100% efficient, but you are flying.
If I’m the (hypothetical) manager of a fishing company, I can see the argument for how I don’t really need artisans who hand-deliver each fish to the correct barrel with care and compassion. I really just want the fish dumped out as quickly as possible so the ship can head back out to collect more, and the next ship can berth to unload.
Efficiency at this scale has no concern for a few fish “falling out of the barrel” because they’ll be quickly replaced by the higher velocity of the larger system.
Is this where we are headed?
My intuition still says no.
While I can understand these arguments on some level, I think software is different. A few missing fish on the docks won’t cause a product outage or angry customers, because individual fish are interchangeable. Individual pieces of software work are not: a single dropped bugfix or half-finished feature can break the product for someone.
I also think that organizations will still operate at human speed. Someone still has to decide which features should be built and what their desired behavior should be. Someone still has to talk to customers and write the specs. Someone still has to make sure the service stays online and meets its SLAs.
It still feels like there is some intrinsic, natural limit to an organization’s velocity that is tied to the quality and quantity of the software engineers it employs. Coding agents might be able to 5-10x that velocity, but it’s very hard to imagine they could meaningfully or sustainably 1000x it.
The other thing that comes to mind is the IBM internal training quote from 1979, recently resurfaced by Simon Willison:
A computer can never be held accountable
Therefore a computer must never make a management decision
Who is to blame when a coding agent creates a security vulnerability, breaks an important feature, or takes down production? Certainly customers will want to blame someone. This accountability gap is a central reason people want to keep a human in the loop.
This is similar to what humans are currently trying to figure out with self-driving cars. If the car autonomously “decides” to take some evasive action to avoid a pedestrian and ends up crashing and killing the driver, who is responsible for that death? (Or if it decides not to take evasive action and kills the pedestrian instead?)
It would require a wholesale restructuring of society and our laws and even our moral systems to adjust to this sort of thing. Insurance contracts already separate out coverage for damage due to “acts of God” from damage due to human negligence or recklessness. Would we need to create a new category for “acts done by a human-created-but-fully-autonomous system which cannot be understood”?
It’s very difficult to imagine how a society of human beings could adapt to that new category of culpability.
A brief history of the Luddites
All of this reminds me of something I read recently about the role of the Luddites during the early industrial revolution.
Quoting extensively from the fascinating book “Peak Human: What We Can Learn From History’s Greatest Civilizations” by Johan Norberg:
There are examples of well-paid cottage workers who lost jobs, but sometimes a previous privileged position had been only a temporary phenomenon. The ‘Luddites’ who destroyed textile machines in the early 1810s were brutally suppressed by the government. History has seen them either as reactionary enemies of technological progress or as unfortunate workers who protested against being made redundant in the only way they could. None of these versions captures the complex story.
In The Fabric of Civilization, Virginia Postrel explains that the Luddites were elite craftspeople, handloom weavers who enjoyed a golden heyday of plenty of job opportunities at high pay. Ironically, this had been made possible by a previous wave of disruptive automation. Machines that mechanized spinning in the late eighteenth century had made many workers redundant, but it also supplied weavers with an abundant supply of once-scarce weaving yarn to weave cloth with. Since it took time to educate skilled weavers, those who already had the skills temporarily got very enviable working conditions.
A generation later, much of their work could in turn be automated by the new power looms, and this is the moment when they protested, sometimes by smashing machines. In other words, the Luddites were not principled enemies of technology but defenders of a privileged livelihood owed to an earlier and more disruptive technology.
From this we learn that disruption always hurts some groups of workers and benefits others, and that the best way to get the benefits is to ride the wave of innovation. Luddites actually managed to delay wool-shearing technology in much of the West Country (England’s southwest).
If I’m a software engineer who is skeptical of autonomous coding agents running 24/7 with little to no oversight – producing piles of code like dockworkers slopping fish into barrels – am I a member of the new Luddites? Is it only a matter of time before this way of producing software becomes the norm?
I am still wrestling with this, and trying to learn how to wield these new tools so as not to be replaced by them.
Maybe the doubt I’m feeling is justified, as I’ve laid out already. But maybe it just comes from being a human who needs to feel like I add value to society to earn a living, and that the experience I’ve developed over the last decade and a half as a code craftsman still means something… 😰