
Change as a Commodity

I’ve been trying to answer a question about AI and software development for a while now, and I don’t think I’ve been looking far enough for an answer.

The question I started with: what happens to the developers? (and their careers?)

When I talk to friends who have been building software for fifteen or twenty years and watch them use Google Antigravity or Claude Code (I hope no one is using xAI Grok Code) to ship in an afternoon what used to take weeks, or when I read stories of people one-shotting implementations that would have needed a small team and weeks of back-and-forth, it’s hard not to wonder what this means for the people doing this work today.

But that’s mostly a generational question, about what happens as one way of working gives way to another. The one I keep thinking about is what comes after that transition, and how we can navigate it in a psychologically safe way without losing a lot of talented people along the way.

I’ve been writing code for more than twenty years, so I understand the concern about a craft disappearing. A new generation of developers is entering the field, and they’ve never coded any other way. They don’t have ten years of production incidents behind them. They’ll still get paged at 2am because a slow-burning regression quietly degraded checkout conversion for three days before anyone noticed. But how they fix it, how fast they fix it, and how much they understand about what caused it when most of the system is a black box to them will be completely different. All that knowledge, the kind you accumulate through actual failure at scale, they’re going to pick it up differently. And I would never hold that against them or their newfound way of working, because it’s their reality.

At Delivery Hero we spent years getting availability and resilience right. Not because someone handed us a checklist to perfect resilience, but because things broke and we fixed them and wrote it down and did it again. At that scale, an outage doesn’t just cost you revenue for the hour it’s down (which is a lot at that scale!), it costs you customers who had a bad experience and never came back. The blog posts, the postmortems, the conference talks that carry that kind of knowledge will eventually thin out, because fewer people will be in the position to write them. I’ve benefited too much from that kind of knowledge over the years to pretend it doesn’t make a difference when it thins out.

Stack Overflow served tens of millions of developers as a knowledge base for over a decade. When it started losing relevance, barely anyone spoke up, because everyone had moved on and was getting their answers from AI models instead. The knowledge doesn’t disappear, it shifts, but the way we access and preserve it changes, and we’re not paying enough attention to that. Much of that knowledge now lives inside AI models, and whoever trains those models shapes what gets preserved. There’s already evidence that LLMs favor React over Vue.js simply because there was more React content in the training data. That’s structural bias, built into the tools that increasingly shape how software gets built.

To understand what might be happening, it helps to look at the second half of the twentieth century, when the Apple II and Commodore PET arrived in 1977 and the question was what would happen to the specialists who had built careers on institutional hardware that was locked away in universities and large corporations with deep enough pockets to afford it. Mainframes, COBOL, and Fortran didn’t disappear; they retreated into the specific domains where they made sense. A generation of people who had never thought of themselves as developers started writing code not because they were trained, but because the machine was suddenly within reach in their homes. New languages emerged for new contexts like hobby programming, new communities formed around them, and the total volume of software created grew by orders of magnitude. The specialists didn’t get replaced; the population of people who could build things just expanded dramatically around them.

AI in software development follows the same pattern, but the scale is different. The specialists of the mainframe era kept their place because personal computing created new demand around them. This time, the new group isn’t just growing alongside the specialists, it’s taking over most of the volume. The remaining specialists will likely shrink into a niche, and the question is whether the next generation will even know they’re there. We’re moving from a few building software to everyone having access to build it, and we haven’t fully understood what that means yet.

The framing I keep coming back to: AI doesn’t just make coding faster, it makes change accessible and affordable.

I remember as a kid typing BASIC code into my graphing calculator just to play a game, code that a classmate had copied from a computer magazine and printed out from a floppy disk for me. That was what it took to run someone else’s code. Compare that to today, where you describe what you want in plain language and get working software back. Building something new, modifying something that exists, shipping a product, automating a process: all of these have historically needed people at every step. You needed people who could translate an idea into working software. That translation layer was expensive, slow, and bottlenecked by a limited talent pool. At previous companies I ran early career incubator programs specifically because we couldn’t hire fast enough from the outside. Coding bootcamps grew rapidly through the 2010s for exactly the same reason: demand for developers outpaced supply so severely that companies and individuals found ways to manufacture the skills from scratch. Even then, the gap between “we know what we need” and “it’s built” was measured in months. A straightforward feature could take weeks to spec, prioritize, and build, not because the engineering was hard, but because every team was competing for the same limited capacity, and it only got worse as more teams got involved.

We’re already seeing early signals of what happens when that changes. The first indicator isn’t coming from professional developers at all, it’s coming from people who never saw themselves as developers: hobbyists, founders, researchers, people with a problem and a clear idea of what they need. The volume of things being built by people who wouldn’t have called themselves developers two years ago has grown noticeably. You can see it in the amount of new SaaS products popping up and in how traditional SaaS stocks react whenever OpenAI launches something that enters their market. Recent data suggests that the majority of people using AI coding platforms today are not professional developers. Some of the fastest-growing software companies of the last two years are tools that let people describe what they want and get working software back.

📌 There’s significant AI bubble capital behind many of these valuations, and rapid growth reflects demand and investor appetite more than it reflects product quality or long-term viability.

For me it comes down to this: things are getting built that simply wouldn’t have been built before, by people who couldn’t have built them before, and that changes who gets to participate in creating software.

We’ve seen this pattern before. Let’s think about digital photography: professional cameras and film development were expensive and specialized, so photography stayed in the hands of professionals and dedicated hobbyists. When digital cameras and then smartphones made it essentially free, the volume of photos taken didn’t double or triple, it became practically uncountable and new platforms like Flickr or Instagram (the initial platform, not the ad- and influencer-infested one we have today) emerged. Access to a tool that was previously scarce doesn’t just serve existing demand. It reveals demand that was always there but had no way to express itself.

Change is becoming a commodity. Building and changing software is becoming as accessible and affordable as any other good.

The most common pushback in the engineering community, and a recurring theme in the eye-rolling commentary on social media whenever vibe coding comes up, is about resilience. Building software that actually holds up under load, that fails gracefully, that can be operated at scale: that isn’t something you one-shot. Payments, logistics, inventory, anything that runs a real business: these took decades of accumulated engineering judgment to build safely. Availability, fault tolerance, graceful degradation aren’t features you bolt on at the end. They’re the residue of hard lessons, and I know because I was involved in, or watched up close, some of those systems being built and great developers being formed along the way. But the risks we already accept in enterprise software are rarely spoken about with the same alarm, and I think the comparison deserves more honesty than it usually gets.

📌 Supply chain attacks through open source package registries have compromised production systems at large companies for years, and we accept that risk largely without question because it’s familiar to us and somewhat known. The risks of AI-generated code are newer and therefore feel larger, but they’re not fundamentally different in nature.

The resilience concern also contains a hidden assumption: that we’re talking about building at enterprise scale from the first commit. Almost no resilient system was born resilient. It got there step by step, through growth, through incidents, through specific pain. The companies we now think of as engineering benchmarks didn’t start with resilient architectures; they built them because things broke at scale and they had to fix them. The knowledge that came out of that is now part of what AI models learned from, accessible in a way it never was before. The maturation cycle doesn’t disappear with AI-assisted development, but the time it takes to move through it compresses. No company builds for millions of users on day one. They grow into that complexity, and the process of figuring it out as you go doesn’t change, it just happens at a different pace now.

I think it’s only a matter of time until institutional knowledge and engineering skill are so deeply built into agentic coding products that terms like “vibe coding” and “code slop” lose all relevance. The gap people point to today is closing faster than most expect; just look at how far we’ve come in such a short time.

📌 In the last eighteen months, agentic coding products have gone from simple autocomplete to multi-agent systems that plan, write, test, and review code autonomously, with persistent project memory that encodes architecture decisions and engineering standards. The financial incentive behind this is enormous: every business has a reason to reduce the cost of building and maintaining software, and that alone guarantees continued investment in making these tools better. The companies building these products have every reason to keep improving them, and their customers have every reason to pay for it.

What about quality next to resilience? The realistic scenario for a lot of AI-generated software is a short lifespan, and for a small startup shipping an MVP, that’s fine, often even desirable. You build fast, learn fast, rebuild. Most software in production today is already buggy and unreliable, let’s not pretend otherwise, and the way it gets fixed is through real users finding the problems. AI-generated software isn’t fundamentally different in that regard; the difference is the speed at which you can respond when something breaks. For larger organizations the calculus is harder: a supplier that ships something with an unclear maintenance horizon is a procurement risk, and established security and compliance frameworks already require assessing long-term viability, security posture, and support continuity. Those assessments will need to get sharper as the number of suppliers shipping AI-generated products with thin developer teams grows.

📌 Open source changes the longevity equation. If the code is open, the question shifts from “will this vendor maintain it” to “can we maintain it ourselves.” When AI handles much of the implementation work, a small internal team can realistically own a codebase that would previously have required a specialist vendor contract. Red Hat and Elastic already built businesses around this pattern: take open source, add compliance, support, and long-term maintenance. Someone is going to build this well for the AI era.

I find it hard to think through what happens to the people who use software when it becomes as invisible as running water. Think about how people use enterprise software today. Someone logging into a large ERP system has no idea how it’s built. The database structure, the years of business logic that went into it: completely invisible to them. They interact with the interface, and the engineering is a black box. When I install a game, I have no idea what technology was used to build it or how the graphics work (sometimes I can tell it’s Unreal, Unity, or Godot). I don’t care. The same with mobile apps, desktop tools, most of the software I use every day. Finished goods, where the engineering is invisible by design.

Most custom software will increasingly work the same way. You pick a platform or a composable set of components, configure and adapt, and you never see what’s underneath, so the black box grows. And if that happens, what is important shifts from knowing how to build things to being clear about the problem you’re trying to solve, and being able to recognize whether what you got actually solves it. Those are skills, but not the ones we’ve historically called software engineering. And that’s part of what makes change a commodity: when the building becomes invisible, knowing what to build is what counts.

A lot of developer identity has been built on being able to do something most people around you simply couldn’t, and I understand why that’s hard to let go of. People who have spent a decade getting very good at something don’t easily accept that the thing itself is becoming less scarce. But I think the discomfort is about identity, not about the value of the underlying thinking. The strategic, process-driven, risk-aware way developers approach problems doesn’t disappear when the syntax does. What disappears is the syntax as a barrier to entry, and that’s exactly what makes change a commodity.

📌 When software becomes a commodity, data and customers are what remain scarce. A team can duplicate an application in hours, but not the user behavior, the history, the patterns that only emerge over time. Software increasingly becomes the interface to data rather than the asset itself. One of our Heads of Engineering at roadsurfer shared this thought, and it resonated with me. To expand on it: customers are just as volatile. Clubhouse went from zero to ten million users in under a year, lost most of them just as fast, and today barely anyone remembers it existed. People don’t hesitate to adopt a product or drop it, and no amount of engineering changes that.

So back to the original question. It’s not just about what happens to developers. It’s about what happens when the ability to build things stops being a scarce skill and time stops being a limiting resource, when both become widely available. Every problem that goes unsolved today because the cost of building a solution is too high, every process that stays broken because no one can justify the engineering time, every idea that never gets tested because the team is already at capacity: that’s the surface area that changes. The same shift that lets a company move faster also lets a researcher in a field with no engineering budget build the tool they need, lets a small community solve a local problem that was never big enough for a startup, lets an individual act on an idea that would previously have required finding a technical co-founder or raising money.

The colleague from the quote about data and customers framed it in a way that stuck with me: the majority of the technical execution that used to define the job is becoming automatable, but the rest that remains, the judgment, the context, the ability to ask the right question and recognize a wrong answer, can now produce an order of magnitude more impact than it could before. For some people that’s energizing. For others it’s a genuinely hard thing to accept, and that deserves honesty, not just a shrug. The generational transition will be uncomfortable, and the fears about quality and the skepticism from established developers and teams are not entirely wrong. Some of it is legitimate. But the same concerns were raised about every technology that lowered the barrier to access, and in every case the access won. Not because the concerns were wrong, but because when something becomes a commodity, demand finds a way. The question was never whether people wanted to take photos or to build software. The question was whether they could. And once the answer shifts to yes, the question of what gets built stops being a question of who has access to developers, and becomes a question of who has a clear enough idea of what needs to exist. That’s a different world, and it’s one that’s very hard to stop.

None of this changes because we want it to or because we don’t. What we can choose is how we move through it. The developers and builders who have spent years accumulating hard-won knowledge have something real to pass on. The people coming after them have a perspective that is just as valuable: they see what is possible without the weight of how things used to be done. That exchange is worth protecting. I think it is one of the most overlooked parts of this whole transition, and we are not doing nearly enough to support it. The shift will be easier for everyone if we treat it as something to navigate together rather than a problem each person has to work through alone.

This post was inspired by conversations with tech leaders and a post published by Jasper.


Topics: Software Development