Artificial Intelligence (AI) took the world by storm. Within months, tools that once required teams of specialists became accessible to anyone with a browser. The promise is intoxicating: faster creation, fewer barriers, and unprecedented productivity. But humans are predictable creatures who gravitate toward the shortest path, and when something looks too good to be true, it usually is.
The reality, as always, sits somewhere in the middle.
People can now produce code, content, designs, and analysis at a pace that was unthinkable five years ago. But "more" is not the same as "better," and the sheer volume is becoming overwhelming. We are drowning in generated content while struggling to distinguish signal from noise.
But while we’re creating more, faster, we’re skipping a step that still matters. AI can generate the output, but it cannot tell you whether the output is right. Whether it’s a blog post, a financial model, or a system architecture, someone still needs to understand what’s being built and why.
AI does not understand your business constraints, your users, or the trade-offs between security and speed. A developer using AI effectively still needs to know how components should communicate, how data should flow, and what "good" looks like for their specific problem. The AI accelerates the building; the human still provides the blueprint.
Our role has shifted from individual musician to conductor. This is genuinely powerful: the hours saved on mechanical typing and boilerplate are real. But conducting an orchestra still requires deep musical knowledge; you cannot wave a baton at an orchestra if you do not understand the score.
And this is where things start to fracture.
A new generation of builders is emerging: people who use AI to construct things they don't fundamentally understand. The code works and the app runs; but ask them why it was built that way, whether it's secure, whether it will scale, or why a small change touched 13,000 lines of code, and at best you get a vague answer. At worst, silence.
Take car manufacturing as an example. What the industry experienced was not just technological progress; it was a system driven by regulation, cost engineering, and planned obsolescence working in concert. Consider emissions laws: manufacturers introduced features like automatic stop-start to reduce pollution, but that same feature accelerates engine wear. Over time, components have become cheaper to manufacture but more expensive to buy. Cars have become disposable commodities, and we’re encouraged to replace them every few years, keeping the cycle turning.
With cars, this happened by design, and it was done to us. Software is different. The code is right there, and we can still choose to keep some control over these systems. You can read it, trace it, and understand exactly what every byte does if you want to. Nothing is hidden from you. That's what makes this moment interesting: control hasn't been taken away, but the question is whether people will continue exercising it.
On paper, society benefits: cleaner air, safer vehicles, and better fuel economy. In practice, the individual pays more for something that does not last as long, while losing the ability to maintain what they own. AI is heading down the same road.
Seriously, where are we racing to?
AGI? Superintelligence? Full automation? The industry talks about these milestones with breathless enthusiasm. But nobody has a coherent answer for what happens to the eight billion people who need to eat, pay rent, and find meaning in their days when we get there.
The bullish crowd insists AI will replace developers, writers, analysts, and designers: entire professional categories wiped out and rebuilt. I do not buy it. Not in its current form, and likely not in the next several iterations. You still need a human in the loop, the conductor, someone who understands what needs to be built, why it matters, and whether the output is actually fit for purpose. But that does not mean the impact will be small.
What I see happening is not replacement but compression. AI will not eliminate roles; it will eliminate headcount.
An engineering team that needed fifteen people now needs five. A content operation that employed twenty writers now needs six plus better tooling. The work still exists. The problems still need solving. But fewer humans are required to solve them.
Instead of four layers of management and execution, you need one or two. Instead of a junior, a mid-level, a senior, and a lead, you need one experienced person with AI augmentation who can cover the full range.
This sounds like efficiency, and it is. But efficiency and human welfare are not the same thing.
Here is the part that should keep people up at night: the global economy is predicated on a continuous cycle in which people work, earn, and spend. That assumption of broad adult participation in the labour market underpins every operating economic model, whether capitalism, a mixed economy, or a social democracy. Taxation, housing, healthcare, pensions, and overall consumer demand all rely on high levels of employment.
Now compress the workforce by thirty, forty, or fifty percent over the next two decades. The work will still get done, and the output might even improve, but the people who used to do that work still need to live: they need housing, food, healthcare, and purpose.
The economic system that runs the world cannot adapt fast enough. It is too large, too slow, and too deeply embedded in every institution, law, and social contract we have built over centuries. We are not talking about retraining programmes and upskilling initiatives. We are talking about a fundamental mismatch between how value is created and how it is distributed.
Efficiency advocates rarely address the most uncomfortable contradiction of all. As fewer people work, fewer people earn, and as fewer people earn, fewer people can afford to buy the very goods and services that a leaner, more productive economy generates. The gains become self-defeating. What is the point of doing more with less, of squeezing every drop of productivity from a system when the market of buyers is quietly hollowing out beneath you? Efficiency without broad participation is not progress. It is simply a very sophisticated way of producing things that fewer and fewer people can afford.
There is another dimension that rarely makes it into the AI discourse: humans were never designed to work like this in the first place.
The eight-hour workday, the five-day week, and the forty-year career are extraordinarily recent inventions in the span of human history. For most of our existence, work was seasonal, task-based, and interwoven with community and rest. The industrial model of continuous productivity is an aberration, not the norm.
Energy for this kind of work has been declining for years. Burnout is endemic. COVID did not create this trend, but it ripped the curtain back. Millions of people discovered that the relentless pace was not just unpleasant but unnecessary. Remote work proved that output did not require presence. Reduced hours proved that productivity did not require exhaustion.
AI arrives into this context. A workforce that is already questioning why it works the way it does is now being told that much of that work can be automated. The psychological impact of this is enormous, and almost entirely unexamined.
None of this means AI is bad. The technology is genuinely transformative, and the benefits are real.
But we need guardrails, and not just technical ones. We need honest conversations about the economic transition, about what work means when machines can do more of it, and about how we distribute the prosperity AI creates rather than concentrating it among those who own the systems.
The car analogy holds: the technology got better, but the benefits flowed upward while the costs flowed down. Cleaner air for everyone, but unaffordable repairs for the individual. AI risks the same pattern at a civilisational scale.
We are in the early innings of a transformation that nobody fully understands, including the people building it. The honest position is not breathless optimism or existential doom; it's cautious engagement with eyes wide open.
Use AI.
Build with it.
Let it save you the mechanical hours.
But do not lose the underlying skills. Do not stop understanding what you are building and why. Do not assume that because something works, it works well.
And demand better answers from the people racing us toward a future they cannot describe. "It will be amazing" is not a plan. "We will figure it out" is not a strategy. The stakes are too high and the pace is too fast for that kind of handwaving.
The question is not whether AI will change everything. It will. But the people racing us toward that future have a financial interest in the speed, not the landing. AWS, Microsoft, and OpenAI measure success in adoption curves and market cap, not in whether the economic transition is survivable for the people it displaces. Shaping it requires pressure from outside the room where those decisions are made, because the people inside that room are doing just fine.