The Last 10%: Where Human Expertise Still Matters

Every few months there's a fresh prediction that AI will replace developers. If you're commissioning a website or hiring someone to build you a product, you might reasonably wonder what this means for you. Will projects get cheaper? Will you still need a specialist? Can you just describe what you want to a tool and get working software out?
Here's what I've noticed after two years of using AI coding tools seriously: they've made me dramatically faster at the 90% of the job that was never the hard part. The remaining 10%, the part that decides whether a project succeeds or quietly fails, still takes exactly as long as it ever did.
That 10% is where I now spend most of my time. It's also where the value concentrates for anyone paying for custom software or a website in 2026.
Let me explain what I actually mean by the 10%, and why it's not going anywhere.
What's Got Cheaper
First, some honesty about what's changed.
A lot of traditional development work has become dramatically cheaper. Boilerplate code, routine refactoring, translating intent into syntax, navigating unfamiliar APIs, generating tests, writing documentation. Things that used to take hours now take minutes. A developer using these tools well can produce significantly more working code in a day than they could two years ago.
If you've been told by vendors that AI is transforming their productivity, they're right about the mechanical parts of the job. The gains are real.
But here's the thing: the mechanical parts were never what made your project succeed or fail. The work that decides outcomes, the decisions and judgement calls and hard conversations, hasn't got any easier. If anything, it's got more important, because there's less busywork to hide behind.
What Still Requires Human Expertise
Understanding the Problem, Not Just the Request
AI answers the question it's asked. It's remarkably good at this.
The issue is that most requests are subtly wrong. A client asks for a feature that sounds reasonable but won't solve their actual problem. A stakeholder asks for a report that will take a week to build and then go unread. A team asks for an integration that duplicates something the platform already does for free.
Experienced developers spend a lot of time pushing back on requests. Not because they're obstructive, but because they've learned that the first version of any request is rarely the right version.
AI won't push back. It will cheerfully build whatever you describe. That's useful when you know exactly what you want. It's expensive when you don't.
Judgement About Trade-offs
Every decision in software involves trade-offs. Faster vs simpler. More flexible vs easier to maintain. Perfect for today vs extensible for tomorrow.
AI will happily present you with three approaches and describe the trade-offs of each. What it can't do is tell you which trade-off is right for your team, your budget, your maintenance capacity, and your tolerance for risk.
Those decisions require knowing things that aren't in the codebase:
- How skilled is the team that will maintain this?
- How likely are the requirements to change?
- What's the cost of being wrong about the future?
- Is this code something you'll care about in two years, or a throwaway?
I've seen AI-assisted projects where every individual decision looked defensible but the cumulative result was a system nobody could maintain. The AI made each choice in isolation. Nobody made the choices together.
Context That Isn't in the Code
Every codebase has stories. Why a particular field was built in an unusual way (because the obvious approach caused performance problems three years ago). Why this workflow doesn't use the "clean" modern pattern (because it broke for authenticated users in ways that only showed up in production). Why a plugin is pinned to an old version (because updating it would break something for one editor who happens to be the chair of the board).
None of that is in the code. It's in the heads of the people who built the site, and in scattered notes in old project management tools.
AI can read the current state of a codebase. It can't read the history of decisions that produced it. When it suggests a "cleaner" refactor, it's often suggesting you undo a fix someone made for a good reason nobody wrote down. Untangling that takes experience and institutional memory.
Knowing When the Answer Is "Don't"
The most valuable advice I give clients is often "don't build this."
Don't build the feature, because nobody will use it. Don't migrate to the new platform, because your current one is fine. Don't add the integration, because the ongoing maintenance cost exceeds the benefit. Don't rebuild the site, because optimising the one you have would solve the actual problem for a tenth of the price.
AI won't say "don't." It's trained to be helpful, and in its world, helpful means providing what was asked for. That's a useful bias when you already know what you want. It's a dangerous one when the most helpful answer is "let's reconsider what we're trying to do here."
Reading the Room
Building software for clients is half technical, half communication. A good developer needs to understand what you're really worried about, not just what you're asking about. They need to know when to say "that will cost more than you think" and when to say "that's a straightforward change." They need to translate between technical realities and the people paying for the work.
AI can draft an email. It can't tell that you're anxious about an internal promotion and need this project to succeed for reasons you haven't mentioned. It can't read the long pause on a video call and know the real objection is coming.
Taste
This one is harder to articulate, but it matters.
Some code is well-designed. Some is technically correct but a mess to maintain. Some solutions are elegant. Some are over-engineered. Some are cleverly simple. Some are lazy.
Distinguishing these things requires taste, and taste comes from experience. AI-generated code often looks right because it pattern-matches the average of a million similar solutions. Senior developers look at the same code and say "that works, but it's going to cause problems in six months" or "that's more complicated than it needs to be."
You can't get taste from a tool. You get it from writing a lot of code, maintaining it, and watching how it ages. It's one of the things you're paying an experienced developer for.
What This Means If You're Paying a Developer
If you're hiring a developer or commissioning a website in 2026, you're not paying for typing speed. You haven't been for a while, but the tools have made it unmistakable.
You're paying for:
- Someone who can tell you when you're asking for the wrong thing
- Someone whose judgement you trust on trade-offs that don't have a right answer
- Someone who can read your codebase and understand its history
- Someone who will say "don't build this" when that's the right advice
- Someone who can look at AI-generated code and spot where it's subtly wrong
Those things aren't quantifiable on a job spec. They're the difference between a project that succeeds and one that technically ships and then quietly rots.
What Hasn't Changed
The fundamentals of a good software project haven't shifted.
You still need someone who understands problems before solving them. You still need someone making sensible judgement calls about trade-offs. You still need someone maintaining what gets built. You still need someone communicating clearly with the people paying for the work.
What's changed is where the hours go. The grunt work is cheap now. The thinking is where the value is, and always was.
Anyone who tells you AI will replace developers is either selling something or hasn't used the tools enough to know their limits. Anyone who tells you AI is all hype and won't change anything hasn't used them either.
The truth, as usual, is in the middle. AI tools are genuinely transformative for the mechanical parts of development. The rest of the job, the part that actually decides whether your project succeeds, is still a human job.
What to Look For
If you're commissioning development work this year, here are the things worth paying attention to.
Don't assume AI has made the work trivial.
A developer using AI tools can get more done in a day. That doesn't mean projects have got dramatically cheaper overall. The bottleneck has moved from writing code to making decisions, and decisions still take the time they've always taken. If a quote seems unrealistically low because "we use AI," ask what's being skipped.
Be wary of AI-generated specifications.
If you or someone on your team uses AI to generate a detailed feature spec, treat it as a starting point, not an answer. The spec will describe exactly what was asked for, not necessarily what you need. A good developer will push back on it before building anything. If they don't, that's a warning sign.
Ask "should we?" before "can we?"
The most expensive mistakes I see are features that work perfectly and solve the wrong problem. Before committing to a build, make sure someone has asked whether it's the right thing to build. That question costs almost nothing to ask and saves enormous amounts of money when the answer is no.
Hire for judgement, not output.
When you're choosing who to work with, the questions to ask are: Does this person push back when they disagree? Can they explain why they'd do something a particular way? Can they tell you about a time they talked a client out of something? Those signals matter more than any portfolio or CV claim about AI tooling.
The best value is often in advice, not deliverables.
Counter-intuitively, the developers worth paying are often the ones who help you build less. An hour of honest advice that saves you a six-week project you shouldn't have started is worth more than six weeks of someone happily building the wrong thing.
The craft of building software hasn't gone anywhere. It's just concentrated into the parts that were always hardest, and the parts that mattered most to you all along.
Want a second opinion?
If you've got a build you're considering, an AI-generated spec to evaluate, or a project you're not sure is the right one to start, I'm happy to take a look. Sometimes the most valuable hour you'll spend is the one that talks you out of the wrong thing.