Accelerator X

The hard truth about AI in the workplace that nobody selling AI wants to say

The hard truth

"It would have been faster to just write it myself. It produced a pile of garbage that didn't sound anything like me."

We've heard some version of that sentence more times than we can count. In workshops. In readiness surveys. In the kind of candid conversations that happen after the formal part of the session is over and people start saying what they actually think.

It's not a complaint about the technology. It's a complaint about the gap between what AI promises and what most people experience when they actually try to use it. And it's more revealing than it might first appear — because it tells us almost everything we need to know about why AI adoption in the workplace is stalling, and what it's actually going to take to fix it.


The numbers don't lie, but they do flatter

Last month, the UK Government published the results of a major research programme into AI skills — fifteen months of work, eleven reports, led by Ipsos alongside the Alan Turing Institute and the University of Warwick. The headline findings were striking, but not in the way the press release made them sound.

97% of UK adults have heard of AI. 73% have used or consumed it in the past month. Those are the numbers that made the news. They suggest a country rapidly getting to grips with a new technology. They suggest momentum.

Here's the number that didn't make the news: only 21% of UK workers feel confident using AI in their jobs.

We run an AI readiness survey before every workshop we facilitate. Our data from the last fifty participants maps almost exactly onto the national picture — and in some ways it's more stark. Ten out of ten people have heard of AI. Nine out of ten have used it. Six out of ten use it at least once a week. But only three out of ten describe themselves as confident using AI tools professionally. And within that group, the proportion who say they are very confident is considerably smaller.

Almost everyone is using AI. Almost nobody feels like they know what they're doing.

That gap — between usage and confidence, between touching it and trusting it — is the thing that matters. It is not closing on its own. And it will not close on its own.


The search engine problem

Here's what's actually happening in most of those AI interactions.

People are treating AI like a slightly smarter search engine. They type a question or a request, they get a response, they scroll through it looking for something useful, and they move on. It's the same mental model they've used for Google for twenty-five years, applied to a fundamentally different kind of tool.

And that's understandable. That's where people start. The interface looks similar. The instinct to type and receive is the same. So they use what they know.

The problem is that this approach produces exactly the outcomes they describe. Generic responses that don't sound like them. Outputs that technically answer the question but miss the point. Results that require so much editing they've saved no time at all. The experience reinforces the scepticism, the scepticism reduces engagement, and the gap between usage and competence widens rather than narrows.

What most people haven't been shown is that AI rewards a fundamentally different kind of interaction. It's not a vending machine. It's closer to a highly capable colleague who doesn't know you yet — one who needs context, who benefits from challenge, who responds well to being pushed, and who gets significantly better the more you invest in the working relationship.

The people who get real value from AI are, almost without exception, the people who've figured out how to have that different kind of conversation. Not because they're more technically capable, but because someone, at some point, showed them how — or because they had the patience and curiosity to figure it out themselves.

That's a skills problem. Not a technology problem.


What's below the surface

The government research introduced a useful concept they call the "iceberg effect." The surface concerns people express about AI — worries about accuracy, about privacy, about misinformation — are real, but they're driven by something deeper.

Underneath the stated concerns is discomfort. Powerlessness. A feeling that this is something happening to them rather than something they have agency over.

That resonates with what we see. The person who says "it would have been faster to do it myself" isn't just frustrated with a tool. They're frustrated with a situation in which they feel they should be able to use something that everyone else seems to be finding useful, and they can't work out why it's not working for them. That experience doesn't make people more curious about AI.

It makes them more defensive. And defensive people don't build new skills.

The report also gave us a useful framework for thinking about the workforce. It describes four types of worker:

  • AI experts (researchers, engineers — a small minority)
  • AI specialists (technical roles)
  • AI implementers (business analysts, project managers)
  • General AI users (the majority of the workforce)

Most people will never need to understand how a large language model works. But they do need to understand how to use one effectively. And they need the confidence, not just the access, to do it.

Access is not the problem. The government's new AI Skills Boost programme — launching with Google, Microsoft, IBM, Accenture and others — aims to upskill ten million workers by 2030. Free courses, digital badges, industry-approved benchmarks. It's a serious intervention and it's more than the market has had before.

But course completion is not capability. A badge is not confidence. And the iceberg doesn't melt because someone watched a forty-minute module on responsible AI use.


The automation trap

There's a version of the AI story that goes like this: the technology is the solution. Install the right tools, automate the right processes, and the business problem solves itself. The AI saves the time, captures the value, and the people adapt around it.

This version is being sold aggressively by a significant portion of the AI industry. It's appealing because it's fast, it's measurable, and it requires almost no investment in the hardest and least glamorous part of business change: the human part.

We've seen it happen. A workflow gets automated. The people who were supposed to benefit from the automation don't understand it, weren't involved in designing it, and don't trust it. So they work around it, or they use it minimally, or they wait for it to fail — and when it does, they feel vindicated.

The technology gets replaced or abandoned. The problem it was supposed to solve remains.

Installing n8n — or any automation tool — does not change a business. Using AI tools occasionally does not build AI capability.

Technology is an accelerant. It amplifies what's already there. If what's already there is a team that doesn't understand the tools, doesn't have a framework for using them, and doesn't feel confident enough to experiment, the technology will accelerate very little.

The businesses that are going to get this right are not the ones that move fastest to automate. They're the ones that invest, deliberately and patiently, in building the human capabilities that make the technology worth having.


Why the next twelve to twenty-four months are different

The government research projects that by 2035, 3.9 million people — 12% of the UK workforce — will be in roles where AI is a core activity. Another 9.7 million will be in roles where AI is at least adjacent to their work.

And crucially, most of this growth will not come from new AI-specific jobs being created. It will come from AI responsibilities being added to existing roles. Your finance team, your operations team, your account managers — they will need to be able to use AI tools competently as part of their current jobs, not as a specialist skill on the side.

That's not a 2035 problem. The capability gap that will determine whether businesses are on the right side of that shift is being established now, in the choices organisations make in the next twelve to twenty-four months.

The compounding effect of building habits, skills, and systems early is significant. A team that starts building real capability this year will be in a structurally different position in three years than a team that waits.

This is exactly why the "let's install a tool and see what happens" approach concerns us.

Not because tools are bad (they're essential). But tools without capability are infrastructure without inhabitants.

And the window for building that capability at a pace that creates genuine competitive advantage is not indefinitely open.


What genuine transformation actually looks like

It looks like this: a senior team that understands what AI can and can't do, has a realistic view of where value actually lies in their specific business, and has made deliberate choices about where to focus.

Not a maturity model. Not a framework borrowed from a conference. A clear-eyed assessment of their own operations, with a plan that they own.

It looks like people who have been shown — not told, shown — how to work with AI tools in ways that are relevant to their actual jobs. Who have had the chance to experiment in a safe environment, to make mistakes, to develop their own working methods. Who have moved from occasional, tentative use to habitual, confident use because the investment was made in helping them get there.

It looks like systems and workflows that were designed with the humans in the loop, not around them. Automation that people understand and therefore trust. Processes that have been rebuilt, not just digitised.

And it looks like ongoing support. Not a one-day workshop and a "good luck". Not a twelve-week sprint that ends and leaves people to figure out the rest alone. A working relationship with someone who understands your context, who knows your people, who is invested in whether this actually works.

None of that happens quickly. None of it happens without real investment — of time, of attention, of money, of leadership. Anyone who tells you otherwise is selling you something (and it's not the thing you actually need).

And the price of believing them is usually paid six to twelve months later, when the automation project has stalled, the tools are sitting unused, and the team is back where it started, only with less appetite for trying again.


The SME advantage hiding in plain sight

The research confirms that AI expertise is concentrated in London and the South East. Salary premiums and educational requirements make it hard for smaller businesses to compete for specialist talent. SMEs face structural disadvantages in accessing AI skills.

All of that is true. And it still doesn't capture the most important part of the picture if you're leading a small or mid-sized business.

Large organisations move slowly. Not because they're badly managed — because of how they're structured. Meaningful decisions happen quarterly. Change programmes take years. The feedback loop between "we tried something new" and "we adjusted based on what we learned" can be eighteen months long. For large organisations, that loop is now their primary competitive liability.

Smaller businesses — businesses in the £10 million to £100 million range — can move differently. A decision made on Monday can be implemented by Wednesday. A new workflow can be tested this week and refined next week. The compounding effect of rapid iteration — trying things, learning, adjusting, building habits — is available to smaller organisations in a way it simply isn't for their larger competitors.

The businesses that are going to look back at this period and feel like they got it right are not necessarily the ones with the biggest AI budgets or the most sophisticated technical infrastructure. They're the ones that committed, with genuine intent, to building real capability across their teams — and that used their size as an advantage rather than an excuse.

The window is open. It won't stay open indefinitely.


Accelerator X partners with mid-sized businesses to build lasting AI capability. We work with leadership teams who know AI matters but want to do it properly — not quickly.


