Work Tech Weekly
Donald Thompson

The AI Adoption Gap Is Real — And You’re Probably Standing Right In It

Every CEO is gung ho about AI. Every board wants a deployment timeline. Every earnings call ends with some version of "and we're leaning hard into AI efficiency."

But talk to the people actually doing the work? Different story. Fear. Ambivalence. A vague sense that something important is being skipped.

That gap — between the hype at the top and the reality on the ground — is exactly what Donald Thompson has been spending his time on.

Donald is managing director of the Center for Organizational Effectiveness at Workplace Options, the largest independent provider of holistic wellbeing solutions, supporting tens of millions of employees across organizations in 200+ countries. He's also a multi-exit entrepreneur, EY Entrepreneur of the Year recipient, Forbes Next 1000 honoree, and author of Underestimated and The Inclusive Leadership Handbook. He has been in enough C-suites during this AI moment to have a very clear read on what's actually going wrong.

Spoiler alert: It's not the tools.

Why AI Adoption Fails at Work
35 min

It seems like kind of a no-brainer that AI has created trust issues in the world of work. It’s kind of like saying remote work caused some scheduling conflicts. But somehow, the market keeps whistling past this particular graveyard.

Lack of trust is the biggest barrier to AI adoption right now.

All the productivity math in the world doesn't change the fact that business still runs on humans who relate to each other. The speed of business is only limited by trust. And when you push AI adoption too fast — without education, without context, without any real work redesign — you don't get efficiency. You get fear, uncertainty, and doubt. Which, as Donald points out, actively de-programs productivity.

There's also the overreliance problem. Every major LLM on the market tells you, in plain text at the bottom of every output: I make mistakes. Double-check my work. And yet organizations are deploying these tools like they're replacing judgment, not augmenting it.

"If you think it's going to implement from ideation to implementation to full launch," Donald says, "you're actually now putting the enterprise at risk."

As Donald points out, AI can get you from zero to 70% done really fast — to a strong draft, a prototype, a solid starting point. But someone still has to take it from 70% to something you'd actually be proud to ship. "You're still gonna need amazing people from the 70% to complete," he says.

The 70-to-100% stretch of the journey is where all the human opportunities lie: for experienced professionals with domain expertise, taste, and a clear picture of what "good" looks like to take something from good to great, and for junior team members to learn and grow. Unfortunately, that growth is exactly what's lost when you replace junior talent wholesale before building the systems around them.

You Can't Automate Your Way Out of Bad Work Design

This is where Donald goes places most AI conversations don't. Deploying AI tools is not the same as redesigning how work happens. And most organizations are doing the former while completely skipping the latter.

His framing is sharp: If you have someone doing 10 tasks and they're excellent at five of them, marginal at the rest — why are the other five in their stack at all? That's not an AI problem. That's faulty work design. AI doesn't fix that. It just amplifies mediocrity faster.

The smarter move, he argues, is to sort your AI use cases into two distinct buckets: routine, repeatable work that's ripe for automation, and knowledge work where experienced people can use AI as a genuine accelerant. Same tools. Completely different protocols. And very different conversations to have with your team.

One important conversation is about matching the AI tool to the job, and to the experience of the person using it. Donald reaches for a lawnmower analogy.

“I do not want a 6-year-old on a riding lawnmower. I don't want a 6-year-old with even a push one, right?” Donald says. “All kinds of bad things can happen if you look at these amazing tools, but then you put an inexperienced driver and expect them to do excellent work.”

And while AI is capable of removing a lot of friction from work processes, he's also in favor of decelerating meetings in a meaningful way. "I am actually slowing down a little bit to give people a moment to recalibrate so that I can get the best information, innovation, and insight from them."

What does that look like in practice? For one, requiring a one- to two-page business brief as pre-reading, then opening with a question that checks whether people actually engaged with it. In an environment where everyone's on the meeting treadmill, that small move creates the conditions for a higher-quality interaction. Less performance. More thinking.

We Can’t Close a Gap We’re Not Ready to See

So what do we do with the command-and-control CEO — the one who wants 20% cost cuts from AI, now, no debate? How do you reboot that conversation?

Donald’s take: You don't argue philosophy. You talk risk.

Reputational risk. Performance risk. Information risk. Because the one thing a high-octane CEO genuinely does not want is to look ridiculous in front of their board, miss a quarterly number because the organization got confused, or have bad data corrupting a business decision. Frame AI adoption as risk management, and suddenly you've got their attention.

"I do it with a smile," Donald says. "But typically, they slow down because their self-interest is there."

That's not manipulation. That's meeting people where they are, which Donald describes as the most important skill for anyone trying to drive change in an organization right now.

The same instinct drives the question Donald says most leaders should be asking their teams but aren't: How can I be helpful to you right now?

Not as a corporate wellness slogan. As a practical diagnostic. Often the two or three things weighing a team member down are the exact blockers draining collaboration, slowing execution, and quietly eroding the output of the five things they're actually doing well. And sometimes — this is the part that hits — those blockers are things the leader doesn't even care about anymore. They were quietly taken off the priority list while that employee was still white-knuckling them.

Ask the question anyway. The adoption gap lives in exactly that space — between what leaders think is slowing things down and what's actually happening on the ground.

"Usability is actually the value prop for AI. Not just creation." The same is true of leadership. You don't get credit for deploying the tools. You get credit for making them work.
