Work Tech Weekly Podcast | Hosted by Steve Smith at Rep Cap

AI, Anxiety, and the Apprenticeship Problem at Work

Written by Steve Smith | Feb 27, 2026 4:23:32 PM

Talking to Dan Riley is a much-needed dose of perspective. When you talk to most tech founders, you end up talking about … not surprisingly, technology. And Dan had a lot to say about tech and AI. But Dan is, at heart, a humanist.

Maybe it comes from years of building solutions in the employee listening and engagement space. Maybe it comes from being a grungy Minneapolis rock dude in the 1990s. And at a time when you can’t avoid doomer AI predictions, Dan reminds us that AI isn’t the villain. It’s the mirror.

“I think we need to remember that AI is created by us,” Dan Riley says. “It’s humanity’s job to rein in and focus AI in the right direction.”

This isn’t something that happens to us. It’s happening because of us. We are responsible for how it shows up at work. That’s an important reframing of what’s going on.


On this episode of Work Tech Weekly, I’m talking with Dan Riley, a co-founder at RADICL, a startup that wants to help employers listen better — not just to data, but to people. It uses AI to surface clarity and insight while keeping human connection at the center of the work experience. He reminds us that two things can be true at once:

  • AI is reshaping work in profound and transformative ways.
  • The human experience still matters — maybe more than ever.

That tension is the heart of this conversation. It’s also a very grounded way to look at things. And, in 2026, grounded feels radical.

Is AI at Work Creating an Apprenticeship Problem?

One of the central paradoxes of AI at work is this: AI makes things easy, but we learn from the hard things in life. Difficulty and conflict are essential parts of being human. They're also how we learn at work.

I told Dan about the start of my career in newspapers, going from high school straight into a newsroom with award-winning journalists, police-beat reporters and grown-ass adults who had been in the shit. I was in over my head. I screwed up. I got my ass handed to me. I listened. I learned. I did better. That’s apprenticeship.

One of AI’s underappreciated risks is how its ease of use can limit the learning experiences that ultimately become your instincts and perspective. “If we just become vessels for what AI is telling us to do, we lose that beautiful, creative, imperfect, messy thinking of being a human,” he says.

That word — messy — matters. Because creativity is messy. Learning is messy. Apprenticeship is messy. And that’s what worries both of us.

What happens to young people today when early-career work gets automated, teams are fully remote and the first draft is always machine-polished? Do young workers get to struggle anymore?

Dan didn’t dismiss that fear. He doubled down on something important: At the end of your life, you won’t remember the prompt you engineered. You’ll remember the people and experiences.

“I don’t think that you’re going to say, ‘Wow, I really created an incredible large language model and prompting approach that changed things,’” Dan said. “I think you’re going to think about the people in your life that influenced you and inspired you. You might even think about the people in your life that you wish you had spent more time with and cared about more and learned from more.”

Maybe it was the timing. Dan’s a Minneapolis guy, and we were talking at the peak of the recent turmoil. But it is important for us not to lose sight of what you’ll remember when you shuffle off this mortal coil.

The Optimistic Case for AI at Work: What Could Go Right?

You don’t have to look far to come up with a laundry list of what could go wrong with AI at work. That certainly dominates the conversation. But my experience working with Work Tech founders over the years is that they’re inherently optimistic. They believe in a better way of working. Dan is no different in that regard.

“I was at an employee listening and employee engagement company for many years. And we actually had that belief system that if you’re more engaged, and you’re more inspired, and you’re happier at work, you’re going to show up as a happier person outside of work. So you’re going to be a better spouse, a better partner, you’re going to be a better father or a better mother.”

So, if we get AI at work right, what does that look like? And how is that making the world better?

Dan believes that better work breaks down into three buckets:

  • Clarity — Do I know what I’m doing?
  • Capability — Do I have the tools and people to do it?
  • Connection — Do I feel part of something bigger?

AI absolutely fits into capability. But connection? That’s human. If AI frees up time from drudgery — scheduling, manual processes, endless coordination — it can create more room for mentorship, debate, constructive conflict, real collaboration. If we let it.

Ultimately, it’s about seeing AI as a tool for employees, not an employee replacement.

Avoiding the Echo Chamber and Challenging Our Assumptions

Of course, there is another subtle danger. AI is very good at telling you what you want to hear. So are humans. Dan and I talked about how easy it is to retreat into echo chambers — politically, socially, professionally. AI can accelerate that if we use it lazily.

“We get into our echo chambers of what we hear and what we believe, and we're not willing to potentially listen and say, ‘Yeah, maybe you have a point, so let me re-establish or rethink a little bit.’ And that's how we find common ground a little bit more — and I do believe there is common ground to be found. And that goes across the teams, organizations, people, families, friends, society, humans.”

But used responsibly? It can challenge assumptions. Surface new angles. Help synthesize disparate information. It’s not conscious. It’s not creative in the human sense. It’s another voice at the table.

The question is whether we’re strong enough to disagree with it. As with any tool, it’s about using it the right way. AI can accelerate execution. It cannot replace human judgment, struggle, or connection. So, yeah, AI can draft your strategy memo. It cannot decide whether the strategy is actually good.

“If used correctly, responsibly, AI frees up time and gives us opportunity to focus on being more human, right?” Dan says. “My fear is that it's just gotten a little bit ahead of us. And what I'd like to see is just slowing down a little bit, trying to navigate the roads together.”

I love Dan’s idea of responsibility because it puts control in our hands. AI didn’t appear out of nowhere. We built it. If AI takes tasks off our plates so we can invest more deeply in clarity, capability, and connection — that’s a future worth building. But that outcome isn’t automatic.

It requires leaders willing to say: Speed is great. Human development is better.

And that balance? That’s the real work.