Dan Riley:
I think we need to remember that AI is created by us, right? So AI is a human discovery. And with that, the optimistic positive note is therefore, we have the opportunity to make sure it's heading in the right direction and not be afraid to say, "Well, AI is going to take over humanity." I actually believe it's humanity's job to, again, rein in and kind of focus AI in the right direction.
Steve Smith:
Hey, everyone. Welcome back to another episode of Work Tech Weekly. I'm Steve Smith, managing director for growth here at Rep Cap. Today I'm joined by Dan Riley, co-founder of RADICL. Dan has spent years building in the employee listening and engagement space, and his through line has always been simple, work can be better, and organizations have a responsibility to make it better for their people. That belief has shaped his career and is shaping how he thinks about AI right now.
We're living through a moment that feels unsettled. There's lots of anxiety to go around, in the workplace, in society, and about AI. When it comes to AI, it's moving at a pace that feels less like a curve and more like a straight line. And a lot of leaders are trying to reconcile two competing truths at the same time. The technology is powerful, and the human experience still matters, perhaps more than ever.
Dan and I talk about what happens if we lose the apprenticeship moments that shape young talent. We talk about whether AI is a copilot or a crutch. And we dig into the tension between speed and humanity, clarity and connection, automation and instinct. If we get this right, AI can free us up to be more human at work. If we get it wrong, we risk outsourcing the very things that make work meaningful in the first place. Let's get started.
Dan Riley, welcome to the podcast. Great to have you here.
Dan Riley:
I am very happy to be here. I thank you, Steve. I appreciate the invite and I'm honored to be speaking to you. And I love the work that you do. I will just put it right on the table. So thank you.
Steve Smith:
I appreciate that. But I wanted to start off with kind of diving into, before we talk work tech, I guess talking a little real life. You're based out of Minneapolis, and obviously Minneapolis has been in the news for not great reasons lately. I wanted just to kind of open the floor to you to talk about what's going on and how are you doing?
Dan Riley:
Yeah. I'll start with the last question, how am I doing? So I'm a believer in if you give up and you stop fighting, and when I say the word fighting, I want to be clear, peacefully with voice, with reason, if you lose any of that, then nothing changes. And I'm a big believer about change. And change requires consistently doing something different over and over again.
And the problem is we're a bit in this cycle where we have this, I would almost call it a line in the sand where you kind of have to pick a side. And that's really, I guess heartbreaking to me and sad to me. So reach out to those, and it doesn't have to be here in the Twin Cities, but it's happening here. It's been a little bit scary.
I think I would urge us all to remember that number one, we are all, everybody involved is human. We're probably going to talk a bit about AI, I'm sure we will and tech. But showing up as people navigating through challenging times isn't something AI can solve for. That's humanity. And humanity by default is imperfect. Humanity by default is messy.
So I think we need to dig deep and do everything we can to care more and love more. And I do believe, I truly believe there is common ground. I think for the most part, for the most part, people want the same thing. I think that's not always the case, but I'd like to believe and I'm going to continue to believe because I have to continue, push change forward. So I appreciate you asking that.
It's scary times. It's hard to believe it's 2026 now, and this is the life that we all live in. But there's also, which we'll dig into more today, there's also scary times with redefining humanity with the rise of AI. And two things can be true at the same time. Humans being humans, and humans and the machine, and how do we navigate this? How do we not move too quickly where we just can't keep up? The curve of AI growth, it is not even a curve, it's a straight line. It's challenging and it's scary. But it doesn't mean we can't get the best of it. So that would be my, yeah, that'd be my opening. So thank you for asking.
Steve Smith:
Well, I think that to kind of connect some of the dots on this, I think that to provide probably the understatement of the podcast, we're living in a time of a lot of anxiety. I think that the 2020s, I'm not going to claim to speak for everybody, but I think as a decade, zero stars, do not recommend. We've been dealing with an onslaught of disruption and change, starting from COVID and then into the post-COVID era and then AI explodes onto the scene.
I think that right now, I know a lot of people in the workplace, and I write about this a lot in the newsletter, there is a palpable sense of uncertainty and not knowing what's going to come next. And there are a lot of people who are really unsure about where is this going for me? And is this going to be good for me individually? Is this going to be good for society? And I know that you have a lot of thoughts on just what is the place of humanity in the workplace and not losing sight of being human. Could you talk a little bit about where you land on some of that?
Dan Riley:
So my fear is about the creative process, which AI actually doesn't do because AI truly isn't conscious. AI isn't really creating new ideas that haven't existed. I always joke with our sales folks or those who are advocates or evangelists of RADICL, I always say, "Imagine if it's a complete blackout and all you have is you and your brain and you have 15 minutes and you have nothing else. Can you talk about what we do, what our mission is, what our purpose is, why we do what we do, why we care, what we care about?" And I'm pretty insistent on you need to know that, you need to feel that, it needs to be authentic. So that's the fear side of it.
I think there are, as far as the speed and the opportunity to consolidate and bring ideas together and actually tempt your mind to think of new directions, I think AI is incredibly powerful. We use it. I use it often and I love it. It will give me ideas. And I put it more in that tool or copilot category that will allow me to think, "Oh, yeah. Hang on, let me think about it this way or let me consider a different path." And then I go back into my own head and then I go back into being human. So there are real positive things about that. Speed. Again, speed with quality is really helpful, so yeah.
But with all that said, I do know this much, and I don't mean to sound morbid, this is not at all meant to sound like that, but at the end of the day, in your final hours, the things that you're going to remember are the things that were really hard. And I promise you, the things that were really hard aren't the things that AI delivered for you or technology delivered for you. It's going to be relationships, it's going to be people, it's going to be memories of people. And I don't think that's going to change. I don't think that you're going to say, "Wow, I really created an incredible large language model and prompting approach that changed things." I think you're going to think about the people in your life that influenced you and inspired you. And you might even think about the people in your life that you wish you had spent more time with and cared about more and learned from more. So I think it's important to keep all those things in check.
It doesn't mean we don't ... Again, two things can be true at the same time. So yeah, keeping people in the loop in any AI scenario is so important. So almost like checkpoints, like are we getting it right? And being comfortable challenging what AI is doing or saying, thinking just as if it were a person on a team. Conflict is good, right? Debate is good and healthy, and so we need to honor that same type of humans working with humans as humans working with the machine, the AI machine. So hopefully that responds to your question.
Steve Smith:
Yeah, there's a lot there to unpack, but one of the things that I really was compelled by is something that's on my mind a lot. I believe that you're right, I think that what is valuable about the human experience is the relationships that you create along the way. One of my former colleagues said something that I knew and I'd forgotten, and I go back to it. With the people that you work with, you don't necessarily remember what they did or the work you did together or what they said, but you remember how they made you feel.
Dan Riley:
Yes.
Steve Smith:
And it's just like, I think that struck me because when you look at what is going on in the workplace today and what is ... And every week with the newsletter, I almost feel bad about starting it off with AI every week because it's just like, shit, is this the only thing going on in the workplace right now? No, but it's kind of overwhelming the conversation.
Dan Riley:
It is.
Steve Smith:
You know?
Dan Riley:
It is, yeah.
Steve Smith:
But one of the things you said that I'm especially intrigued by is I think about, I go back to when I was right out of high school. I went from high school to a big city newsroom before I was even 18 years old. And I was all of a sudden with these people who had won Pulitzer Prizes and were crime reporters and were just grown-ass adults dealing with serious stuff. And I learned in part by listening and observing and getting my ass kicked some because I did some stupid shit. But that was part of the apprenticeship is just like you learn by all of that.
And one of the things that concerns me when I look at the workplace today, and especially with young people coming into the workplace is are they going to have those opportunities? They're not in the office, they're working remotely. Now we've got AI, so it's just like you don't necessarily have those apprenticeship opportunities because we're flipping those jobs over to artificial intelligence. And it's just like there's a lot of the human relationship aspect of work that we're not even talking about that is just kind of going out the window and being lost. And honestly, that just scares the hell out of me.
It's interesting because you're touching on a lot of issues that I think are interrelated, because we're talking a lot about conflict, we're talking about struggle, we're talking about failure, which are all part of not only the learning experience, but the communication experience. We don't automatically as humans understand each other, and more often than not, we don't understand each other. And it's how do we navigate that misunderstanding in ways that are constructive and build to something better? I think that you're a creative person and you know from firsthand experience that a lot of times, to get to the best outcome, you have to have that creative conflict and challenging. And then you have people going at it and you're not listening, you don't understand. And then before you know it, you have, oh, wow, we didn't start here, but we ended up in a better place. So I think that there is that possibility that's out there.
But then when you look at how AI is ... You can write in a prompt and it can spit out what looks like a perfect, well-reasoned piece of whatever. And I think there's the expectation that work needs to move that fast. There should not be any errors. Of course, there's tons of errors that lie below that veneer of polish in AI. But when you think about what does that mean from a, how do you, as a leader of a startup, you're a co-founder of a startup, you have people that you need to lead to an outcome, to building a company, but you also have the responsibility of helping them learn and grow as individuals. And there's a lot of just competing needs that AI can help with, but then also maybe it can get in the way of that human development. Where do you land on, what is your role in that as someone who's leading a startup?
Dan Riley:
Yeah. I love that question. Look, I use AI to help challenge my thinking. We use it in startup mode, even just as we think about our go-to-market strategy and our value prop. And while we know what we stand for, and from day one we've been very consistent that it's making work better, and we can do better and we must do better, and organizations have a responsibility to their people, I think using AI is incredibly powerful and saves a lot of time. And time is money, and not only money, but time. In a world where technology is changing by the second, it's like a compass to some degree. It allows us to say, "Oh, okay, well, maybe we should tune in this way or tune in that way."
But I would also argue that don't let it challenge what you truly believe right here, because I do believe the best ideas, and I can name the companies, I won't, but some of the best companies out there that changed the world came from a vision of what it could or should be and had nothing to do with AI.
So I think AI is a tool to help organize the thoughts. It's a tool to help synthesize ideas, possibly put it into something a little bit more tangible, something that's easier to share and understand. But at the end of the day, I think it's also challenging people, and especially in a startup mode, to stick to their instincts as well.
There's this old saying, I don't know how old the saying is, but it's a truthful saying: do what you're really good at and become better at it. Right? Versus, "I wish I could do this. This is my dream, so I'm going to go chase that." That's fine. You can do that to some degree, but understand what you're excellent at and just become more excellent at doing it. And then leverage tools like AI, leverage people, mentors, coaches, and I know AI and agents can act in that capacity, and that's fine in some cases. But again, stick to what you believe in. I do think that if we get into a world where we just become vessels for what AI is telling us to do, we lose that beautiful, creative, imperfect, messy thinking of being a human.
And I always have this debate around saying, "Well, if humans by default are imperfect, AI cannot be perfect." Right? There is no ... and I think that's really important to know and to remember. So I think we have to take it understanding it's just another opinion, another voice, another approach, a synthesis. There's a lot of things that AI can do.
Now, as far as speed, like if we talk about medicine and research and there's so many incredible things that we should be and can be and will be using AI for, I hope, that will get us, even outside of work and work tech and HR tech.
Yeah, so I think it's just remembering that it's just another voice at the table and recognizing there's imperfection in AI. And at the end of the day, stick to your instincts, stick to your belief system. Open your mind up to being potentially ... You can pivot from what you hear and what you learn through AI tools and that's great, but as long as it then falls back into being human. So I think that would be my response to that.
Steve Smith:
So let's focus on the case for optimism. I think that you don't have to go too far to come up with a laundry list of what could go wrong with AI. And certainly, that dominates the conversation. But my experience working with company founders over the years is they're inherently optimists and they believe in a better way, they can build a better mousetrap.
And I think that one of the things I like about being in the workplace technology space is that there are so many people who are trying to build a workplace for the better, that are trying to change the work experience because they want to get that ripple effect of if someone goes to work and they have a better experience, then they go home and they're a better spouse, they're a better parent, they're a better neighbor, they pet the dog, all that shit.
And I think that let's focus on the optimist case for AI. If we do it right, if we get it right, if you get it right in your work and your product, what does that look like? And how is that making the world better?
Dan Riley:
As you know, my history has been doing this for a long time. I ran an employee listening and employee engagement company for many years. And we actually had that belief system that if you're more engaged and you're more inspired and you're happier at work, you're going to show up as a happier person outside of work. So you're going to be a better spouse, a better partner, you're going to be a better father or a better mother. Happiness is a real thing. It's a very powerful emotion.
So I think that in a best case scenario, so we believe that if you have clarity on what you're doing, you have ... Everyone has to have their acronyms of their three C's, but clarity and the capability which is, again, the people you work with and the tools that you use, and then connection. And connection has a lot more to do with humans being connected to humans.
And those three things, and there's a lot of sub-science underneath that, if you get those three things right, I do believe exactly what you said, work becomes, I had this task to do or I had this project to do and it was very clear and there was clarity on what I needed to do to get it done, and I had the tools necessary, the capability was there, and that might be a combination of people and mentors and AI, it falls into that category for sure. And I have the appropriate connections to help me along the way to celebrate with, to feel that sense of accomplishment. And I think if you get those three things, you do feel more inspired at work.
There's a lot of data out there that says ... We know burnout is real, we know mental health is real, we know all this. With that said, if somebody is feeling like they're recognized and they're a part of something bigger than themselves, which is based on everything that I just said, it will lessen that feeling of burnout just a little bit, which allows you to go home and feel more accomplished and feel happier and show up as a better person to your friends and your family and your loved ones, and your dogs and your cats or your lizards or your spiders or whatever pet you might have.
Steve Smith:
Gila monster.
Dan Riley:
Pigs.
Steve Smith:
Ferrets. Miniature horse maybe.
Dan Riley:
So yeah. So I think all those things need to be true. But one thing that all those things that I just said have in common or don't have in common is it's not all just a better and faster AI, a better and faster tech. It has a lot of human connectivity in the fold, in the mix, in the cracks. So I do think it's that balance, it kind of comes back to that balancing act. So that's my hope. That's what I want.
And with the world and societal divides right now and just it's a little bit scary and how we started off this conversation, that weighs into it too. We get into our echo chambers of what we hear and what we believe, and we're not willing to potentially listen and say, "Yeah, maybe you have a point, so let me re-establish or rethink a little bit." And that's how we find common ground a little bit more, and I do believe there is common ground to be found. And that goes across the teams, organizations, people, families, friends, society, humans. So that's-
Steve Smith:
Oh, and I think, kind of bringing it back to the constructive conflict message, I think AI at its best could help get us out of the tasks that drain our attention, to allow us to really focus on the things that matter. But I think also we have to avoid the trap with AI of, ChatGPT's really good at telling you what you want to hear, and there's definitely a clear level of sycophantic behavior that is built into the programming because it's built by humans, and why wouldn't it have a human foible like that?
But when we live in echo chambers, when it's easy to insulate ourselves from people who don't think like us or that we don't want to have conflict with people, so great, let's just filter our news, filter everything that's going on around us. But when you think about, ideally, if you're in ... I think back to my college experience. I was a liberal arts major in the humanities, so arguably I didn't really learn anything but a bunch of stuff that helped me really win at trivia night. But I think that what came out of that is something that is actually really applicable in the AI world, which is challenging your assumptions, not assuming that what you believe and know is absolutely true, and then being able to aggregate disparate pieces of information to synthesize that into something else. And it seems like there's an opportunity there-
Dan Riley:
There is.
Steve Smith:
... if we don't hide from it.
Dan Riley:
Yeah, no, and I love what you just said, and that is so, so true. If used correctly, responsibly, AI frees up time and gives us opportunity to focus on being more human, right? And that is that balancing act that I preach. But it can't take over and humans can't just take over, but we need to get the best of both.
I feel like there's a lot of great conversations happening out there, and I feel like we're starting to recognize that more. We have some catching up to do because the speed and the rate of AI and what it can do is just, it's like a Ferrari heading down the road, and we're in a Yugo trying to catch up. Right? So that's my fear, is that it's just gotten a little bit ahead of us. And what I'd like to see is just slowing down a little bit, trying to navigate the roads together.
And then I think everything that you just said is so spot-on. It's the ability to leverage this incredible tech, and for the record, for those who might be tuning in or listening later, that's what RADICL does, we are built on it. I think AI is incredible, what it can do and how it learns and how it can truly understand and can give you advice that sometimes can even be better than people. So I do believe in the power of it, but I know in my heart that it has to be in concert and it has to be kind of orchestrated through a sense of togetherness.
So that's the fight, that's the opportunity. And work will find a way, people will find a way, humanity will find a way, but it does require voices. It requires us talking about it. So the work that you do and the conversations that you're having, this is exactly what we need to be doing.
Steve Smith:
Well, I'm going to give you the last word. Tell me something positive about AI to leave our listeners on.
Dan Riley:
So I think on a positive note, I think we need to remember that AI is created by us, right? So AI is a human discovery, and with that, the optimistic positive note is therefore, we have the opportunity to make sure it's heading in the right direction and not be afraid to say, "Well, AI is going to take over humanity." I actually believe it's humanity's job to, again, rein in and kind of focus AI in the right direction. And that's number one.
And number two, here's what AI is really cool for, just a fun little fact. If you're planning a trip and you're like, "Hey, I want to go to Tulum and I have four days and I want to stay at a place that looks like this. Give me a summary. And I want to scuba dive and I want to do a jungle event," that's fun. It's amazing, what it comes back with. I'm like, "Wow." You would have spent hours. And that's life and that's enjoying life.
So use it for things like that too. We talk about it in the work context so much, but it is quite powerful and it does a pretty good job of just organizing lots of stuff that took us hours, if not days, within seconds. So there's my optimistic view. But the most important thing is just to remember that we as humans have the opportunity and the ability and the responsibility to shape and form AI as we head into the future.
Steve Smith:
Dan Riley, thanks so much for joining, and we wish you and the city of Minneapolis well.
Dan Riley:
Thank you. Appreciate that. Thanks so much for the invite. It's my honor, and that was a super fun conversation. It was great. Loved it.
Steve Smith:
Awesome.
Dan Riley:
All right. Thanks.
Steve Smith:
As I think about this conversation with Dan, the idea that keeps coming back to me is responsibility. AI didn't just appear out of nowhere. It's something we built, which means we have a say in how it's used, how fast we move, and what we protect along the way.
Dan's perspective isn't anti-technology, it's not nostalgic, it's grounded. Use the tools, embrace the speed. Let AI help you synthesize, organize, and challenge your thinking. But don't hand over your instincts, don't outsource your judgment, and don't forget that the moments that define your career won't be the perfectly polished outputs. They'll be the hard conversations, the conflict that led to better ideas, and the relationships that shaped you.
If AI can take tasks off of our plates so that we can invest more deeply in clarity, capability, and connection, that's a future worth building. But that outcome isn't automatic. It requires leaders who are willing to slow down just enough to ask whether we're still leaving room for growth, mentorship, and human development. That's the work in front of us.
If you enjoyed this episode, make sure to subscribe to Work Tech Weekly on Apple Podcasts, Spotify, and YouTube Music. And I'll see you next time.