Dustin Clinard
I think it's very difficult to measure something like creativity. I could get a bunch of people to tell my boss that I'm really creative. That doesn't actually mean I'm creative, but that might be how it comes across in a different form of this.
Where we've put in the effort is in looking at your response to a problem. And so what we're attempting to do is normalize that and put a score on it, so that your 7.6 and my 6.6, or whatever the numbers happen to be, give us a point in time. And now we have a baseline that we can start to work off, and we can say, "Relative to your role or mine, is that actually good enough?"
Steve Smith
Hey, everyone. Welcome back to another episode of Work Tech Weekly. I'm Steve Smith, managing director at Rep Cap. In today's episode, we're digging into a question a lot of teams are quietly struggling with right now. As AI tools get faster and more capable, we need to ask ourselves what actually matters more at work and what matters less.
My guest today is Dustin Clinard, CEO of Ignis AI. Dustin works at the intersection of talent, skills, and AI, helping organizations think more clearly about the human capabilities that actually drive outcomes. In this conversation, we talk about why hard skills alone no longer tell the full story and why judgment and context may be the real differentiators as work continues to change. If you're trying to make sense of how teams should evaluate talent and develop people in a world where AI accelerates everything, this episode is for you. Let's get into it.
Dustin, welcome to the podcast. Glad to have you here.
Dustin Clinard
Hey, Steve. It's great to see you. Thanks for having me.
Steve Smith
Oh, I'm really looking forward to this conversation today because I think that what you're doing right now is incredibly relevant for the workforce in the emerging age of AI. And so I think that just to kind of start us off, why don't you tell us a little bit about Ignis AI, what it's all about, and how it's different than maybe other solutions out there.
Dustin Clinard
Sure. Ignis is measuring hard-to-quantify human skills. So we're basically putting a quantifiable measure on things like creativity and collaboration, the skills that you'd look at and say, "If somebody demonstrated these skills, they'd be really good with a new AI tool. They'd be really good with the team." But historically, things like collaboration have been part of a characteristic- or trait-based measurement: this person tends to be collaborative, so if we put these two styles together, they'll work great together. This gets to the specifics: how do you put a measurement on collaboration? And so now you can start to look at hiring for certain skills. You can look at talent development, executive development, intern development, whatever it is, with a measurement on a skill and then a plan to improve that skill.
So Ignis is basically quantifying and putting measures on a set of skills. We call them power skills. So they're the Ignis AI power skills. That's different from what anybody else is doing in the market right now.
Steve Smith
So give me a concrete example. What would this look like inside of a real company with real people who are making real decisions?
Dustin Clinard
So think about an assessment. You could have a multiple-choice assessment, or an if-this-then-that format. Our assessment looks like you're interacting with an AI agent. It looks like a chatbot. And so it's presenting you a series of questions and you're responding with a series of words. We're tuning the questions to get to the right level of assessment capability, and then we're taking your open-text responses, analyzing them, and scoring them.
The assessment itself results in what, right now, we're calling a talent flower. We're still an early-stage company, Steve, so we may change the names on some of these things, but we call it a talent flower. And it's this idea that at any given time, you have petals that are growing stronger than other petals. And what you have to decide afterwards is, where do I want to focus? Do I want to focus on my strengths? Does this become the strengths-and-opportunities discussion that still exists? But the idea is that your flower looks different over time. If you were to take this thing again a couple of months from now, six months from now, some of your petals got stronger and some got weaker, just like skills: skills strengthen and skills atrophy.
A little bit unlike characteristics. A characteristic-based test, you wouldn't expect to change much over time, but in skills, you would. And so take that baseline assessment and now you know, "Hey, where should I spend the next couple of months trying to develop? I thought I was creative. People tell me I'm creative, but I didn't score that well on creativity. What can I do?" And so now you spend a few months, work on it. You try things that work, like real skills application or skills development, and then maybe you test again for that part. And so it becomes part of an ongoing developmental cycle with coaching and feedback.
Steve Smith
Well, what I think is interesting about the way that you're looking at it is that for... I mean, it seems like for the last 15 or 20 years, there's been this kind of constant drumbeat of really hard technical skills, STEM skills being most important. But what seems to be emerging over the past couple of years is that skills that you typically would associate with like the humanities or liberal arts, the ability to pull together disparate pieces of abstract information, synthesize them into kind of cogent ideas and thinking, have sort of newfound value in the age of AI, and especially in either creating prompts or solving problems within vibe coding and things like that.
And it seems like when you're getting to things like you're talking about like creativity and collaboration, those are always important in the workplace, but it seems like there's a newfound value in these skills. Would you agree with that?
Dustin Clinard
Completely. I think you can test for certain hard skills. Do I know Ruby on Rails? You can do a kind of proficiency test against Ruby on Rails. And that is definitely a part of a lot of technical jobs, as an example.
The other part is, if I'm presented with a problem and I need to figure out how to attack that problem, which I'm then going to use Ruby on Rails to solve, how creative is my thinking in terms of practical problem-solving, out-of-the-box problem-solving, things that other people haven't thought of? There's a bunch of different ways you can look at creativity.
And we look at it in several different capacities, but that's the part there's really no measurement for. You'd say, "Steve's really creative and he knows Ruby on Rails. Therefore, fine." But when you look at success for folks at work, either in hiring or in existing jobs, a lot of the success is not technical. There's a baseline for technical skills, but it's how you apply those. It's how I interact with you. I mean, it's all the soft stuff that we know we often take for granted, but when you look at the real decisions, that's driving a lot of them.
So I think that maybe to your point, the emergence of the humanities will come back. And there's a lot of folks saying that. Now that the technical components have been more normalized, it's the folks that know how to use them well and interact with each other that are going to be the most successful.
Steve Smith
One of the things you're working on that I'm especially intrigued by is that you're dialed into the fact that skills are a dynamic thing. They're not a fixed moment in time. And I think the problem with a lot of assessments is that they assume traits are static. Why do you think that assumption is broken in the AI era? Or has it always been broken, and is it just now more top of mind because we can see the problem with it?
Dustin Clinard
To answer your question, I think it's because they're really hard to measure. I think it's very difficult to measure something like creativity. I could get a bunch of people to tell my boss that I'm really creative. That doesn't actually mean I'm creative, but that might be how it comes across in a different form of this. It's hard to measure and it's hard to validate.
Where we've put in the effort is in looking at your response to a problem. So think about it: in the role that you have, Steve, creativity means something different than it does in the role that I have. And so what we're attempting to do is normalize that and then put a score on it, so that your 7.6 and my 6.6, or whatever the numbers happen to be, give us a point in time. Now, there may not be a huge difference between those two, but there's a difference. And now we have a baseline that we can start to work off, and we can say, "Relative to your role or mine, is that actually good enough? Is that something I should focus on?"
I was talking to a friend the other day who works at a big defense contractor, and we were talking about creativity, and the comment came up: "Oh crap, do we really want people being creative when they're building these defense products?" And that's a very fair question. Now, granted, there are different ways they might view creativity, but you may actually want people who are following the rules and not thinking outside the box. The box is already well-defined. You want somebody who's executing, who's collaborating within that environment. And your bar for creativity, or benchmark level, might be quite low.
So that's going to affect how you evaluate talent, which is happening already during the interview cycle, but it's happening in a qualitative way as opposed to a quantitative way. You may have different levels, where higher is not always going to be better for every organization. It's going to come down to context.
Steve Smith
Well, I think another thing that your approach sort of presupposes is not only can you measure something like creativity that seems hard to measure and actually I think is hard to measure, but that these are skills that are trainable and improvable instead of just personality traits.
Dustin Clinard
Yeah. So we measure one called AI fluency. And we're not measuring AI technical skills, like how well can you use ChatGPT or Claude. We're looking at things like, are you a passive user? Meaning if you get prompted, you know how to go in and write a decent prompt and get an LLM to produce something for you. Or are you a strategic user who thinks about the brainstorming process or the problem-solving process in an AI-first way? It's categorized in this AI fluency bucket right now, but you're kind of getting at who's leveraging the tools they have the most and in the best way, in a strategic way, not so much in a reactive way. And then, where do you fall on the distribution at any given time?
And you might find that I can use ChatGPT just fine, but it is often reactive. If I'm a 3.3, what I now want to know is, well, what's the difference between that and a 7? And how can I move to it? How can I move up and what impact will that have? And that's the development side of it. But somebody has to first tell me I'm a 3.3 before I even know that I should do something.
Steve Smith
Well, you mentioned that you're early stage right now, and obviously you're still learning about the challenges that are out there. And you're having a lot of conversations, as you mentioned, with organizations, employers, people who are wrestling with this in the real world. As you go through your discovery into where enterprises are with AI right now, what are some of the things that are jumping out at you?
Dustin Clinard
So this business was founded, like I said, a little over a year ago by two folks who were working at a company called BrainPOP. BrainPOP is an educational K-through-12 business that developed educational videos and snippets for kids all across the country. A lot of folks who are listening may have kids who have used BrainPOP tools before. And they ended up selling the business to the Lego Foundation, which is kind of a division that takes an interest in a lot of childhood development tools.
And going through that, it's a company that's about 30 years old. So they weren't AI-native when they started, and they built a lot of the AI components and tools into what they were selling to school districts. And if you think of the dynamic of selling to school districts versus a startup dynamic or a big banking dynamic, trying to understand the level of AI readiness within those businesses evolved a lot over time.
I think where we are today, there's a lot of acceptance for the use of AI in general. And the concerns over privacy components, I think, are still there, but they're less than they were maybe a year or two ago. I think what we're getting to is the explainability of an algorithm. So if you're taking a test, you want somebody to say, "Okay, if I answer A on this and B on this and C on this, I now know how the test resulted in something." Whereas what we're doing is looking at all the words that you use to respond and running that through an algorithm that then generates the score. And that's where we're getting a lot of our questions, which is: what's the repeatability of that scoring mechanism? If I took the test with the same word answers every time, would I get the same score coming out of the backend?
And so that's where a lot of our effort and validation goes. That's probably one of the biggest questions that we get from folks. One, can you actually quantify these things? And two, how does the scoring mechanism work?
Steve Smith
So, you know, it's interesting because when you look at assessments, a big use case, at least on the talent acquisition side, is filtering people out. And on the development side, of course, it should be used to help people grow skills and ultimately grow their careers. What do you think changes when you really start to embrace an assessment as something that is a growth tool and not just a judgment tool?
Dustin Clinard
Yeah. I don't think we'll be used on the filtering-out side. If you have 500 candidates apply to a role, I don't think you apply Ignis to 500 people. I think you wait; you have better large-scale filtering mechanisms at that point.
I think where something like this becomes really useful is when you have five finalist candidates and they all look like they have comparable tech skills or they have four out of the five skills that you want them to have. So therefore, they're all 80% of the requirements that are listed in the job description, right?
If you look at job descriptions, side note: we looked at this with one of the big universities that does a lot of co-op recruiting and hiring. Only 15% of the job descriptions that they see use the word creativity. That's it. But if you look at the hiring managers, what do they want? "I want somebody who can think outside the box on these problems." So they're not using the words in the job descriptions. That's probably not a surprise to a lot of people.
Anyway, back to your question: if you get to the point where you have candidates who are going to interview with the CEO or the hiring manager and you're presenting them, that's a place where we can help them understand the strengths and skill proficiencies somebody has demonstrated in the power skills. And I can use that to probe more during an interview, or to understand how they might fit into the team we have already. I think it kind of looks like development, but in the hiring case, if that makes sense.
Steve Smith
Well, another thing you mentioned earlier is the flower part of the assessment. A lot of assessments are about abstract things, but where they potentially add a lot of value in the development process is in the visualization. I've been very impressed with the visualization that you have, because I think it not only makes it easy to understand the various points of these dynamics, but it really can accelerate the understanding and comprehension of what you're trying to measure.
What have you learned in the visualization process of building your product?
Dustin Clinard
People love reports. They want a report and they want a scorecard and they want to see a number.
Steve Smith
Right.
Dustin Clinard
Which is fine for today. I think tomorrow though, your report might change.
Now, there's the practical implication of how can you constantly... We're not doing this every single day, but the whole idea with visualizing the flower is we'll give you a snapshot from today. There's seven core power skills that we measure. They have sub-skills within them, but let's talk about the seven power skills. I get seven petals on my flower. Some are bigger, some are smaller today. There is your report. It comes with action, like what do I do now? How do I interpret this and what can I do to start to change the results on certain parts of the flower? So think about that as your report.
Now, the question is, well, if I did this again in a week or two weeks, how does that flower start to change? Maybe you only focus on one petal, but you're focusing on growing a petal. If I took it again, can I see the growth of that petal? So our visualization of the flower will be over time: you can see your flower kind of morph and change. And that's where I think it's going to be. I mean, it's a bit of a visual trick, but I think it's going to get really interesting if somebody's been working with us for a few years and they've worked on some of these skills. They'll see how their flower changes, and everybody's going to have a unique flower. No two flowers are going to be the same. And I think that's really interesting from a human connection standpoint. That's yours. It's not mine. You can't have the same flower as me.
Steve Smith
If I'm taking one of these assessments and it gives me a number for creativity, and I'm like, "Hey, I want to improve that number," what are the things that come out of it that I can jump on immediately?
Dustin Clinard
So: the explanation of creativity, the explanation of where you are, the action planning on what you can do and try. And think of the use of AI. Some of this stuff is still things we have to build, Steve, so just to be clear. But when you think about the power we have: interpret who you are, where you are, what you do, what your experience has been, what your score is, and use factors like that to generate very tailored action plans for things you can go try and build skill with. Then come back and check and see if it's actually worked, go off and iterate, and try again.
So if you look at action planning and development planning in any traditional kind of development plan, where it often fails is because I build it and then I put it in a drawer, or I file it in a folder and I don't come back and look at my development plan weekly.
Everybody says you should. I'm not sure how many people actually do. But think about the AI prompt, like the gym coach, the AI coach: "Hey, you haven't done anything here. How have you made..." This is a back-and-forth with an agent that can help keep you on track with developing the skill, the creativity skill, that you want to improve.
Some of it we still have to build, but if you just think about that concept, that's where this can start to go and bring a whole different dynamic to the skill development game.
Steve Smith
I just sort of wonder, with the pace of change that we're all having to deal with day-to-day in the workforce. On the positive side, I think the things that you're working on are encouraging, because at its best, AI has the potential to bring out what's more human in us. Say, for instance, in recruiting: there's a lot that AI can do to accelerate some of the manual parts of the job, to help narrow down to a good list of candidates. But at the end of the day, it's that human ability to interview and measure as a recruiter that I think actually becomes more valuable, if we don't lose sight of that.
And same thing with development like you're talking about. I think that's the positive side. On the negative side, where we're running a lot of risk, is the idea that you don't have the luxury of saying, "I don't know," because it seems like all human knowledge is now available at your fingertips instantaneously, so you should know everything. Sometimes being able to take a beat, pause, and actually think about it is what we need to get to the best answers in a work setting or an interpersonal setting or a leadership setting. How do you look at that?
Dustin Clinard
I look at it. I mean, I jump into tasks and start actioning.
Steve Smith
Right.
Dustin Clinard
That's historically where I go first. And I think that's a mistake now, because the actioning part can be done with AI, with a model. The part it can't do is think. Say I'm building a board presentation, or I'm building a customer pitch, or a product presentation, whatever it is: what do I want it to say, and in what order? That's my judgment call to make. Once I decide that, the building part can happen really quickly.
The tendency is to go build it really quickly, and then you step back and say, "This didn't actually tell the story I wanted. This didn't make any sense." There's a special skill for people who can do things like that. I think this opens up that potential for a lot more people to say, "What story would I like to hear?" and then push the build button, as opposed to the inverse.
Back to your recruiting comment, if I can make a comment on that, Steve. I have the luxury of being in Boston, so we have a lot of great universities. And there's a private equity firm we're talking to here that goes on campus at some of the universities and interviews 200 people for intern programs. They spend an hour with each person, then they narrow it down to about 100, then they get to the 20 candidates that they're going to make intern offers to. Think about the time it takes the recruitment team to do that.
The idea with Ignis is they can go in with 100. Think back to the vetting and screening part: they've already done the basic screening. They've gotten themselves to 200 people. They can use something like the Ignis AI power skills assessment to get to the 20 or 40 people they want to sit down and talk with for two or three hours. So you get a lot more depth on a fewer number of folks with the same or less amount of time.
And the reason I think that example is relevant is that from a hard-skill standpoint, they're all kind of the same. They're all just graduating. This happens to be an MBA course, so they have some business experience, but it would apply to that kind of early career, where somebody's skill profile is not that different and the work experience is pretty much the same. It's even more important that I get the time and the depth with people on the things that are going to make them successful at work. So there's an efficiency argument on the recruiting side that I think is pretty strong here.
Steve Smith
So one final question here for you, talking specifically about measurement. So if you were designing the modern workforce from scratch without resumes, degrees, job ladders, or anything like that, what do you think you'd be measuring first right now?
Dustin Clinard
Well, that's a great question. There is a technical skill component that you do need to measure. I don't want to look past that. I think there are a lot of measurement tools out there for technical skills that are doing a really nice job.
The part I'd look to measure, if you were starting from scratch, is how much can the team that you have see a similar vision? How do they gel on where the future may lie for them? How can they communicate with each other so that they can say, "Hey, this is where our North Star is, or these are where our North Stars might be"? And therefore, they start to get on this productivity train together of testing and iterating, moving in the same direction, or directionally the same way.
Then I think you get into things like: when things start to go off the rails, how can they collaborate or communicate with each other to say, "Hey, this is off the rails. We need to bring it back on," or, "This is off the rails. Let's just let it keep going"? These fundamental collaboration and communication skills, and the analytical thinking ability to look at something and say, "This is off the rails and here's why. Therefore, we should bring it back on, or we should let it go," whatever the decision happens to be. Building a team of people who have those types of interpersonal skills, you know they sit on top of a strong technical foundation. Either it's a technical foundation they have from degrees or history, or it's one they now have with all these other tools.
But I think that by itself isn't going to lead to success. I think it's the pieces that sit on top. That's how I think about building a team for modern business. And it is how we think about building our team here, by the way.
Steve Smith
That's great. Dustin, this has been a fantastic conversation and exactly the one I was hoping to have, so thank you very much. But any final points, observations, or comments?
Dustin Clinard
Well, one thing I'd say is, if anybody wants to learn more about the assessment side, the quantification and measurement pieces, there's a whole depth there that's probably outside my ability to explain really well.
We have a couple of folks with PhDs in learning science and physics and assessments who can get into that, because I think the belief in something like this comes down to: do I believe this can be measured, and measured properly? And do I believe an assessment can be scored, and scored well? If you believe those two fundamental things, then everything else is possible. If you don't, you've got some work to do. But that's where I think there's going to be a big trick. If anybody wants to nerd out and get into that, let me know and we can make sure that discussion happens.
But I really appreciate it, Steve.
Steve Smith
That's awesome. Dustin, thanks so much for joining us, and look forward to having you come back again sometime soon.
Dustin Clinard
Okay. Thank you.
Steve Smith
And that's a wrap on another episode of Work Tech Weekly. Thanks again to Dustin for a really thoughtful conversation.
A lot of what we talked about comes back to judgment. As AI takes on more of the execution, the hardest part of the work becomes deciding what matters, how skills get applied in context, and how people work together when the answers aren't obvious. We talked about why skills aren't static and why organizations that focus only on technical proficiency risk overlooking the human capabilities that drive real outcomes. As tools continue to accelerate, the most valuable work increasingly sits with people, in the decisions they make, the trade-offs they navigate, and the way they collaborate.
If you enjoyed this episode, make sure to subscribe to Work Tech Weekly on Apple Podcasts, Spotify, or YouTube Music. I'm Steve Smith, and I'll see you next time.