The assessment industry isn’t some scrappy startup category. It’s legacy.
The big players have been around for decades — brands, benchmarks, validation studies, generational trust. HR grew up on them. Boards sign off on them. Procurement practically hugs them. And honestly? They earned it.
For years, we measured personality traits. Benchmarked leadership styles. Stamped cognitive ability and called it portable. That worked when work was stable. Now? AI executes faster than we do. Technical skills age like an avocado. “Knowing how” is table stakes. “Knowing how to think” is leverage.
But here’s an awkward question: What if employee success in the AI era requires skills those tools were never built to measure? What if the way they measure these skills isn’t relevant for today’s workforce?
Are we sure yesterday’s instruments are tuned for this frequency? I don’t think we are.
On this episode of Work Tech Weekly, I sit down with Dustin Clinard, CEO of Ignis AI, a company building assessments that put real numbers on the skills we’ve historically waved our hands at — creativity, collaboration, AI fluency, judgment — that have come to matter more in the AI era.
“Ignis is measuring hard-to-quantify human skills,” he says. “The skills that you'd look at and say, ‘If somebody demonstrated these skills, they'd be really good with a new AI tool. They'd be really good with the team.’”
Not personality traits. Not “great culture fit.” Not “I bet he’d be great at Happy Hour karaoke.” Actual, quantifiable skills.
And Dustin isn’t afraid to say the quiet part out loud. “I think it’s very difficult to measure something like creativity. I could get a bunch of people to tell my boss that I’m really creative. That doesn’t actually mean I’m creative.”
What’s different about Ignis’ approach is its use of AI agents to assess skills by placing individuals in realistic, scenario-based simulations where leadership, collaboration, creativity, and communication actually show up under pressure — not just in self-descriptions. These conversational agents replicate human interactions (from virtual meetings to chat threads), prompting users to make tradeoffs, interpret signals, and respond in ways that reveal observable behaviors rather than aspirational traits. The result is a scalable, psychometrically validated system that turns complex “power skills” into measurable, development-ready data.*
“Where we’ve put in the effort is in looking at taking your response to a problem… normalize that and then put a score on it so that your 7.6 and my 6.6… give us a point in time,” Dustin says.
A baseline.
As AI automates more of the execution layer of work, the question isn’t “What can you do?” The money question today is “How do you think?” And, real talk: Most employers aren’t equipped to answer that yet. Ignis believes that AI has changed not only the skills we need but also how we should assess those skills.
For the last 20 years, hiring has obsessed over hard skills. Ruby on Rails. Salesforce power user. Flash developer. (Remember when that was a thing?) Clean. Testable. Filterable. Easy.
But once everyone clears the technical baseline, performance differences show up somewhere else.
“There’s a baseline for technical skills, but it’s how you apply those,” Dustin says. “It’s how I interact with you… when you look at the real decisions, that’s driving a lot of the decisions.”
How do you approach ambiguity? How do you collaborate when the plan goes sideways? How strategically do you use AI — not just reactively, but intentionally?
Those are harder to measure. So we mostly… haven’t.
Dustin’s bet is that’s no longer sustainable.
Most traditional assessments measure traits. You are “naturally collaborative.” You “tend to be analytical.” You “are just like your father.” OK, maybe not that last one. The assumption: you don’t change much.
“A characteristic-based test, you wouldn’t expect to change much over time,” he says. “But in skills, you would.”
That’s the shift. Ignis AI measures what Dustin calls “power skills” — dynamic capabilities that can strengthen or atrophy. The assessment looks like a chatbot interaction. You respond to open-ended prompts. Your language gets analyzed. You get scored. Not as a label. As a baseline.
You might be a 7.6 on creativity. Or a 3.3 on AI fluency. The number itself isn’t the point. The point is knowing where you are — so you can decide if it’s good enough for your role. It’s also a baseline to begin your journey to develop that skill.
That context piece matters. Creativity at a startup? Probably critical. Creativity building defense systems? Maybe… carefully applied. Higher isn’t always better. Fit beats abstraction.
Assessments have use cases for development and for hiring. Dustin is clear about something here, with a self-awareness that impressed me: “If you have 500 candidates apply to a role, I don't think you apply Ignis to 500 people.”
Ignis is a finalist lens. Picture this: You’ve narrowed to five candidates. All meet 80% of the job description. Technical skills are comparable. Now what? This is where qualitative gut instinct usually takes over. “I just liked her.” “He seemed sharp.”
Ignis wants to add structured signal into that moment. Not replace judgment — but sharpen it.
“We looked at [job descriptions] with one of the big universities that does a lot of co-operative recruiting and hiring. Only 15% of the job descriptions they see use the word ‘creativity.’ That's it,” Dustin says. “But if you look at the hiring managers, what do they want? ‘I want somebody who can think outside the box as it comes to these problems.’ So they're not using the words in the job descriptions... That's probably not a surprise to a lot of people.”
And in intern-heavy environments, like MBA recruiting programs evaluating 200 candidates, that depth can mean spending serious time with the right 20 instead of surface-level time with 100.
That’s efficiency, but with nuance.
Each assessment produces what Ignis calls a “talent flower.” Seven core power skills. Seven petals. Some larger, some smaller. “People love reports,” he says. “They want a report and they want a scorecard and they want to see a number.” That’s the straight-up reality.
However, that’s not really the value of what Ignis is trying to build. Over time, you can watch the flower change as you focus on development. It sounds simple. It’s actually clever because people don’t change from a single report. They change when they can see movement.
“Our visualization of the flower will be over time, you can see your flower kind of morph and change,” Dustin says. “No two flowers are going to be the same. And I think that's really interesting from a human connection standpoint.”
Most development plans die in a drawer because people don’t see them as a reflection of who they are. Without that buy-in and a credible path toward improvement? A desk drawer is exactly where that’s going. But an AI-driven, ongoing feedback loop? That’s different. If this works, it turns assessment from judgment into coaching.
I asked Dustin a closing question: If you were designing the modern workforce from scratch — no resumes, no degrees — what would you measure first? His answer wasn’t technical skills. It was alignment. Vision. Communication under stress. Analytical judgment when things go off the rails.
In other words: Can a team think together?
Most importantly, as Dustin put it: “When things start to go off the rails, how can they collaborate with each other or how can they communicate with each other to say, ‘Hey, this is off the rails. We need to bring it back on,’ or, ‘This is off the rails. Let's just let it keep going.’”
This intersection of communication and collaboration becomes more important as AI handles more of the doing. That’s when the advantage shifts to deciding what to do. It’s also where creative problem solving becomes even more valuable.
Dustin really gave me a lot to think about. I’m a big believer in assessments. I know that what I learned from them helped shape my professional growth for the better. However, I also can see clearly where the assessments that got us here aren’t going to be the assessments that get us where we need to go.
AI can now build a presentation deck in seconds — probably better than you can. But can it decide what story the deck should tell?
“The part that [AI] can't do is think, ‘What do I want it to say and in what order?’ That's my judgment call to make,” Dustin says. “Once I decide that, the building-it part can happen really quickly.”
There we are: Humans do creativity, judgment, and discernment — probably better than AI can. And that will probably be true longer than the AGI sunshine pumpers want to believe.
Execution is accelerating. The value of judgment is compounding. As tools keep getting faster, the differentiator won’t be who types the fastest. It’ll be who pauses, frames the problem correctly, and moves deliberately.
If we can measure these things and improve them — even imperfectly — that changes how we hire, coach, and build teams. That’s valuable, and, right now, most teams are guessing at who’s actually AI fluent versus who’s just prompt-adjacent.
* I highly recommend finding out more about the technical foundation of Ignis’ approach. Reach out to them and ask for the technical white paper by Ignis co-founder and CPO Yigal Rosen, Ph.D., and Vice President of AI and Data Science Ilia Ruskin, Ph.D.