Work Tech Weekly

How Hiring Turned Into a Trust Problem (And Why AI Might Actually Fix It)


“I live on the left-hand side.”

That’s what one job candidate said when Claire McTaggart asked which part of Miami they lived in. You could watch them zeroing in on Google Maps in real time, hunting for an answer that wouldn't immediately blow their cover. Left Miami. Great restaurants there, apparently. Try the Cubano.


This wasn't some isolated incident. Welcome to hiring in 2026. Deepfake interviews. Proxy test takers. Terrible liars who are geographically challenged. The funny part: Claire and her team actually wanted this result. They posted a remote engineering role to stress-test her company's fraud detection tools, and the fake applicants showed up big-time. Claire and her product manager purposefully interviewed hundreds of them. The pattern was identical: Crushes the coding assessment … and can't name a single restaurant near their supposed home address. Different faces showed up for different interview rounds. Candidates gave ten different addresses for laptop shipping. It's like the food at a bad Vegas buffet or content on LinkedIn: Sloppy and terrible, sure, but the volume was staggering.

Hiring didn't break overnight. It became a volume problem because incentives changed. Both candidates and employers started responding rationally to a system that was never built to handle this kind of scale. The result is today’s hiring reality: Massive applicant volume, rising fraud, and a trust breakdown on both sides of the hiring table.

On this episode of Work Tech Weekly, I talk with my friend Claire McTaggart, founder and CEO of SquarePeg — a company focused on one thing: Helping recruiters find real signal in a system that increasingly rewards noise. Claire works directly with talent acquisition teams at the exact intersection where AI is making things worse and where it might actually restore some sanity. She believes that hiring today is usually not a “bad people” problem. It’s people responding to a broken system exactly the way you would expect — which means it’s fixable.

“Candidates respond incredibly well to incentives,” Claire tells me. “Once the employer stops penalizing you for not having a keyword, candidates will respond, and they will start putting more authentic context because that's what's going to get rewarded.”

We cover why “full compliance” fraud detection is putting an impossible burden on HR teams, how keyword matching killed creative hiring, why every resume looks exactly the same now, and what better screening might look like if we had the courage to change the incentives. If you've ever wondered why your best hire was always someone unexpected — or why that's become nearly impossible to pull off — this conversation will answer a lot of questions.

Why Every Resume Looks Exactly The Same Now (And Why That's Your Fault)

The problem starts with volume. When you're getting 1,000 applicants per role and managing 15 open reqs, you don't have time for nuance. You can't spend hours researching whether someone's background at a struggling company actually made them grittier. You need a heuristic. Fast.

So recruiters defaulted to keywords. Four years of Java. Salesforce experience. AI on the resume somewhere. The system trained candidates exactly what to optimize for. “Every employer that we talk to says every resume looks the same,” Claire explains. “The GPT resume. Job titles are the same, keywords are the same, tech stack — everybody knows everything in terms of the required skills.”

Candidates responded rationally to the incentives. If they believe a keyword-stuffed resume will get them an interview, that’s what they’ll write. If saying you “built a widget that grew the company 25%” gets you through the filter, that exact phrase shows up on 10,000 resumes. The result is a sea of identical applications where nobody looks different, and nobody looks real.

Claire calls this the death of out-of-the-box thinking. The lawyer who became a tech founder? The creative writing major who could solve problems nobody else saw coming? You’re cooked with that sort of story on a resume. “If you aren't perfect, then you're ignored,” she says. We’ve entered an era of hyper-specialization where the only acceptable candidate is someone who's done this exact job at another company. Hence, the ChatGPT resume. It sucks. And yeah, it’s mostly hiring leaders’ fault.

The cruel irony? Everyone’s best hire was somebody who really didn’t match the job req at all. Ask any hiring manager about their favorite success story, and that’s what they will tell you. Can that kind of hire come out of the current system? No way, no how.

Fraud Isn't a Talent Problem — It's a Systems Design Problem

Here's the thing about candidate fraud: it's not just North Korean bad actors running laptop farms. That's real, and it's a problem. Claire mentions that every talent acquisition professional on a recent fraud panel had an FBI investigation happening at their company. But the bigger issue is what she calls “gray-area fraud” — the extreme embellishment that's become completely normalized.

Take AI experience. Suddenly, everyone has it. “We see people who had jobs historically and titles and work experience that had nothing to do with AI, and now they've completely fabricated that to be something else,” Claire says.

Like having a secret preference for Budweiser over the hipster-approved hyperlocal IPA that everyone tells you is hella complex, this isn't a moral failing. It's a rational response to a broken system. When employers screen for perfect keyword matches and reject anyone who doesn't check every box, candidates learn to check every box. When volume makes it impossible to investigate context, people fabricate context. The system is practically begging to be gamed. The catch when everyone games the system: None of it ultimately tells you whether a candidate is right for the opening. They might be more right than you think.

What makes this especially unfair is that the burden of detection has fallen entirely on recruiters and HR teams. “Spotting fraudulent candidates is not the expertise that you've honed over the years,” Claire points out. “That has not made you a good recruiter.” Asking them to play fraud whack-a-mole on top of everything else they're managing is absurd.

The real fix isn’t better training for HR; it’s fixing the incentives. If screening tools actually rewarded authenticity and context instead of keyword conformity, candidates would respond to that, too. Claire believes we're headed toward a world where showing how you’re different — not how you look the same — is what gets rewarded. We ain’t there yet.

Could AI Actually Make Hiring Less Biased and Restore Trust? Believe It or Not, Yes.

Here's the optimistic conclusion I didn't expect to reach: AI could make hiring more fair, not less.

The core problem with current screening is that humans don't have time to do real research. When you've got 1,000 applicants, you can't investigate what each company was doing during the time a candidate worked there. You can’t figure out if their department doubled in size. Or if they went through an IPO. Or if the products they shipped were actually AI-driven or just marketing fluff.

SquarePeg's approach is to do that research automatically. For every company a candidate worked at, the system pulls valuable context from growth data, news, product launches and team changes. “In a world where GPT resumes all look the same, SquarePeg is starting to differentiate by doing some deep research that a human would never have time to do,” Claire explains.

This opens the possibility of screening for what actually matters. Did this person help start a new department? Did they work on a distributed global team that scaled fast? Did they hit quota even though their product was ranked 20th in the category? These are signals of grit, adaptability and capability that keywords can't capture.

Claire describes working with hiring managers who can now say, “I really need somebody who figured out how to work on a global distributed team that doubled in size in a heavily regulated industry while managing people as an IC.” The system can assess applicants against those criteria even if none of it appears explicitly in their resume. That's the kind of matching that should have been possible a decade ago but never was.

The catch? This only works if AI tools are audited for bias. Claire's company gets audited monthly to make sure their scoring isn't introducing new problems. That kind of accountability needs to become standard. But if it does, we might actually get to a place where every candidate gets a fairer shot than they ever would from a seven-second resume scan.

What Happens Next: Slop vs. Better Incentives, More Context

If AI makes hiring worse five years from now, it’ll be because we drowned in slop. It’s already trivially easy to generate applications, embellish resumes and fabricate work samples. In that future, everyone’s life gets harder and nobody ends up happy — candidates, recruiters or hiring managers. Like catching a Guns N' Roses concert these days, there’s still a lot of volume, but the talent is harder to spot.

But if AI makes hiring better, it’ll be because we finally changed the incentives. “Once we see that AI screening actually is considering every person and doing the research, candidates will respond,” Claire says. “They will start putting more authentic context because that's what's going to get rewarded.”

That's the long game. Make the tools good enough that being different becomes an advantage. Make fraud detection sophisticated enough that embellishment gets caught. Make screening deep enough that keywords stop mattering. Then watch behavior shift.

I found this conversation more encouraging than I expected. Not because the problem is solved — it's not — but because Claire reframes fraud and broken hiring as systems problems instead of people problems. When the process rewards shortcuts, people take them. When recruiters are drowning, even good tools get used in blunt ways. The fix isn't scolding anyone. It's building better infrastructure and aligning the incentives.

Claire also made a point I keep thinking about. She used to work in management consulting, where case study interviews were designed to find people who were curious and could solve unfamiliar problems. They'd deliberately hire from wildly different backgrounds — auto engineering, creative writing, whatever — because diversity of thought made teams better. “That has died a little bit,” she admits. That’s made me sad because the best people I’ve worked with are these weirdos with really distinct backgrounds.

Getting back to that world requires courage. It means trusting tools that go beyond keywords. It means being willing to take a chance on someone who doesn't fit perfectly. It means remembering that your best hire was always someone you didn't expect.

AI won't fix hiring by itself. But if it gives recruiters the space to make those kinds of decisions again — to focus on judgment instead of drudgery — then maybe we're headed somewhere better. Claire thinks we are. And after this conversation, I'm inclined to believe her.
