Ep 769: Managing Risk In Talent Acquisition

The pressure on talent functions right now is intense. Budgets are tight, teams are stretched, and the mandate to do more with less has pushed many organizations to automate at speed without stopping to redesign what they were automating. These automated decisions are attracting real legal and regulatory attention: actions previously seen as simple process steps are now potentially being viewed, from a legal perspective, as consequential decisions.

At the same time, there’s a growing recognition that AI could be truly catalytic, forcing the kind of fundamental change that talent functions have needed for years.

So how do leaders navigate the constraints while seizing that opportunity?

My guest this week is Kyle Lagunas, Founder of Kyle and Co. In our conversation, Kyle unpacks what defensibility really means in practice, why talent teams need to shift from risk avoidance to risk readiness, and how AI is catalyzing long-overdue transformation.

In the interview, we discuss:

• Credibility under constraint

• Risk averse or risk avoidance?

• What does defensibility look like?

• The AI balance between execution and judgement

• Human-in-the-loop needs to be designed, not assumed.

• Are we holding machines to a higher standard than we hold humans to?

• The importance of rigour in pilot programs

• Building AI literacy

• What does the future look like?

Follow this podcast on Apple Podcasts.

Follow this podcast on Spotify.

00:00
Matt Alder
AI could catalyze the biggest transformation talent functions have ever seen. But with growing constraints and rising legal scrutiny, does getting there mean changing how we think about risk? Keep listening to find out.

00:16
Matt Alder
Support for this podcast comes from SmartRecruiters. Are you looking to supercharge your hiring? Meet Winston, SmartRecruiters' AI-powered companion. I’ve had a demo of Winston. The capabilities are extremely powerful, and it’s been crafted to elevate hiring to a whole new level. This AI sidekick goes beyond the usual assistant, handling all the time-consuming admin work so you can focus on connecting with top talent and making better hiring decisions. From screening candidates to scheduling interviews, Winston manages it all with AI precision, keeping the hiring process fast, smart, and effective. Head over to smartrecruiters.com and see how Winston can deliver superhuman results.

01:21
Matt Alder
Hi there.

01:22
Matt Alder
Welcome to episode 769 of Recruiting Future with me, Matt Alder. The pressure on talent functions right now is intense. Budgets are tight, teams are stretched, and the mandate to do more with less has pushed many organizations to automate at speed without stopping to redesign what they were automating. These automated decisions are now attracting real legal and regulatory attention. Actions previously seen as simple process steps are now potentially being viewed, from a legal perspective, as consequential decisions. At the same time, there’s a growing recognition that AI could be a true catalyst, forcing the kind of fundamental change that talent functions have needed for years. So how do leaders navigate the constraints while seizing that opportunity? My guest this week is Kyle Lagunas, founder of Kyle and Company.

02:19
Matt Alder
In our conversation, Kyle unpacks what defensibility really means in practice, why talent teams need to shift from risk avoidance to risk readiness, and how AI is catalyzing long overdue transformation.

02:33
Matt Alder
Hi, Kyle, and welcome back to the podcast.

02:36
Kyle Lagunas
Hey Matt. I am so excited to be here.

02:39
Matt Alder
Well, it’s always a pleasure to have you on the show, so it’s brilliant to have you back again. For the few people out there who may not have heard of you and your

02:48
Matt Alder
work, just please introduce yourself and tell everyone what you do.

02:51
Kyle Lagunas
I have one of the coolest jobs in the space. It’s also a weird job. I am an industry analyst, and so my job is to study innovation cycles in the world of HR, tech, and transformation. I founded my own firm last year called Kyle & Co., and we are where research meets reality. We’re really trying to put a heavy practitioner lens on all of the work that we do, to make sure that we’re tracking what’s really going on in the industry and not just some big ideas that people are cooking up in the ivory tower. So, keeping things grounded today.

03:28
Matt Alder
Absolutely. And really on that note, we’ve sort of been talking about your new HR Trends report. The world is full of predictions, particularly about AI at the moment. But what’s kind of really interesting about what’s come across from what you’re writing about is credibility under constraint. Tell us what you mean by that.

03:48
Kyle Lagunas
Oh, my goodness. So first of all, I present on AI trends all the time, right? But this makes me laugh; I have to share it. Very recently I was presenting and said, all right, today we’re going to talk about what everyone’s talking about: what is HR’s favorite two-letter word these days? Trying to make people shout “AI,” right? Somebody in the back said no. Which is true, but it’s nice to know that even in some really crowded rooms where everybody’s talking about the same thing, people are still thinking a little differently. So, constraint. I am very concerned with some of the issues that we are facing and the impact on us as an organization, and on talent and HR.

04:46
Kyle Lagunas
It has become apparent to me that as I look across the other functions of the business, their pace of change, their pace of adoption, their success rates for integrating AI innovations, they are ahead of us. And I am concerned that some of the constraints that we’re stuck with are potentially holding us back and eroding some of the credibility that we need to be good partners to the business. We need to be able to meet the business where they are right now. We need a seat at the table. Everybody used to talk about a seat at the table in the C-suite or the ELT, for HR, in general, to be seen as an important part of the business.

05:42
Kyle Lagunas
The seat at the table that we need now is a seat at these AI councils, where we are deciding how the organization is going to navigate AI, what we’re going to prioritize, and what our plan is for impact to our customer experience and to our workers. And I am just not sure that every HR organization is ready to sit at that table. So that’s part of what I’m considering. The other part is, of course, the catchphrase that we’ve all heard for the last 18 months: do more with less. And that’s where HR is at too. We’re not seen as a business driver; we are a cost center. And so we are constrained in what we can do with AI just because of the resources that we have.

06:33
Kyle Lagunas
But I think it’s created a juxtaposition where we are not just being asked to do more with less, we’re being asked to do better work than we’ve ever done before: higher output and higher-quality output, not just a higher volume of output. And that juxtaposition of trends is really hard to reconcile. So as I look at what we should be expecting this year, I’m expecting some bumpy road.

07:00
Matt Alder
You mentioned some of the constraints there, around being seen as delivering something rather than being seen as strategic. What else is it that’s making HR and TA not ready for this? Is it mindset? What else is holding everyone back?

07:18
Kyle Lagunas
Yeah, I mean, a big part of it is just the complexity of the use cases that we have. You sit in the UK; I’m sure you are familiar with the EU AI Act, and major bodies of legislation really are considering HR and employment use cases as highly sensitive and categorizing them as high risk. Well, you know as well as I do that automating interview scheduling, which is a very popular AI project, is actually not very high risk in terms of amplifying employer bias in decision making. It’s not really high risk in the way most of us think about it. But there are other cases, matching and scoring algorithms and black-box AI around how we evaluate inbound applications, that are a little more nuanced. So I think that’s partly what we’re up against.

08:22
Kyle Lagunas
The challenge being, historically, we have as a function been risk avoidant. People say risk averse, but that’s not it; we’ve been risk avoidant. The nuance there is that to be risk averse, you do need to look at what risk factors are really at play. You need to calculate what actual exposure you have. You need to come up with plans to mitigate scenarios where you might have direct exposure, and manage that risk. Because we have been so heavily avoidant, I don’t think we really have those muscles, or rather, we might have those muscles isolated in a couple of key players on our teams. We don’t have that mindset across our operating cultures and across the profession. So that’s another hurdle that we have to clear. And we talk about transformation.

09:28
Kyle Lagunas
I do think that AI, not just the application of the technology but the exploration of this new pillar of innovation, will be truly catalytic for us. I am not being colloquial when I say that. It really will catalyze a lot of much-overdue transformation in who we are and how we operate.

09:51
Matt Alder
Yeah, I completely agree with you. There are some really interesting points there, and it leads on to something else that you talk about a lot, which is defensibility as a core capability. We know that we’re already seeing lawsuits emerging around AI, and you mentioned some of the legislation and the element of dealing with risk. So what do you mean by defensibility, and what does it look like in practice?

10:16
Kyle Lagunas
Oh, this is such a fascinating conversation, because we have to look backwards to look forwards here. We have a lot that we can learn from. I think that we were preoccupied: we had these cost-cutting measures and these budgetary constraints, so we really were primarily focused on utilizing AI to scale our existing operations without adding cost. What that did, Matt, was amplify, increase, the number and volume of processes being run through tech. Most of us didn’t have the resources, or the corporate calories as I call them, to burn on optimizing those processes before we automated them. So we really just scaled operations without really changing anything. Do you remember when we moved to cloud? We did the same thing, baby. We just lifted and shifted.

11:16
Kyle Lagunas
We put the same thing in a different place. And I do think that because the AI catalyst is occurring, that behavior has repeated. The issue is that AI really is very good at scaling processing power, so we have kept the same things and moved them to a whole other level of scale. We have, perhaps, a reliance on matching and scoring tools just to help us cut through inbound applications. I think we have been making a lot of circular, surface-level decisions; necessity drove that, but we have been making a lot of surface-level decisions just to get through the work that we have to get done. And I don’t know that we necessarily looked at how we had trained those AI matching and scoring algorithms.

12:16
Kyle Lagunas
I actually implemented one when I was on the practitioner side, so I have some experience with this. Many of them have a core algorithm that governs how they learn: what data artifacts to look at, how to contextualize them, what processes it’s trying to support, et cetera. But most of what it does where it’s deployed for you is observe your recruiting practices. It’s going to look at the kinds of universities that you source from, the kinds of backgrounds that are successful for each role. So it’s learning from our past processes. And when we implemented these tools to help us make decisions faster based on what has historically, and I’m doing air quotes here, “worked,” we really did not do the actual solution design.

13:11
Kyle Lagunas
We haven’t actually mined our processes and our decision-making structures before scaling them. So now this defensibility question is coming, and welcome, everyone, to the mind of an analyst; we are long-winded, deep thinkers. I would say that defensibility is very much in question, because we’re now on the other side of this push for scale, this push for efficiency, and we’re recognizing we actually just automated a bunch of stuff, man. We really have a large volume of automated decisions being made, and a decision is not necessarily a change to a candidate’s status. What’s coming up in these courts is that some of these things are being considered decisions.

14:00
Kyle Lagunas
We need to make sure that as we utilize AI to scale and to conduct higher volumes of work per person, the processes, the inputs, and the outputs are defensible. We need to do that mining now, to look at the psychology of how business is getting done, how people processes are being managed. And coming back to the top, I think this is catalyzing some long-overdue change. I welcome it. Defensibility is really important. It is us actually leaning into that risk management side. We need to understand what our real exposure is, and that is driving, I think, some really good work around how we decide who moves forward, who’s a fit, who gets promoted, et cetera.

14:58
Matt Alder
Yeah, that’s a fascinating point, that some of the things we thought were just automation, process, or execution can actually be seen as decisions. And I suppose, leading on from that, in terms of restructuring TA teams and thinking about roles, all this automation is really posing the question of what’s execution and what’s judgment. If you’re a head of TA and you’re restructuring a team, what does that balance look like? How does this play out? What needs to change?

15:34
Kyle Lagunas
Well, first I need to document these things. That’s part of this too: it is one thing to say that we are using these inputs as just a source of data; it is another to build into your documented workflows and processes what human-in-the-loop actually looks like. Matt, it needs to be designed and not assumed. Human-in-the-loop is a solution design; it is not a tagline. So that’s what I would start to lean into if I’m back in that leader role: I’m leaning heavily on my ops team to help me take what are good ideas and operationalize them.

16:17
Kyle Lagunas
That is the only way we can march forward with this change with confidence: putting new building blocks in place and training people on how these things can and can’t be used. So if I’m using matching and scoring tools to help us prioritize our inbound, I’m going to have documentation that I expect every candidate is still reviewed. I operate in North America, in the U.S., which needs to be EEOC-compliant; we need to review every single application. And I am going to have documentation showing that I’m regularly running my own independent audit of any potential adverse impact that these tools are having. These governing bodies actually are really happy to see some good-faith effort.

17:12
Kyle Lagunas
Especially because we don’t have a ton of AI-specific regulation right now, pervasively they are okay to watch the private sector do its best. So I definitely want to have third-party audits. You’d better believe that my QBRs with my vendors are not about vanity metrics anymore; they’re about responsible innovation. If you want to bring me a new feature, that’s another conversation, and it’s not in our QBR. We’re going to need to engage my infosec team, my compliance team, my legal team. We’re not just going to turn things on willy-nilly now; I’m going to have a lot more structure. And I actually do see this playing out.

18:00
Kyle Lagunas
It’s not just Kyle’s best idea, because again we’re trying to ground these things in reality. I am not seeing a lot of headcount being added into sourcing, recruiting, and coordination; I am seeing headcount being added into the specialized core programs teams. And I do think this is where we look across other major tech innovations; a lot of people point to the Industrial Revolution. Specialized headcount is something we should anticipate. I know I would be making a strong, highly defensible business case to have people come in and help me build out much more rigorous programs around how we adopt, implement, and monitor the tools that we have. I think that’s going to be a huge part of it.

18:48
Matt Alder
One thing that really strikes me from all of this is we’ve moved to a world where machines are being held to a much higher standard than humans ever were in recruiting. Why is that or what are the implications of that? What does that tell us about what’s going on?

19:04
Kyle Lagunas
We have. You’re spot on. If we just stay with the matching and scoring algorithm for a moment: in early-career recruiting, you’re going to be handed a stack of resumes and a stack of job descriptions, and you are going to be weeding through those, screening and qualifying them yourself. Can you imagine if early in your career you got something wrong and you just got fired for putting the wrong candidate forward, never learning from it? And yet, literally just this morning, I had a conversation with somebody who was having this challenge with recruiters, where the algorithm is grading somebody an A. And this is so nuanced that it’s stupid.

19:50
Kyle Lagunas
The candidate may really be a B or a C, you know, and it’s like, well okay, you’ve done your job then, recruiter: you took that recommendation and then you qualified it. But instead it’s eroding trust in the tool. They’re saying, this is wrong, AI doesn’t work. And it’s like, you don’t understand how it works; it does need that feedback loop to get better. I’ve also seen that early on with these kinds of tools, people were very wary of using them because they were worried about them automating bias, having implicit bias. Well, the way most of these are trained is on your behaviors. It’s going to take the black box of your hiring managers’ and your recruiters’ brains, which are heavily human-biased.

20:40
Kyle Lagunas
And it’s going to learn from them, and we’re going to see those hiring patterns actually play out in a completely documented way. “Well, that’s very scary. I’m not going to buy the AI. I’m also not going to take the time to go through and try to mitigate any of that human bias. I’m just not going to turn that AI feature on.” And so it’s kind of head-in-the-sand. It’s a difficult psychological situation to understand.

21:12
Matt Alder
I did some rough calculations. I worked out that over a 20-year career, a recruiter might review something like 250,000 resumes, and people can argue with me whether it should be less or more. It takes an AI less than a minute to review and initially rank 250,000 resumes, and the AI will read every single word rather than spending eight seconds on them. I think there’s a bit there about understanding the capability of the machines, the nuance and the capabilities of humans, and having the right thing in the right place. Which comes back to what you said about the humans then looking at that and making a judgment.

21:52
Kyle Lagunas
It’s a conundrum, isn’t it? And that’s where, if we look at where we started with AI, and by the way, these cycles are getting so small; we’re not talking about the course of the last decade, we’re talking about the course of the last 10 months, not 10 years. It is important for us to be learning quickly and adapting. So I think having this conversation now is a good sign that we are hoping and trying to get better. I definitely do see the expectation out of the ELT changing: CEOs are pushing less of “we’re going all in on AI” and more of “we need to be continuously experimenting and learning from what AI can and can’t do for us.” And it’s changing how HR and talent leadership teams are approaching this.

22:47
Kyle Lagunas
I don’t know if you’ve seen it, but I am tracking, and happy to see, a lot more teams building out structured, robust pilot programs for AI. Pilots used to really just be a word for free seats to try something and see if people like it, and I’m doing air quotes again. Now everyone is taking these pilot programs super seriously. What is the problem that we’re trying to solve? What is the solution that we’re looking for? How do we measure the impact of this solution on that problem? And if that impact was enough, then the question is, can we scale it? That level of rigor arguably should have been there all the time, but neither here nor there; it’s here now. And I am happy to see it.

23:43
Matt Alder
To sort of pull this together as a final question, how do you see this playing out over the next two or three years? Where does the maturity come from? What’s going to happen?

23:55
Kyle Lagunas
The first thing that has to happen is we need to build AI literacy across the board. In the past, it has been okay for us to have a central SWAT team that leads the evaluation, design, and implementation of a new tool or a new system. Now we need everybody on the same page. Not everybody in the entire TA team needs to be a part of solution design; that’s not what I’m saying. But everyone has a role to play in AI innovation and the adoption of new AI-enabled processes and ways of working. This is not a tool; this is a way of life now. So we need to be literate. And there’s a difference, by the way, between literacy and expertise.

24:49
Kyle Lagunas
We don’t need to be AI masters, but we all need to understand enough to spell LLM and tell you what it means. So that’s one huge change that I anticipate seeing. The other, as we’ve talked about, is that we need to lean into risk. We need to not be risk avoidant; we need to be risk ready. And I think that is a good thing that helps us build more credibility with our partners in the business. The red-tape functions, by the way, are stressed to the max. They want better partnership from us. So coming into conversations where we know they are going to bring up objections, and being there to talk through things, that’s going to be huge. And that used to be a drag.

25:41
Kyle Lagunas
Now I think that is an excellent alignment exercise, an excellent readiness exercise. I can’t believe I’m saying that; I used to hate that stuff, but it’s so important. So literacy and risk readiness, I think, are big. And then I really think we need to be focused on quality. Quality of hire has always been a top-three priority for CEOs and for TA and HR; it’s always been up there. But as we look at the scale that AI is bringing to our processing power, and the data and insight that AI is able to bring to us at every step of every people process, I think we need to be obsessed with driving quality and not just volume.

26:30
Kyle Lagunas
And I think that’ll help us to discern what works and what doesn’t, and help us to navigate the noise; it’s very noisy out there. It is not just “what’s the cool thing that we could do”; it’s constantly obsessing over how we can make higher-quality decisions and deliver higher-quality outputs consistently. What we should scale is quality, not volume. We’re in chapter two now, and I think now is a great time for us to shift that narrative.

27:00
Matt Alder
Kyle, thank you very much for joining me.

27:03
Matt Alder
My thanks to Kyle. You can follow this podcast on Apple Podcasts, on Spotify, or wherever you listen to your podcasts. You can search all the past episodes at recruitingfuture.com. On that site, you can also subscribe to our weekly newsletter, Recruiting Future Feast, and get the inside track on everything that’s coming up on the show. Thanks very much for listening. I’ll be back next time, and I hope you’ll join me.
