The adoption of AI in talent acquisition continues to accelerate, promising efficiencies and new possibilities for hiring. However, as AI tools become central to recruiting processes, they bring significant challenges, particularly around bias, compliance, and trust. Without clear oversight, these systems risk entrenching inequalities rather than addressing them.
So, how can talent acquisition leaders ensure that AI supports fairer hiring while safeguarding compliance and trust? And what role does auditing play in this critical process?
My guest this week is Jeff Pole, Co-founder and CEO of Warden AI, a company specializing in AI auditing for HR and TA technologies. In our conversation, Jeff shares his insights on the importance of auditing AI systems, the emerging regulatory landscape, and how talent acquisition leaders can better understand and navigate the risks and opportunities of AI-powered hiring.
In the interview, we discuss:
• Is the current pace of innovation in AI set to continue?
• What are the main risks?
• Regulation, Legislation, and Ethics
• Is there a difference between AI influencing a hiring decision and AI making a hiring decision?
• How AI can be less biased and fairer than humans
• Holding machines to a higher standard than humans
• Shining a light on the AI systems used in recruiting and TA technology
• Continuous testing and monitoring
• How widespread is the issue, and how much AI bias has actually been found?
• What should TA leaders be considering when assessing AI solutions?
• What does the future look like? How will AI change talent acquisition in the long term?
Follow this podcast on Apple Podcasts.
Follow this podcast on Spotify.
Matt Alder [00:00:00]:
Support for this podcast comes from SmartRecruiters. Are you looking to supercharge your hiring? Meet Winston, SmartRecruiters' AI-powered companion. I've had a demo of Winston. The capabilities are extremely powerful, and it's been crafted to elevate hiring to a whole new level. This AI sidekick goes beyond the usual assistant, handling all the time-consuming admin work so you can focus on connecting with top talent and making better hiring decisions. From screening candidates to scheduling interviews, Winston manages it all with AI precision, keeping the hiring process fast, smart, and effective. Head over to smartrecruiters.com and see how Winston can deliver superhuman results.
Matt Alder [00:01:00]:
Hi there. Welcome to episode 661 of Recruiting Future with me, Matt Alder. The adoption of AI in talent acquisition continues to accelerate, promising efficiencies and new possibilities for hiring. However, as AI tools become more central to recruiting processes, they bring significant challenges, particularly around bias, compliance and trust. Without clear oversight, these systems risk entrenching inequalities rather than addressing them. So how can talent acquisition leaders ensure that AI supports fairer hiring while safeguarding compliance and trust? And what role does auditing play in this critical process? My guest this week is Jeff Pole, co-founder and CEO of Warden AI, a company specializing in AI auditing for HR and TA technologies. In our conversation, Jeff shares his insights on the importance of auditing AI systems, the emerging regulatory landscape, and how talent acquisition leaders can better understand and navigate the risks and opportunities of AI-powered hiring. Hi Jeff, and welcome to the podcast.
Jeff Pole [00:02:17]:
Hi Matt, glad to be here. Thanks for having me.
Matt Alder [00:02:20]:
An absolute pleasure to have you on the show. Please could you introduce yourself and tell everyone what you do?
Jeff Pole [00:02:26]:
So I am Jeff Pole. I'm the co-founder and CEO of a startup called Warden AI, and we work within HR and talent technology to help overcome some of the adoption barriers facing the deployment of AI solutions, particularly focusing on AI bias and fairness. Before that, my background over the last decade is in AI and machine learning — an engineer by trade originally — and I've worked in a number of different AI companies, particularly focusing on regulation technology, which is what's brought me to this new space of AI trust and AI compliance.
Matt Alder [00:03:06]:
Fantastic. So we'll dive into trust, compliance, risk, regulation and all those kinds of things around AI as we get into the conversation. Before we do, though, I'm just really interested: as someone who's been working with AI for a really long time, sort of all of your career as far as I can see, have you been surprised by how quickly things have shifted and moved forward in the last 18 months to two years or so?
Jeff Pole [00:03:30]:
It's a really interesting one, actually. In some ways, yes; in some ways, no. So, yeah, as I mentioned, I've been working in the industry for over a decade, always in AI-driven products in one way or another. And I've always been optimistic about AI, always been excited about the future, and a big believer in the possibilities. But I must say, as a professional working in the industry over the last decade, it's mostly been a bit disappointing, right? A lot of people talk about AI, but the level of AI actually being used has been quite minimal. And there's a joke in the industry that you might have come across: if an AI product actually works, then it's actually humans behind it. It might masquerade as AI or automation, but really there's a team offshore, maybe in India, with lots of people working through some manual process quickly to make it look like an AI system. I've definitely worked at companies that have done that in the last decade, almost to the point that in the last few years I was getting a bit disillusioned and thinking, is this really ever going to happen? And so the advent of large language models getting to the quality that they did two or three years ago, with ChatGPT most notably, has been a bit of a reawakening: no, no, this is for real. AI has huge potential and is actually going to make strides. So I'm kind of surprised in that sense, but obviously, big picture, I'm a big believer in the technology.
Matt Alder [00:04:50]:
Yeah, absolute proof that it is a real thing in business, and it just has this huge potential. And do you think that the current pace of change, which is quite phenomenal, is set to continue in terms of the innovation around AI and its development?
Jeff Pole [00:05:07]:
I would say yes. In a macro sense, definitely yes. If you look at technology diffusion, or technology adoption over time — whether looking back at the adoption of the telephone, or cell phones more recently, and now large language models — with every generation, the time it takes for the technology to be adopted, whether that's measured as the time it takes for the average American home to have it or the time to reach 100 million users, is much faster than before. Right? Telephones took around 70 years, cell phones 14 years, and ChatGPT got to 100 million users in two months, which made it the fastest-growing technology ever. So in that sense, it's always going to get faster, and the next breakthrough in five or ten years will be even faster than that. In a more micro sense, there's a wide range of opinion on this. Some believe that the actual level of intelligence, if you will, that these particular large language models are developing is slowing down a little compared to the last few years. So we might see that the sheer incremental improvements won't be quite as ground-shattering as the leaps of the last few years. But what I think we will see is that the cost will continue reducing, because the more something is built, the more people get savvy about reducing the cost of building it. So these models will get cheaper to build, and as costs go down, adoption increases. And really, it's adoption that's been impressive — how many people have jumped on this and tried to get involved. But actually, I think that's in its infancy, and adoption — which is really what innovation is, right? Actually being used in ways that are helpful to the world — is what we'll see increasing and accelerating over the next few years, the next decade.
Matt Alder [00:06:50]:
No, absolutely. And I suppose that brings us on to the first question about this, really. For all the talent acquisition leaders out there, some people are very enthusiastic in their adoption of this, some are a bit more cautious, and everyone is having to adopt it by default, because large language models and new ways of working with AI are being built into pretty much every product that people use in the sector. What are the risks of all of this? What do people need to be aware of when it comes to this AI revolution?
Jeff Pole [00:07:24]:
I guess huge opportunities, right? And I think with any big opportunity and reward, there's always great risk as well. So it's not a surprise that there's a lot of risk involved too, and there's a lot to cover. One is, I would say, the data side of things: data privacy violations, protecting copyright, protecting an employer organization's data as they interact with and deploy different AI solutions. Data concerns have been around for a while, so as a society we're quite mature, I would say, in being aware of them and conscious of them, and there are lots of regulations and frameworks around that are just being adapted to address data risks within any AI product as well. So that's a big one. In terms of someone buying an AI product, it's probably the biggest concern they have: what about the data? But what's more interesting, at least to me, is that there are new risks — risks that maybe didn't really exist in traditional software before — that are now apparent with artificial intelligence technology. That really relates to the fact that AI is often about making decisions, or influencing decisions, and taking them away from humans. So if AI is being used, for example, to screen resumes and speed up that process — because it's hard for humans to go and look at every one of the 10,000 resumes and applications that have come in — then that leads to new risks about why it recommends or scores certain candidates a certain way, where of course bias and fairness in the context of recruitment is huge. So that's a big one. And more generally, not bias per se: can we even understand why it's made that decision? Can it explain itself? That leads to a lot of knock-on risks.
I would say there's liability potential, for example. If an organization can't justify why an AI reached a certain conclusion about a person who applied, then there's a potential discrimination liability claim that could be made on the back of that. And then there are, of course, emerging AI regulations — the EU AI Act, New York City's local law, and more recently Colorado's — and we'll get onto more of that, I'm sure. There are then compliance requirements, and the sheer risk of not being compliant is itself a risk that's coming. And the last one, perhaps the most important, is the candidate experience: aside from any direct legal or monetary repercussions, how do all these increasing levels of AI affect candidates — their experience, their perception of the organization, and the reputation it has? So I think that one's also quite important to keep in mind.
Matt Alder [00:10:23]:
Yeah, absolutely. And just to say at this point, there are huge benefits to using AI, but it is really important that people understand what's going on. I suppose, to dig down into some of that in a bit more detail, let's start with regulations. What's the current state of play across Europe, the UK and the US with regulations around AI?
Jeff Pole [00:10:45]:
I often like to start talking about AI regulation by talking about regulations that are not about AI but are still relevant. Let's not forget that, for example, in the context of bias and discrimination, plenty of regulation already exists to do with anti-discrimination, whether that's the Civil Rights Act in the US or the Equality Act in the UK, and elsewhere. Those still apply equally to any AI-powered system or a human-powered system. It doesn't matter: it's illegal to discriminate in any way, and if AI is used to do so, then that's against the law and has ramifications. So in a sense, we already have a lot of the regulation we need, because we've regulated humans for a long time, and AI is in many ways just replicating or supporting those processes. But obviously there are a growing number of AI-specific regulations, which is important too. The key one is in Europe: the EU has passed the AI Act, a comprehensive set of regulations around AI, which has been signed and has a slightly staggered rollout in terms of when different scenarios take effect. By and large, for most people in this context of recruitment, the effects will come in about two years, August 2026 if I'm not mistaken, when any system categorized as high risk — which really includes any AI used in an employment opportunity context — has to undergo fairly stringent requirements to even be allowed onto the market in the first place. So that's one that's pretty stringent, and we can go into more detail on it. The UK hasn't regulated anything yet; it has principles, and there's talk that the government might introduce a regulation soon. And then in the US, what we're seeing is largely the lack of a strong stance at the federal level, which probably will stay that way for a while, but state-specific regulation instead. So states are kind of encouraged to take their own stance on this.
And we're seeing a growing number of states passing their own guidance, or indeed regulation. So we have, for example, Colorado, which quite recently passed the so-called AI Consumer Protection Bill, SB205, targeting recruitment and employment as one of a number of high-risk use cases for AI, with a number of fairly stringent requirements. That's an interesting one to talk more about. New York City is the only local one — just the city itself has a local regulation, which requires bias audits to be done on any AI used in some form of decision-making around candidates and applications. And that's already live, and has been for, I think, almost two years now. And then there are a number of other states looking at similar things: New York State itself and New Jersey have bills that have been proposed which are not dissimilar to the New York City one. There are obviously other states too, but that's a key summary of the ones targeting employment and recruitment at the moment. California has quite a lot on AI; not many of the bills are directly related to employment and recruitment yet, but there are a number of regulations about data privacy and the like.
Matt Alder [00:14:09]:
Yeah, I think it's interesting as well. This is obviously not just about regulation; it's also about ethics and fairness, which I think we'll probably get into in a minute. But before we do that, just one really specific question. When I talk to vendors who are putting AI into their systems, or to AI startups, and we discuss regulation, there's always this comment that gets made: obviously, our AI doesn't make any decisions when it comes to hiring. However, as you said earlier in the conversation, it does influence that decision in terms of the matching and all that kind of stuff. How does that sit with everything? Because it's not making a final decision, but it is very much influencing the direction that final decision goes in.
Jeff Pole [00:14:54]:
Yeah, it's a really good question. And certainly, as you mentioned, everyone's very keen to say, of course, humans are very involved — partly because the risks are so big, partly because we don't want to scare people and say you don't need humans anymore, AI will do all of this. And obviously AI is not good enough yet to do everything like that. But I think the key thing, and where most of the regulations are going with this, is that if it's influencing or materially impacting the ultimate human decision, which is still taken at the end, then it's considered high risk. Right? The word decision is maybe not used in many of the regulations, although the New York City one is an example that does use it. I think the other ones are a bit more nuanced, and it's more like: if it's involved in this kind of use case, employment opportunity, then it's high risk. The standards are still emerging to be more specific, but I don't think they'll necessarily quibble too much about exactly what counts. They'll be more inclusive: anything that influences or could influence is potentially high risk, so it'll be treated as high risk, and you therefore have to follow the more stringent regulation. I think that's what we're seeing.
Matt Alder [00:15:56]:
Yeah. And it’s not that it’s outlawed from doing this, it’s just that there has to be a level of transparency there, which I suppose is the perfect opportunity to ask you a bit more about Warden AI. You know, what you do and why you do it.
Jeff Pole [00:16:10]:
Yeah. So our mission at Warden is to drive the fair and compliant adoption of AI systems. We've talked a lot about risks, but as we mentioned, there's a lot of opportunity. And so we're really based on the premise that AI is not only fast and cost-efficient, but can also be more ethical and more fair than, for example, people-led processes. But it's a double-edged sword, right? It could entrench and worsen the bias that we already see in recruitment, or it can actually overcome or mitigate it and do less of it. And so our role in this is to shine a light on the AI systems used in recruitment, and HR more generally, and help the more ethical ones flourish the most, if that makes sense.
Matt Alder [00:17:01]:
Yeah, absolutely. And so you kind of audit some of the tools that are out there, don’t you?
Jeff Pole [00:17:05]:
Exactly. So, a bit more concretely, we come in as a third party and do what we call AI assurance, or AI auditing. Our product becomes embedded into an AI system — let's imagine a resume screening system, for example; it's quite a common use case we work with — and our product will do continuous testing and monitoring of how that AI actually behaves with respect to some of these risks and trust issues, particularly focusing on bias at the moment. So we'll test how biased it is, help to mitigate those issues and be compliant with these emerging regulations, and then ultimately, assuming everything is well — and we help get there — give confidence to the stakeholders or customers of the system that they can trust it and actually deploy it with confidence.
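Jeff doesn't detail Warden's metrics here, but a common statistic in hiring bias audits — and the one New York City's Local Law 144 requires reporting, for instance — is the impact ratio: each group's selection rate divided by the rate of the most-selected group, often checked against the EEOC's four-fifths rule of thumb. Here's a minimal sketch of that calculation; the function name, groups, and outcome data are all hypothetical, not Warden's actual methodology:

```python
from collections import defaultdict

def impact_ratios(records):
    """Compute per-group impact ratios from (group, selected) pairs.

    `selected` is True if the AI advanced or recommended the candidate.
    Each group's selection rate is divided by the highest group's rate,
    so the most-selected group always has a ratio of 1.0.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit sample: which candidates the system advanced.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

ratios = impact_ratios(outcomes)
for group, ratio in sorted(ratios.items()):
    # Four-fifths rule of thumb: a ratio below 0.8 warrants review.
    flag = "OK" if ratio >= 0.8 else "REVIEW"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this toy sample, group_a is selected 75% of the time and group_b 50%, so group_b's impact ratio falls below the 0.8 threshold and gets flagged. A real audit would, of course, also involve sample-size checks and significance testing before drawing any conclusion.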
Matt Alder [00:17:59]:
When it comes to bias, what particular types of bias are you seeing, or checking for, in AI systems? Where are the biggest risks? What's the biggest focus there?
Jeff Pole [00:18:41]:
Yeah, it's a good question, and I'm always careful about saying which forms or categories are more important than others. Right? And of course, for the technical people, bias is inevitable in a system; it's not necessarily a bad thing. What we're really talking about is unwanted or harmful bias — bias against, for example, certain sexes or genders, races, ages and others. In terms of our offering, we focused originally on sex and gender and on race and ethnicity, and we actually just this week announced age bias assurance that we've added to our system. So we now look at examining whether and how these AI systems treat different age groups. And we've also got prototypes launching soon for disability, veteran status, religion and sexual orientation, and are otherwise working our way through the list of protected characteristics, which varies slightly from regulation to regulation but covers a lot of those types of issues.
Matt Alder [00:19:51]:
And I suppose this might be a question that is too difficult to answer, or that you might not want to answer, but there have obviously been lots of anecdotal stories about biased AI. There are a few that float around when it comes to AI making decisions about recruitment, and some of them have never actually been confirmed as having happened. Are you actually finding much bias? What level of risk is there? What's the true picture when it comes to these AI systems? Are they full of bias and rigged to come out with certain results — rigged is the wrong word, but you know what I mean. What's the actual state of play?
Jeff Pole [00:20:34]:
Yeah, so obviously it varies a lot. Right? And there are some confirmed cases. The EEOC's first-ever AI bias lawsuit, I think, was around age discrimination — a company that was found to be fairly systematically discriminating on age using an automated tool. So it's a definite, real issue. Examples abound of it being real, and hardly a month goes by — probably more often than that — without a new study showing how ChatGPT or a similar language model has been demonstrated to be biased by some measurement. It's definitely a real issue, and obviously there's quite a lot of news about it, and rightly so. That said, depending on the system itself, we're actually seeing that a lot of the time it's good, and arguably more fair than humans — even though that's a tricky thing to measure, and something we're working on. The background is basically benchmarking what the status quo is in this industry — what is a standard level of bias, or whatever attribute we're looking at — and then asking how the AI compares to that. So even if it's not perfect, if it's at least significantly better than the average human process, which we know is far from great, then that's an improvement, even if there's still something there. So, yeah, it varies, but overall I would say the news stories probably make it sound like more of something to be deeply worried about than it is. It's something you should be concerned about, but it can be overcome, and there are various techniques and tools out there that help with this issue. So I don't think it should fundamentally be a reason not to go ahead, but it should definitely be on the mind of anyone who's looking at AI, particularly in this context — and obviously it's key to our mission and why we exist.
Matt Alder [00:22:22]:
I mean, I guess you make a really interesting point there, because we're judging AI, looking for transparency and all of this kind of stuff, in a way that we don't do with human decision-making at that granular level. So I think that is a really important point: is this a much fairer and better way, even if it's not perfect?
Jeff Pole [00:22:45]:
Exactly, that's it. We do hold machines to higher standards than we do humans in many cases, which I think is for good reason. In many ways, the danger of this going wrong is that we don't just have a mixture of levels of unconscious or conscious bias in people; we then potentially codify, at scale, a system that entrenches it, which is pretty scary when you think of the scale involved and of bias being almost unintentionally codified into the training data or whatever the AI's been built on. So that risk is great. But I think the reality at the moment is that in many cases, the standard we're aiming for with the machines is actually higher than the one we hold humans to. And maybe it's okay if it's not perfect but is a lot better than the humans are. Anyway, in terms of our approach, we don't necessarily make hard judgments on this, but we have the data and tools to assess it as a third party and then present the findings. So these are the third party's findings, and it's ongoing, right? Here are the ongoing reports of the findings we've had, which can be shared with whomever our customer wants to share them with, to give transparency to other people. But it's a case of: don't just take our word for it. We're not marking our own homework; we've been marked by someone else, and here's what you can see — rather than us saying there is definitely, absolutely no bias whatsoever. No one can make that statement.
Matt Alder [00:24:09]:
Yeah, no, absolutely. And you've mentioned a few things around this already, but maybe as a bit of a summary for us: what are the things that heads of talent acquisition should be considering when they're either assessing AI in a current solution, or looking at buying or even creating an alternative solution?
Jeff Pole [00:24:27]:
Good question. I think the main thing is going to be, first and foremost, how accurate the system is, how valuable it is. Is it actually helping? Is it useful? Because a lot of people are jumping on the AI bandwagon, as it were, and it's not necessarily actually helping, or valuable, or accurate in the way it works. So I still put that as number one, and obviously only you, your business and your needs can truly judge whether it's helpful. But then the related concerns would be those other things we talked about: of course, is it fair? Always ask the solution provider how they measure, mitigate or monitor for bias in particular. And then I think what will be increasingly important is to ask about compliance with AI regulations. As I've mentioned, almost all of them are still prospective — they're emerging rather than here now — but time flies. So if you're picking a provider now, you want one that's hopefully going to be compliant and is taking that seriously for when those regulations do kick in in the next year or two.
Matt Alder [00:25:41]:
As a final question: obviously we've talked about risks and things like that, but let's talk about potential. What does the future look like? How do you think AI might change talent acquisition in the long term?
Jeff Pole [00:25:53]:
Super interesting. I love future-gazing on this — I don't know how good my predictions will be. But I think the challenge in talent acquisition, at least as I see it working with the industry, is really around scale and volume. Whether that's how, as a recruiter going outbound, to reach enough people; how to evaluate all the inbound applicants if you have a volume of them; or even on the candidate side — candidates want to apply to as many jobs as they can, and are using AI to do that. Everywhere you look, it's a scale game. I wonder if we'll move to a world where AI is used more — for example, maybe an AI system does do an initial interview, because it can do that scalably. Right? Arguably, the reason we have things like resumes and so on is that we don't have time to actually interview everybody, so we look at something quick instead. But AI does have time to do that. And so I wonder if what will happen is that we'll have more meaningful touchpoints for maybe all candidates, or more candidates, but with an AI to begin with — before, of course, you get to the human stage later; that's not going anywhere. That might actually reduce the phenomenon of candidates spam-applying to everyone at scale: there are only so many interviews with an AI that a candidate can actually complete, and therefore that'll almost solve the scale problem a little bit. They can target the ten companies they actually want to apply to, have a first-stage interview with an AI — which might sound crazy right now — and then have a meaningful assessment of their actual fit, rather than just a bit of paper that summarises their experience. And I wonder if that, or something along those lines, is where it goes. Again, maybe this is still five or ten years away, or more, but I wonder if that's what the future looks like.
Matt Alder [00:27:44]:
I think that's interesting. It's like, if you have that human touch, that intervention in the process, then maybe it does put the brakes on those kinds of things and really helps with the quality aspect of everything. Jeff, thank you very much for talking to me.
Jeff Pole [00:27:59]:
Thanks, Matt. It's been a pleasure.
Matt Alder [00:28:02]:
My thanks to Jeff. You can follow this podcast on Apple Podcasts, on Spotify or wherever you get your podcasts. You can search all the past episodes at recruitingfuture.com. On that site, you can also subscribe to our weekly newsletter, Recruiting Future Feast, and get the inside track on everything that's coming up on the show. Thanks very much for listening. I'll be back next time, and I hope you'll join me.