
Ep 334: Fair AI

The number of talent acquisition technology solutions offering AI continues to grow exponentially. However, we must look at AI’s role in recruiting through a critical lens to ensure transparency and that these technologies are fair and ethical. If AI is being used to improve the candidate experience and remove bias, how do we make sure it isn’t actually making things worse?

My guest this week is Barb Hyman, CEO of Predictive Hire, an AI-based interviewing solution. Predictive Hire has recently published a paper on Fair AI in Recruitment, intended to provoke debate and help talent acquisition leaders ask the right questions when they are buying a solution. Barb has an HR leadership background, which gives her a unique perspective on the dangers and opportunities of the AI revolution.

In the interview, we discuss:

▪ Fair AI in Recruiting

▪ The importance of vendor self-regulation

▪ Why AI should be unbiased, inclusive, valid and explainable

▪ Building trust with talent acquisition leaders and candidates

▪ When AI isn’t AI

▪ Why a human candidate experience is critical

Download “Fair AI In Recruitment”

Subscribe to this podcast in Apple Podcasts

Transcript:

Matt Alder [00:00:00]:
Support for this podcast comes from Predictive Hire. Predictive Hire is a frontier interview automation solution that solves three pain points: bias, candidate experience and efficiency. Their customers are typically those that receive an enormous number of applications and are dissatisfied with how much collective time is spent on hiring. Unlike other forms of assessment, which can feel confrontational, Predictive Hire’s first interview is built on a text-based conversation, totally familiar because text is central to our everyday lives. Every candidate gets a chance at an interview by answering five relatable questions. Every candidate also receives personalized feedback. AI then reads the candidate’s answers for best fit, translating assessments into personality readings, work-based traits and communication skills. Candidates are scored and ranked in real time, making screening 90% faster. Predictive Hire fits seamlessly into your HR tech stack, and with it you will get off-the-Richter efficiency, reduce bias and humanize the application process. They call it hiring with heart. To find out more, go to predictivehire.com.

Matt Alder [00:01:42]:
Hi everyone, this is Matt Alder. Welcome to episode 334 of the Recruiting Future podcast. The number of talent acquisition technology solutions offering AI continues to grow exponentially. However, we must look at AI’s role in recruiting through a critical lens to ensure transparency and that these technologies are fair and ethical. If AI is being used to improve the candidate experience and remove bias, how do we make sure it isn’t actually making things worse? My guest this week is Barb Hyman, CEO of Predictive Hire, an AI-based interviewing solution. Predictive Hire has recently published a paper on Fair AI in Recruitment, intended to provoke debate and help talent acquisition leaders ask the right questions when they’re buying a solution. Barb has an HR leadership background, which gives her a unique perspective on the dangers and opportunities of the AI revolution.

Matt Alder [00:02:51]:
Hi Barb and welcome to the podcast.

Barb Hyman [00:02:56]:
Hi Matt.

Matt Alder [00:02:56]:
An absolute pleasure to have you on the show. Could you just introduce yourself and tell everyone what you do?

Barb Hyman [00:03:01]:
Sure. I’m Barb Hyman, I’m the CEO of Predictive Hire. I’m doing this conversation from Melbourne, Australia and really delighted to be here.

Matt Alder [00:03:10]:
Fantastic stuff. And tell us a little bit more about Predictive Hire and what it does.

Barb Hyman [00:03:14]:
So at its simplest, it’s chat-enabled blind hiring, which means that if you’re looking to find great talent, and you want to do so in a way which means everyone feels comfortable and you remove all that bias up front, you would use our technology. What it’s really about, though, Matt, is that you’re giving everyone a fair go in terms of a job, and you’re also giving everyone feedback. And I think those two things are quite disruptive to recruitment, and particularly the ability to actually help everyone grow through that experience of the chat.

Matt Alder [00:03:49]:
Fantastic stuff. And unfortunately, you’re right. I mean, they should be things that are in every recruitment process, but they’re not. And to find a way of introducing that through technology is disruptive. So before we sort of dig into the details of the conversation, I just wanted to start by having a bit of an introductory chat about AI. Everyone is allegedly selling AI in their recruiting solutions at the moment, and there’s a huge degree of confusion, misinformation and mistrust in the market about what that actually means. Can you talk us through your definition of artificial intelligence in recruiting? What does it do and how does it work?

Barb Hyman [00:04:36]:
Yeah, no, look, I’d love to. And I think, you know, one thing I really hope people walk away with from listening to this is that not all AI is equal, and not all AI is biased. And hopefully people will take away, you know, what some of the questions are to really discover what they can trust and what they should really question. So for me, and I should say, Matt, I’m not a data scientist nor an engineer. I’m quite an unusual person to sit in this role. My background is HR, and I have to say, when I was head of HR, I don’t think anyone ever approached me with any technology that had AI in it. And the world has certainly changed. At its real simplest, when you think about artificial intelligence, it’s really trying to replicate the human brain, which is probably the greatest algorithm there is, the thing that sits inside our skull. And it’s about trying to find the connections, trying to find the patterns between what you hear or see or read and what you think that means in terms of the decision that you’re about to make. Put simply, when you think about chat, what we’re doing when we’re having a conversation with a candidate is asking a question that you might ask in an interview, a very simple, normal question, like: tell us about yourself, what motivates you? And if you’re a human on the other end, you’re listening very intently to what that person is saying and trying to extrapolate, you know, does that sound like someone who’s going to be a leader? Does that sound like someone who’s got great resilience? And what we’ve built is effectively doing exactly the same thing, but it’s doing it in a way that can obviously analyze many more data points, many more features of what’s being shared. It can do that repetitively. You know, in our case, it’s about 100,000 conversations that we can have in a couple of hours. But most importantly, it’s doing that without any of the biases that we all bring as humans when we’re listening to or watching people respond in an interview. So for me, it’s not a, you know, radical invention that you can’t understand. It really is about data and finding the data patterns between what you’re looking at and what you’re listening to and what defines success in a role.

Matt Alder [00:06:45]:
Just to dig a bit further into bias, we talk a lot on the podcast about human bias, unconscious bias, even conscious bias in the recruitment process. And when it comes to AI, there seem to be two sides to the discussion. There’s a lot about AI helping to mitigate human bias, but there’s also a conversation that AI sort of reinforces bias and actually makes bias worse. Talk us through that. What’s really going on in terms of bias in technology?

Barb Hyman [00:07:19]:
Look, I absolutely agree that there is a real risk of bias being amplified or even introduced when you’re using AI when it comes to hiring and promotion decisions, which is why we’re having this conversation, and we’ll get to talk about how do you, as a decision maker, make sure that you’re asking the right questions so that you’re not selecting technologies that create or enable bias? And there’s a bunch of different ways it can happen, but fundamentally, you know, as you probably know, it’s about the data that you’re using to draw inferences, to assess, to understand someone. And, you know, the case that everyone knows is the one of Amazon, where they scraped, I think it might have been over 10 years, millions of CVs, having removed the names, to try and identify, of all of those who were hired, what’s that profile of success? Because that’s a shortcut for someone like Amazon, who probably gets 50 million applicants a year. They have to use technology. There is no other way to do it. But the challenge with that, as we know, is there’s lots of latent signals about gender, about race, probably even about socioeconomic status in CVs. And so that is the flaw of that way of building a predictive screening tool, when you’re using what we call third-party data. In a way, it’s kind of dirty data. It’s not clean data. Another one to, I guess, signal that is also at risk is when you’re using any kind of social media data. So what we encourage, and what I really plead with HR leaders to do, is to ask the question: where is this data coming from that your machine is analyzing to figure out whether candidates are the right fit? And you have to understand whether that data has inherent bias baked into it. Just a quick example. So, you know, one of the players in this market built a technology which draws on Twitter and what people say on Twitter. Now, the problem with that is obviously the way we tweet is hopefully not the way we have conversations when we’re applying for jobs. But more importantly, Twitter is not representative of the general population. So when you start to rely on technology that has basically been built off Twitter data, you’re risking introducing bias into your recruitment. So if there’s only one area that you dig really deep into, and you’re very, very curious and rigorous about as a decision maker, it’s to understand where the data comes from.

Matt Alder [00:09:45]:
Now, you’ve just published a framework to really help the industry understand this better and achieve more fairness when it comes to using AI in selection. Obviously, I want to dig into that in some detail, because I think it’s excellent and I’d really like to share it with everyone. Just before we do, talk us through the motivation for doing that bit of work and how it came about.

Barb Hyman [00:10:09]:
Yes. So, look, we see ourselves as having a responsibility, not just to our customers, but actually to the whole candidate market, to make the process of applying for a job and getting a job as fair and inclusive as possible. I don’t think there’s any 100% perfection, but certainly, with my background in HR, I had seen many examples of bias, both conscious and unconscious, and that is core to our mission. And when you’ve got a team of data scientists and engineers who are trying to solve a problem, how do you take meaning, how do you take insights from words, and how does that actually translate into any kind of value for the client, my directive to them was: you cannot lose sight of that. This needs to be fair and inclusive. So really, since the beginning, I’ve had very clear conversations with our technical team. And, you know, this is, I think, the benefit of having someone with my background lead a business like this. Because if your only perspective is the technical side and the engineering side, you can sometimes miss what’s really important on the human side, which is to always be ensuring that, no matter how fantastic it is, if it isn’t inclusive, it’s not a product that we’re going to put out in market. And so it’s become an embedded part of our culture that we’re self-assessing. You know, it goes beyond just doing standard bias testing like the four-fifths rule. The research and experiments that we’ve done I think are quite incredible, and I commend the data science team for their ongoing commitment to that. And, you know, there would be no one in our team who would be in our team if they felt that we were taking a commercial path that in any way sacrificed the human path. So it really came out, Matt, from how we work. And the more conversations I have in market, and, to be frank, some of the technologies I see really mature companies adopting, made me realize that we need to put it out there for the whole community. So that’s what we’re doing: we’re making it available to every leader, every TA team. It’s vendor agnostic as a framework, and we really hope that people will start to apply it, and I think also legal teams, to get really close to the technology, because I think there’s some legal risk involved in some of the tools that are out there, with potential risks of bias.

Matt Alder [00:12:32]:
Fantastic stuff. So talk us through the framework, what’s it called and what are the key elements?

Barb Hyman [00:12:37]:
Look, we’ve called it FAIR because we think there is no global standard, obviously, when it comes to what makes a fair technology in this space, and, you know, that’s part of the challenge. So we hope this really starts to set that standard. FAIR stands for Fair AI In Recruitment. And, you know, our science team obviously did a lot of research looking at what exists out there in market and came up with a definition of fair, which is very simply the absence of any prejudice or favoritism towards any particular individual or group with a protected attribute. I don’t think that’s a controversial definition, but we saw that there were two sides to this self-regulation for organizations. One is around the product itself. And there are four dimensions that we think are really important for every CHRO to look at when they’re considering any AI technology, and those are: is it unbiased, is it inclusive, is it valid and is it explainable? And we go into each of those in quite some depth, and I think, you know, there would be a number of technologies that may not meet all of those hurdles that are currently in quite widespread use in the market. So that’s almost table stakes. But I think above and beyond that, it then comes to the organization itself, which in our case is obviously Predictive Hire. And we think there’s an additional element that’s really important for an organization to investigate, and that’s around: can I trust this organization? And trust, for me, you know, it’s such a common ambition in HR and for HR leaders to build trust with your people, trust in your processes, trust in promotion decisions. And for us, trust is about, is there data privacy and security? Can I trust that they will treat the data that’s coming in from candidates and customers like the crown jewels? The second one is team diversity, and that’s pretty simply what it says, which is: do you have diversity, really broadly defined, socioeconomic status, obviously race and gender? And, you know, that’s an important lens through which to look: are we likely to get a one-sided outcome in terms of the product, or something that’s much more representative of the whole community that we serve? And then the last one is around transparency. And I think that’s the element that has the biggest, or really the longest, way to go for players in this space. You know, I think one of the challenges with anything that’s science based, like our product, is it’s very hard to trust something that you can’t see. And, you know, it’s quite unusual for any vendors to be publishing their science, which we chose to do. We really exposed our IP by putting it out there in the scientific world, open for peer review. But we felt that was really important, because I know myself as a buyer, when I was in my former HR role, I would not have made a decision to use technology that had not been exposed for general critique and, you know, which I couldn’t understand myself.

Matt Alder [00:15:40]:
It’s absolutely brilliant that you’ve done this as an organization, and it’s very much needed in the industry. I mean, how do you think something like this is going to build trust with both TA leaders, many of whom are potentially very suspicious about what they’re being sold, and candidates, in terms of the validity of the recruitment processes that they’re going through?

Barb Hyman [00:16:02]:
Well, I think that’s where transparency is what really disrupts lack of trust. You know, any HR leader will tell you that if you make decisions, for instance, to change salaries and how they’re determined, or change benefits in an organization, and you don’t come out and explain why, that’s going to fall pretty flat. And so we have to model, and we really look to the whole industry to model, transparency. So that applies to how much you reveal in market to buyers and decision makers around the science and how that works. And, you know, that includes all the bias testing that everyone in AI needs to be doing. I think there are some really obvious, simple questions for talent leaders to ask, basically almost like a decision tree: either go forward with a second conversation or don’t move forward. You know, basic questions like: show me your technical manual. Show me your bias testing. Do you do bias testing both before deployment and ongoing? Show me who’s in your team. What does your team look like? Are there any data scientists in the team? I think one thing, Matt, is there’s a huge amount of claims and, I think, puffery around tools that claim to have AI. But, you know, as you would know, if there are no data scientists in the team, there’s no AI. They’re basically simple matching tools. And there’s a very big difference between technology that has what ours has, a learning capability, it learns from data like who was hired and who left the business, versus something that’s much more static.

Matt Alder [00:17:41]:
From a candidate’s perspective, there’s a huge amount of misinformation in the media about automation in recruitment, AI in recruitment and candidates being picked by robots. And it’s kind of always been the case ever since ATSs were invented. But the Amazon example that you mentioned earlier has become widespread, even though Amazon never even confirmed that it actually happened. As an industry, what do you think needs to happen to make candidates feel more comfortable that this technology is actually there to remove bias and increase fairness and efficiency in recruiting, which is for the good of everyone?

Barb Hyman [00:18:21]:
Yeah, yeah. Look, I think the first thing is the actual experience itself for the candidate needs to feel human. And I think that’s partly the problem with a lot of the media stories that are put out there. You know, I wouldn’t want to have a conversation with a robot. I don’t particularly want to play a game to get a job. And I know I don’t want to talk to a video, because that’s a bit scary. But I chat every day with my friends and family and my kids, and it’s a very normal, relatable, human thing that we do. So, you know, for us, that was pretty critical from the outset: the experience needed to feel like something that anyone can do and feel comfortable with and relate to. So I think that not all automation is the same as far as how human it can be. And, you know, the other part is, a lot of people are attracted to HR because they’re people people; they want to connect with and care for those that are in the organization and the candidates. But the reality is, it’s impossible to do that at scale and to do that consistently well. And if you look to the right technology, it can give humans a human experience that, to be honest, is better than what humans are able to do. You know, for instance, the ability to give every candidate coaching and feedback that helps them learn and builds their confidence. No recruiter can do that, whether it’s for 10 candidates or, you know, 100,000 candidates. So I think, you know, for us, again, we think it’s for TA leaders to be curious about this space and go: there are certain things that are non-negotiable for us. It needs to feel human, it needs to feel trusted by candidates, and then to compare and contrast the different technologies that are out there. And I think there’s quite a big range. But I absolutely agree that the human side, and the fact that it’s an experience, is critical to the decision. And I also think, you know, with us, to be honest, what we see is that about every two minutes around the world someone has a conversation with our chatbot, Phai, and we see their feedback, and mostly they know it’s AI. Because when they get their feedback within, you know, 10 or 15 minutes, and it understands them perfectly, and they feel wonderful and uplifted and more self-aware, they’re never going to get that from a human interview, and people are grateful for that. So I think, again, that’s another assumption that’s being made, that candidates don’t want that and candidates don’t trust it. We’ve seen the complete opposite with our data.

Matt Alder [00:20:49]:
So it’s been a tough few months for talent acquisition. What do you hope is going to happen in recruiting this year and beyond? What do you think the short- to medium-term future looks like?

Barb Hyman [00:21:03]:
Look, that’s a great question. I think that the move to automation will increase, because unfortunately unemployment means volumes are going up. And so what we’re seeing is there are more and more organizations who are struggling with how do I get through that? You know, one of our clients receives about 50,000 applications a week. It’s impossible to screen and interview all of those properly and get back to people. So I think more companies will be curious and be looking into automation. But I hope that what they’ll also be doing is thinking about it from a business perspective. You know, what is really important here to solve for, not just recruitment process efficiency, but actually business value. You know, if you think about a lot of organizations that are consumer brands and what they’re spending in marketing to get someone in the door of that retail store, probably millions in customer acquisition cost. And every week, every month, you’re getting thousands of people who are probably also shopping in your stores applying for jobs. You know, that’s gold: the opportunity to use technology that’s going to make them love your brand even more, because you’ve given them such a great experience. So we’re seeing a much greater connection between the business and recruitment, and I hope that continues to grow amongst the TA community.

Matt Alder [00:22:24]:
So final question. Where can people find the Fair Framework?

Barb Hyman [00:22:27]:
On our website, predictivehire.com. It’s available there. We also have other books that we’ve created to help people think about how they manage for inclusion and diversity, which obviously ties very much into understanding how you make good decisions around AI tools. So come to our website and try the test. Take the experience that candidates do every day. I think you’ll be amazed at how human and engaging it is.

Matt Alder [00:22:54]:
Barb, thank you very much for joining me.

Barb Hyman [00:22:56]:
Thanks, Matt.

Matt Alder [00:22:57]:
My thanks to Barb. You can subscribe to this podcast in Apple Podcasts, on Spotify or via your podcasting app of choice. Please also follow the show on Instagram. You can find us by searching for Recruiting Future. You can search all the past episodes at recruitingfuture.com. On that site, you can also subscribe to the mailing list to get the inside track about everything that’s coming up on the show. Thanks very much for listening. I’ll be back next time and I hope you’ll join me.
