For the first show of 2019, I want to address a topic that is critically important for everyone in talent acquisition. There has been some talk about the potential for the algorithms that power recruiting technology to introduce bias into the hiring process but not much analysis on what is actually happening.
To take a deeper dive into the subject, I’m delighted that my guest this week is Miranda Bogen from Upturn, a think tank non-profit with a mission to promote equity and justice in the design and use of digital technology. Upturn have recently published a report that explores algorithmic bias in hiring.
In the interview we discuss:
• Mapping out the predictive algorithms used at the various stages of the hiring process
• Are these recruiting technology tools a more significant risk for causing discrimination than they are a solution to it?
• Are algorithms actually making hiring decisions?
• “Effective deselection” in recruitment marketing and automated sourcing
• The problem with pattern matching
• Machine learning and employer data
• The legal implications of algorithmic recruiting
Miranda also shares her advice to both vendors and employers on what they can do to avoid automated bias.
Subscribe to this podcast in Apple Podcasts
Transcript:
Matt Alder [00:00:00]:
Support for this podcast is provided by Jobiak, the industry’s first recruitment marketing platform designed exclusively for Google for Jobs. For the first time, in-house recruiters can take advantage of the immense power of Google by posting jobs directly to Google for Jobs without the need for job board middlemen. Jobiak’s platform encodes job posts to be read by Google and automatically posts them in just three quick steps. Visit www.jobiak.ai to try it for free today. Just enter the URL of your job post and Jobiak will take care of the rest. For a limited time, Recruiting Future podcast listeners can receive 10% off the monthly price when they sign up. Just use the code rfpodcast to claim your discount. The website again is www.jobiak.ai, and Jobiak is spelled J O B I A K.
Matt Alder [00:01:20]:
Hi everyone, this is Matt Alder. Welcome to episode 164 of the Recruiting Future podcast. So I ended up taking a slightly longer podcasting break over Christmas and the New Year than I originally intended, but I’ve been busy recording some interviews that I’m very excited to share with you over the coming weeks. First up for 2019 is a topic that’s critically important for everyone in talent acquisition to get their heads around. There’s been some talk about the potential for the algorithms that power recruiting technology to introduce bias into the hiring process, but not much analysis on what’s actually happening. To dig into the topic more deeply, I’m delighted that my guest this week is Miranda Bogen from Upturn, a think tank nonprofit with a mission to promote equity and justice in the design and use of digital technology. Upturn have recently produced a report that looks at algorithmic bias in the hiring process. Reading the report and talking to Miranda has really made me look at this issue very differently. Enjoy the interview. Hi Miranda and welcome to the podcast.
Miranda Bogen [00:02:35]:
Thanks so much for having me.
Matt Alder [00:02:36]:
An absolute pleasure to have you on the show. Could you just introduce yourself and tell everyone what you do?
Miranda Bogen [00:02:42]:
Sure. So my name is Miranda Bogen. I’m a senior policy analyst at Upturn, which is a think tank nonprofit based in Washington, D.C., and our mission is to promote equity and justice in the design and use of digital technology. So we’re looking at everything from police technology to online platforms to algorithms that are being used in high-stakes situations like hiring.
Matt Alder [00:03:05]:
Fantastic. And I know that you’ve recently published a report looking at algorithms in hiring. Could you give us a bit of background to that and, you know, how that report was put together and what it covers?
Miranda Bogen [00:03:18]:
Absolutely. We had heard a lot from fellow advocates about concern about the use of data in important areas, areas covered by non-discrimination law. And people will often throw out hiring as an example of a circumstance where algorithms are being used and where there’s a risk for bias. But there wasn’t a ton of information going much deeper than that to really understand what sort of digital tools are used in the hiring process, what sort of data they’re using, specifically what they’re predicting, and how that fits with hiring decisions. And so we wanted to go deeper and really lay out the life cycle of the hiring process and dig into some of the technologies that are used at all these different stages, both to help other advocates understand what this process looks like and to better inform their conversations with employers and vendors, but also to help vendors and employers understand some of the concerns about bias and why there’s a sense that these tools may be a bigger risk for discrimination than they are a solution for it.
Matt Alder [00:04:30]:
Fantastic. And I think there’s some very, very interesting stuff on bias that I want to talk about just in a minute. But before we do, could you give us an overview of the landscape as you see it? Because I know you’ve looked at where hiring algorithms are being used, where in the recruitment process they’re being used, what kind of decisions are they being used to make?
Miranda Bogen [00:04:52]:
So some of the conceptions outside of the field of hiring and talent acquisition are that, you know, hiring is a single decision, that there’s some kind of robot making a yes/no decision, and that’s a very scary thing. But really, if you break down the hiring process into its different steps, we get to see how technology is being used at each place. So we broke it down into four main stages of hiring. You have sourcing: before any applicants have filled out job applications and expressed interest in applying, how are companies and employers reaching out to potential candidates and letting people know there are jobs? Then you have screening, where you’re testing for qualifications or doing some kind of assessment. There are some tools at the interviewing stage that are helping employers speed up this more resource- and time-intensive process. And then you have the selection and offer stage where, once an employer has decided who to hire, there are tools that are popping up trying to make predictions. So what we saw at the sourcing stage was that advertising is a huge thing here. Where employers advertise their jobs really determines who even knows that there’s a job opening, and many advertising tools online today are using artificial intelligence and machine learning to make more precise predictions about who is likely to be interested in a given job. So that’s one area we looked at. There are also matching platforms that are trying to look at candidates’ past experiences and their skills and match them with jobs based on the job descriptions or other data that employers have provided to these platforms. And this is also the case in sort of headhunting tools, helping recruiters sort through potential candidates on LinkedIn or other platforms. And then at the screening stage, we saw a number of tools that are used to parse through resumes, which are kind of updates to the standard keyword searches, as well as more advanced tools that are different sorts of tests: personality tests, gameplay, video tools to help recruiters, again, sort through people who are applying for the job. You know, usually there are hundreds or even thousands of applicants to any given job, if not more, and a lot of the vendors in this space are offering their tools as ways to make this process more efficient, to help inform recruiters as they move forward in the hiring process. At the interviewing stage, what we’re mostly seeing are video interviews, and video interviews specifically that are augmented by some kind of AI, usually facial analysis, that tries to parse through these videos and capture some of the dialogue that happens, to try and match up those characteristics with what a successful employee might look like. And then finally, in the selection stage, we see some attempts to augment background checks and also some attempts to make forecasts about the salary and benefits packages that candidates would accept. Given that the employment landscape today is very competitive, employers are trying to really get their offers right so they don’t lose out on candidates to competitors or other industries. So it really runs the gamut, everything from advertising, which you might not think about as an AI hiring tool, but it really is, all the way through to the end of the hiring process.
Matt Alder [00:08:35]:
So you kind of mentioned that these tools are informing and helping and making the process more efficient. Did you find that they were actually making autonomous decisions in the hiring process, or is it something different to that?
Miranda Bogen [00:08:52]:
This is something that also changes throughout the hiring process. I think our overall conclusion is that while automated decisions and AI are not often making affirmative hiring decisions (they’re almost never deciding who to offer a job to), they’re frequently deciding who to reject from jobs. And this happens quite a bit early on in the hiring process, and especially before candidates apply to jobs. Advertising tools are deciding who to show jobs to, and if they don’t show you a job, that effectively prevents a candidate from knowing they can apply to that job. You know, they might be able to search for jobs on career portals, but it just raises the barrier, and that effect can be spread disproportionately across demographic groups. Also, in any tool that’s sorting candidates, if you’re sorted so low on the list that you’re not in the first few pages, that’s effectively rejecting you from a job. And there are also tools that have hard cutoffs in these sorting systems, so if you’re below a certain threshold, you’re automatically rejected. So I think that’s where we see a lot of concern, because a refrain we heard often in marketing materials from vendors is that we’re just the decision aid, we’re helping you make your hiring process more efficient, and humans still retain the final decision-making power. And that’s true when offering the job, but it’s often not true for people who find themselves lower on these lists or have been judged prematurely not to be a good fit for a job. And this is the place where we see potential error and potential bias creeping in, in a way that wouldn’t necessarily be noticeable, or as noticeable, to an employer who isn’t intending to discriminate. But the tools they end up using might have that kind of effect.
Matt Alder [00:10:47]:
Can we dig into that a little deeper? So how is bias being perpetuated here, and what kind of bias have you seen evidence of?
Miranda Bogen [00:10:57]:
So any AI, any predictive analytics tool, all it’s doing is looking for patterns. And the problem is that, certainly in the United States and also elsewhere around the world, the employment field, who gets jobs and what jobs people get, is deeply shaped by legacies of discrimination in society. So any patterns that these tools pick up on are going to reflect that, because we haven’t solved that problem. So if we’re talking about an online job board that’s personalized, for instance, what that tool is doing is trying to show jobs to people who are most likely to click on those jobs, maybe most likely to apply to those jobs. But historically, the people who are most likely to apply to those jobs might fit a certain demographic, and making that judgment about who is not interested in a job might be prematurely precluding people who would be qualified for a job from having equal access, from taking a fair shot at applying to that job. So that’s one way we see biases perpetuating. Another is that tools attempting to assist in selection tend to use an employer’s internal data about who’s a successful employee. There can be two issues there. One is that if the employer has had a relatively homogeneous workforce, then the model that it has for success is going to look a certain way. They haven’t yet been exposed to other characteristics that would predict success, because they just haven’t had employees who bring that different background, different experience, or different skills that could lead to really great employees. And so any model that they build from the data that they have will reflect the picture of success that that company currently has. Even if the company does have a relatively diverse workforce, we still know that internal evaluations can be swayed by unconscious bias, just as hiring decisions can. So who is seen as a successful employee, who’s getting great reviews internally, can also reflect these intrinsic biases that we have in a way that maybe isn’t immediately noticeable, but once you feed it into a predictive system, it could end up detecting that the company tends to prefer and reward people from a certain background and tends to view less favorably candidates who bring a different work mode or skill set. And so any internal data could bring those effects with it. And employers who use this data blindly, and don’t really reflect on what measure of success they’re providing to a vendor or using as the basis of their predictive tool, could end up falling into a trap that leads to outcomes they might be actively trying to prevent, especially as companies prioritize diversity and inclusion.
Matt Alder [00:14:19]:
And what implications are there from a legal perspective, in terms of laws and regulations relating to discrimination in hiring? I know that you’ve got some material in your report on this from a US perspective. So what are the implications?
Miranda Bogen [00:14:36]:
Well, in the United States, not only are employers prohibited from purposely discriminating on the basis of a protected category, they’re also barred from engaging in recruitment activities that have a discriminatory outcome, no matter what that process is and no matter whether it was intentional. So if a predictive tool ends up having a disparate outcome that harms a particular racial or ethnic group or a gender group, they’re going to be responsible for that. Now, some of the tools that are out there in the market do testing to try and detect whether their predictive models have this type of effect. But it can be really hard to test for all of the protected categories, all of the effects here, because you might not have data on which candidates come from a particular religious background or have a disability, even if you have potentially their gender or race. So an employer might not be aware that a tool they’re using has an outcome that might be harming certain groups, if they’re not proactively thinking hard about how candidates are interacting with these tools and what presumptions are going into the building of these tools.
Matt Alder [00:16:03]:
So in the last couple of years we’ve seen an explosion of these types of tools, as you’ve identified, in lots of different parts of the hiring process, and all the predictions for the next couple of years are that we’re going to see even more automation and movement in this direction. What would your advice be to both vendors and to employers? How should people be thinking about developing these tools, and from an employer perspective, what should they be thinking about when they’re purchasing this type of technology or engaging with suppliers?
Miranda Bogen [00:16:42]:
So the first thing I would say is any claim that a predictive tool is bias-free should be met with an abundance of skepticism. Certainly, if a predictive tool isn’t actively considering a protected group, that’s a good step, but that by no means indicates that the tool won’t have an outcome that would reflect discrimination. As tools have access to more and more data, there’s more and more risk that the data they’re looking at is closely associated with, or even a proxy for, these protected categories, protected variables and demographics. So I would recommend that vendors stop making these broad claims that their tool will enable bias-free hiring and rather grapple with the fact that they might be taking a step in the right direction, but there’s only so much they can do. Instead, I think vendors in particular should be really transparent about what steps they’re taking to test their models for bias and discrimination, and what they do to remediate any effect that they find. There are a few vendors out there in the market that take this seriously and are really public about the methods they use to test their models, even going so far as publishing some of the internal code that they use to test for disparate outcomes. And so I think all vendors should be following the lead of these companies in being more transparent about these steps, so that employers can be more confident when they’re trying to acquire new technology. For employers, I would say the very first thing is to look critically at internal data that you might be thinking about using to try and build models for what a successful employee looks like. Before handing over that data to a vendor or even developing an internal model, examine that data to see if there’s any residual impact of implicit bias in employer reviews, or in who gets promotions or who has been terminated, because those same patterns are going to find their way into any model that’s built, and you’re going to have to adjust them to make sure that the tool isn’t leading to violations of the law. But also, especially as companies are trying to promote diversity and inclusion, they should be thinking about tools that are proactively working towards that goal. It’s not enough to just say this tool is bias-free; how can we make sure predictive recruitment tools, predictive hiring tools, are working in the service of diversity and inclusion goals? That doesn’t have to be a separate line of recruitment; it can be factored into a company’s mainstream hiring practices, and particularly into these tools. So I think for both employers and vendors, being cognizant of the type of systemic and institutional biases that are really latent in any data they’re going to be using here is critical to making sure that the deployment of those tools doesn’t end up perpetuating existing discrimination, undermining diversity and inclusion goals, or worsening the bias and discrimination that we still see, especially in the US.
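To make the “testing for disparate outcomes” point above a little more concrete, here is a minimal sketch of the kind of check often used as a first screen in the US: the four-fifths (adverse impact ratio) rule on selection rates. The data, function name, and threshold handling are illustrative assumptions for this sketch, not code from any vendor mentioned in this episode.

```python
# Minimal sketch of a four-fifths rule (adverse impact ratio) check.
# Assumption: candidate outcomes are available as a list of dicts,
# e.g. {"group": "A", "selected": True}. Not taken from any vendor's code.

from collections import Counter

def adverse_impact_ratios(candidates, group_key="group", selected_key="selected"):
    """Return each group's selection rate divided by the highest group's rate.

    A ratio below 0.8 for any group is a common red flag for disparate impact.
    """
    totals = Counter(c[group_key] for c in candidates)
    hires = Counter(c[group_key] for c in candidates if c[selected_key])
    rates = {g: hires[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: 50 candidates per group,
    # 30 of group A selected versus 18 of group B.
    data = [{"group": "A", "selected": i < 30} for i in range(50)]
    data += [{"group": "B", "selected": i < 18} for i in range(50)]
    for group, ratio in adverse_impact_ratios(data).items():
        flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A real vendor or employer audit would also need to account for sample sizes, statistical significance, and the protected categories for which no data exists, as Miranda notes, but a rate comparison like this is the standard starting point.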
Matt Alder [00:20:20]:
So, final question. I know that there’s a lot more information and detail in the report than we could probably get into in a 20-minute conversation. Where can people find a copy of the report if they want to dive into this topic a bit deeper?
Miranda Bogen [00:20:34]:
So this report and all of our work is on our website at upturn.org, under the list of our work. The report is called Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. We really go into detail about all of the stages of the hiring process. We look at some of the tools on the market, not to critique those vendors or tools themselves, but to look at sort of archetypes of different technologies on the market. That way, any employers or vendors who are thinking about using similar tools, or thinking about building similar tools, can get a sense of what the critiques are from the civil rights community, from the perspective of trying to promote equity in technology, and make these adjustments and think critically about their own process. We also really love to engage with employers and companies to give feedback and advice on tools. We’re realistic; we don’t think these tools are going to go away. The incentives for employers to adopt technology that makes their workflow more efficient are just too compelling. When there are so many applicants applying through so many streams online, and there’s such a high volume, these tools are going to continue popping up and become more popular. So we’re invested in making sure that these tools are built and constructed with policies that promote equity. This is really our mission and something that we want to make sure everyone from the public sector to the private sector is thinking about when using data and when trying to make predictions and decisions in such a high-stakes area as hiring.
Matt Alder:
Miranda, thank you very much for talking to me.
Miranda Bogen [00:22:35]:
Thanks so much.
Matt Alder [00:22:37]:
My thanks to Miranda Bogen. You can subscribe to this podcast in Apple Podcasts or via your podcasting app of choice. The show also has its own dedicated app, which you can find by searching for Recruiting Future in your app store. If you’re a Spotify user, you can also find the show there. You can find all the past episodes at www.rfpodcast.com. On that site, you can subscribe to the mailing list and find out more about working with me. Thanks very much for listening. I’ll be back next week and I hope you’ll join me.









 
      