Talent acquisition stands on the edge of a revolution, with AI tools promising to make recruiting faster and more effective. But will they make hiring fairer? Recruiting and HR are already a key focus for governments as they develop legislation for AI, and employers are already at risk of breaking existing laws if they use AI tools that discriminate against protected groups of people.
My guest this week is Commissioner Keith Sonderling of the EEOC. In our conversation, we talk about the benefits and risks of using AI in hiring, what employers need to know to ensure compliance with existing laws, and the new regulations many countries will implement shortly.
In the interview, we discuss:
• Background context of The Equal Employment Opportunity Commission (EEOC)
• Bridging the gap between policy-making and HR practice
• What advantages does AI bring to talent acquisition?
• The potential for technology to make clearer, more transparent hiring decisions than humans
• The dangers AI poses if incorrectly implemented
• How AI used in hiring already falls under existing equal employment legislation
• Who is liable, the employer or the AI vendor?
• The EU AI Act and New York City Law 144
• Are there common themes in new AI legislation being developed around the world?
• Reskilling, upskilling and changing dynamics in the workforce
• What does the future look like?
Follow this podcast on Apple Podcasts.
Transcript:
Matt Alder [00:00:00]:
Hi, this is Matt. Just before we start the show, I want to tell you about a free white paper that I’ve just published on AI and talent acquisition. We all know that AI is going to dramatically change recruiting, but what will that really look like? For example, imagine a future where AI can predict your company’s future talent needs, build dynamic external and internal talent pools, craft personalized candidate experiences, and intelligently automate recruitment marketing. The new white paper, 10 Ways AI Will Transform Talent Acquisition, doesn’t claim to have all the answers, but it does explore the most likely scenarios on how AI will impact recruiting. So get a head start on planning and influencing the future of your talent acquisition strategy. You can download your copy of the white paper at matalder.me/transform. That’s matalder.me/transform.
Matt Alder [00:01:10]:
Talent acquisition stands on the edge of a revolution, with AI tools promising to make recruiting faster and more effective. But will they make hiring fairer? Recruiting and HR are already a key focus for governments as they develop legislation for AI. And employers are already at risk of breaking existing laws if they use AI tools that discriminate against protected groups of people. I’m Matt Alder, and my guest in this episode of Recruiting Future is Commissioner Keith Sonderling of the EEOC. In our conversation, we discuss the benefits and risks of using AI in hiring, and what employers need to know to ensure compliance both with existing laws and the new regulations that many countries are shortly going to be implementing. Commissioner Sonderling, welcome to the podcast.
Commissioner Keith Sonderling [00:02:10]:
Thanks for having me, Matt. You know, we met a few years ago and I’ve subscribed to your podcast since, and it’s about time I finally have a chance to get on. So I’m really excited to talk to you today.
Matt Alder [00:02:22]:
Absolutely. And it is an absolute pleasure to have you on the show. Could you, mainly for the benefit of people outside of the U.S., introduce yourself and explain what you do?
Commissioner Keith Sonderling [00:02:33]:
Yeah. So I’m a Commissioner at the United States Equal Employment Opportunity Commission, the EEOC. So in the United States, there is an independent agency that deals with all workplace discrimination issues, all issues related to equal employment opportunities. We were actually created out of the civil rights movement. In the 1960s, Martin Luther King marching in Washington, D.C. led to the creation of the Civil Rights Act. The Civil Rights Act then created some of the strongest protections worldwide when it comes to entering the workforce and being an employee, providing for yourself, providing for your family, being in the workplace. We have civil rights laws that protect that here in the United States, and it’s my agency that administers and enforces them. So we have a bunch of roles we play. First and foremost, we are a civil law enforcement agency, so we’re responsible for actually having federal investigators go to workplaces and enforce these laws. We also do a lot of compliance. You know, part of our mission is not only to enforce and remedy employment discrimination; we also do a lot of compliance assistance to prevent it from ever occurring in the first place. Another one of our goals is promoting equal employment opportunity in the workplace. So I like to say our mission is very much what HR’s mission is. You look at HR professionals, our identities are fairly similar in that regard. So when you think about the big-ticket items coming from the United States, or even globally in HR: the MeToo movement, pay equity, everything related to Covid and accommodations, disability, religion, race, national origin, age. So that’s my agency here in the United States. And because we were really one of the first globally, a lot of countries really look to our agency, as we’ll talk about, to model their employment laws. And a lot of multinational corporations really use our agency’s HR standards as the standards globally.
Matt Alder [00:04:40]:
Now, you’re very much out and about on the conference circuit at the moment. I saw you at Transform last week. As you say, we met at Unleash. I know you’re going to HR Tech Europe in May. What’s your mission? What’s your objective with getting out there into the community via these conferences?
Commissioner Keith Sonderling [00:05:00]:
Yeah, well, you know, that’s something I really made my personal mission when I got confirmed to this job in 2020 to be a commissioner. And, you know, I’m a labor and employment lawyer by training, so I used to defend HR departments in lawsuits. But what I really found was there was a significant disconnect between policymaking in Washington, D.C. and those HR professionals around the world who are actually then stuck with figuring out the complexities of dealing with labor and employment law, dealing with HR law. So when I first got to the EEOC, I wanted to be more innovative and talk to HR professionals across the board, from CHROs to TA heads, saying, you know, what are the big issues that you’re facing in your practice, what are your fears, and what would you like to implement moving forward? And that’s when I was surprised to learn that by far and away, the number one issue was workplace technologies and artificial intelligence and machine learning. And like many on the outside, I had to learn what that actually meant, because for me, I thought, well, that is around workforce displacement, having robots replace actual workers. Because you remember, that was all the rage for a long time. And then I realized that really only related to certain industries, you know, retail, manufacturing, logistics. And so obviously it was more than that. And that’s when I dove into HR technology and figured out that there’s a lot of vendors, a lot of programs, a lot of companies who are actually using machine learning, and this was way before generative AI, to actually make employment decisions, to actually help HR departments find the right candidates, to advertise in the right places, and even to select the candidates. And then, once they became employees of the company, algorithms were playing a significant role in their promotion, in their salary and in their tenure at the company.
So that’s how I stumbled into this world. And I was one of the first regulators to really start talking about it, because I do believe this is the future. I do believe in the technology. But there have to be significant, quote, unquote, guardrails around it for it to be properly used. So that’s why I’m out there all the time. I found out that you really have to be in front of people, and you have to say, here is exactly what your fears are from Washington, D.C., and here is actually how you can live within the structure and framework and use these programs properly, lawfully and ethically.
Matt Alder [00:07:45]:
Absolutely. We’ll talk more about that in a second, and about the guardrails and the various bits of additional legislation that are popping up around the world. Before we do, though, let’s just focus on the advantages. You said that AI is very much the future. What advantages do you think it brings to talent acquisition?
Commissioner Keith Sonderling [00:08:08]:
Significant, potentially unlimited advantages, and not only in the speed and efficiency. Right. For a lot of large companies, with the ease of applying on some of these websites now, you can get millions of applications, and there’s just not enough time anymore for talent acquisition or hiring managers to be able to review those at scale. So just from obviously simplistic efficiency, there’s significant benefits. But I look at it in a much more complex way than that. I look at it and say, well, how has employment decision-making occurred for decades? Generally just through the human brain. And with that, there’s a lot of bias that has been baked into employment decision-making. That’s why my agency exists. That’s why, in the United States, we get probably close to 100,000 charges of discrimination every single year, and why we have litigation. So there’s a problem to begin with that companies haven’t figured out and the EEOC hasn’t been able to stop, despite all of our law enforcement and the guidance that we give out. So what if technology can help us make more transparent, clear, fair employment decisions that are actually auditable and traceable, based upon metrics, not just upon an individual’s bias? And that bias may not necessarily be something that’s unlawful, right? You know, the unlawful categories such as race, sex, ethnicity, religion, et cetera, but other biases that have no bearing on the individual’s ability to perform the job but have really historically prevented a lot of people from entering the workforce.
So I say that if the AI tools are properly designed and carefully used, and those are two separate and distinct concepts which we can talk about, they can actually help TA get to that skills-based approach, actually have hires that are based solely on merit and no other characteristics, actually allow candidates who were never potentially given a shot to get through the door to get through their first screening of a resume. And you know, all the historical, longstanding bias about men’s and women’s names, those very basic examples. Well, this is the ability now, if it’s designed properly and carefully used, to actually have employers take a skills-based approach to hiring, which is where everyone wants to go, and nothing else. But, you know, I always talk about the benefits, and in my job I also have to talk about the risks too. Of course, if it’s not carefully designed or it’s improperly used, you just flip those, then it could potentially scale and cause more discrimination than any one individual can do. So much of it is, you know, we can’t just talk about the benefits, we can’t just talk about the potential liability. We have to talk about, well, how do we do this right, to actually have TA go into a much more technology-based industry with all these very tools that could actually have significant benefits all around.
Matt Alder [00:11:34]:
And to pick up on something that you said there, which is something you talk about a lot: there’s almost this kind of obsession with new legislation, with what governments are going to do to control AI. And the point that I’ve heard you make a number of times is, well, actually, there is already legislation that AI falls under. Tell us a little bit about that.
Commissioner Keith Sonderling [00:11:57]:
And that’s the biggest misconception out there by far: that there’s no laws, there’s no guardrails, there’s no existing framework related to using AI in the workplace. And look, it’s not everyone’s fault that they believe that. Because, you know, after generative AI and ChatGPT, and moving forward with the EU AI Act in the EU, and all these governments now having summits, and here in Washington, senators bringing in very famous tech CEOs, that’s causing more chaos of saying, well, look at the uncertainty related to using AI. Look at the lack of actual laws here. You know, either we’re not going to use it because of that uncertainty, which is not good, or we’re going to use it and not build those governance structures around it and let it go, and hopefully it makes a great employment decision, and hopefully it removes bias, and we’ll buy these AI tools like we buy other software and eliminate our TA departments and just have computers do it, because there’s no laws yet. Both sides of those are wrong, and I’m going to simplify it. And this is really, for those of you in TA, how you need to look at this, right? There’s only a finite number of employment decisions that an employer can make. We all know the basics: hiring, firing, promotion, wages, training, benefits. And at the end of the day, AI hasn’t created a new sort of employment decision yet. So what it’s doing is actually assisting you with, or making, that employment decision. But you as employers, you as hiring managers, you in TA, are the only ones who can actually make that hiring decision. And that decision, that employment action, is what’s regulated and what we look at. That’s what HR professionals have had training on. That’s what you have policies, procedures and handbooks on, you know, since you began your career. So in a sense, we just need to simplify it and say, what are we asking the AI tool to do?
Are we asking it to find the best candidates? Well, what guardrails do we have in place, what policies and procedures do we have in place, when we ask a human to go find candidates and sort through candidates? So there are laws that apply, and it’s the same laws that you’ve been subject to your entire career in HR, no matter where you are in the world. And that’s what I think we’re losing sight of, with such an interest now in AI globally, from state capitals around the world: there are existing laws, and you can use this properly and be in compliance with the law, or not be in compliance with the law if you’re just going to, you know, ignore it. Just like every other framework you have within your organization, whether it’s making a hiring decision or, you know, actually for your core business, there’s regulations around that, and we can’t lose sight of that. And a lot of people are, because of the interest in specific AI legislation, which will play a role in how you operate, especially if you’re in Europe or if you’re in New York City, which we’ll talk about. But just don’t lose sight of the fact that at the end of the day, the EEOC is going to look at the employment decision, and we’re going to look to see if that employment decision had bias, whether it was made by a human or whether it was made by a robot. We’re going to analyze it the same way, and liability is going to be the same way.
Matt Alder [00:15:38]:
And again, I think there’s an interesting point there about who has that liability, because it’s the employer, isn’t it? It’s not the vendor who’s designed the system, it’s the employer who’s made the employment decision.
Commissioner Keith Sonderling [00:15:49]:
And that’s a really key concept. And a lot of people don’t understand where that’s coming from. So let me take a step back.
Matt Alder [00:15:56]:
Sure.
Commissioner Keith Sonderling [00:15:58]:
When our agency was created, when Title VII of the Civil Rights Act here in the United States was enacted in the 1960s, we were given jurisdiction over three parties. And the law says only those three can make an employment decision. No one else can make an employment decision. Nobody else can actually say, you’re hired, you’re fired, you’re terminated. Right. So that’s, you know, employers, companies. Right. Easy one. Unions. And I know the union issue gets a little complex when you get into Europe. Just, you know, American unions. Right.
Matt Alder [00:16:32]:
Yeah.
Commissioner Keith Sonderling [00:16:33]:
And then staffing agencies. So the big staffing agencies, which are all global. Right. Which are, you know, on every continent at this point. Right. So those are within our jurisdiction. And U.S. law says those are the only three parties that can ever make an employment decision. Right. Within there, you don’t see an AI vendor. Right?
Matt Alder [00:16:52]:
Yeah.
Commissioner Keith Sonderling [00:16:52]:
Or a software tool or an algorithm. Right. So that’s where it comes from. And it’s not just picking on the employer, saying we’re going to go after the employer, you’re going to be the one who’s responsible, even though you didn’t design the tool, even though you don’t necessarily know how to use the tool and the vendor just helps you use it. It’s rooted in the law. Now, we’re seeing potential changes in that, and that’s where a lot of this is going. In the EU AI Act, they’re specifically saying that vendors who build the systems are going to have liability. There’s proposals in California that are also going to give potential liability to the vendors, saying that in California, if these bills pass, the vendor, if they’re involved in making an employment decision, is essentially going to be an employer too. But for the meantime, you cannot rely on thinking that if the tool does something that you may not have access to, that you may not understand, you will be absolved of liability because it’s through an AI and it’s through a vendor. And that’s a critical starting point, and why buying HR technology is so much different than buying other kinds of technology. And for you as TA professionals, for you in HR, who are working with the vendors, who are deciding which products to use and then have to go sell it internally, this is where the conversation needs to be, because that liability rests with your use of it and not the vendor’s. You have to then build programs and structures around it within TA, within HR, to ensure that your own company’s use of it is compliant with the law.
Matt Alder [00:18:41]:
To focus a bit on the new regulations and legislation being talked about around AI: it’s interesting that we’re talking about AI in general here, but it always seems to be recruiting and TA that there’s a big focus on in these kinds of discussions. From the conversations that you’re having, what are you seeing happening in this area? Obviously, every country is very different. Are there common themes? What’s the sort of state of play?
Commissioner Keith Sonderling [00:19:16]:
And this is a really great question. It also gets back to the point I was just making about the additional things that HR professionals need to do to properly implement these AI HR tools. Because, you know, you’re at all the conferences, Matt. You see that it’s no longer a question of, are you going to use AI in HR? We’re way past that. The struggle now is, which one are we going to use, how are we going to use it, and how are we going to comply with the law, in our own country and in other countries? Because, you know, a lot of these systems are designed that way, and a lot of talent acquisition heads at larger organizations aren’t just dealing with one state, one city, they’re dealing with the world. And how are we going to build this framework within the confines of existing law, but also of potential future laws that are being passed or being proposed? And I’ve argued, if you look across the board, starting with the EU AI Act, which, as a lot of your listeners know, places employment decisions in the high-risk category, and New York City’s Local Law 144, which was the first law related to actually using AI for hiring and promotions, of course limited to New York City, it’s important to stay abreast of all these changes and not fear that there’s going to be a new law that comes out that’s going to essentially, in a way, diminish all the work you’ve been doing in trying to deal with the vendors, vet these systems, and implement these systems to take a more technology-based approach. Unfortunately, I think that’s how a lot of people are looking at it, and I want to dispel that right now. Because if you look across the board, let’s start: the first AI HR law was actually in 2020, in the state of Illinois. And that was related to video interviewing, if you remember.
Matt Alder [00:21:15]:
I do remember, yeah.
Commissioner Keith Sonderling [00:21:16]:
Yeah. So, you know, facial recognition and video interviewing, because there was some software out there that, during the interview process on Zoom, would analyze an employee’s face. And I don’t need to get into the legal issues right now about how that may impact certain groups over others, or how that may impact disabled workers or religious workers; I can go off on that. But look, Illinois basically said if you want to use facial recognition in a video interview, you’re going to have to disclose it, you’re going to have to have consent, you’re going to have to have an opt-out. It basically made it so difficult to use in Illinois that, in essence, it banned facial recognition interviews in Illinois. And then New York comes along and says, for New York City employers, you have to do pre-deployment audit testing, and you have to disclose those audits. But then they said that’s only for hiring and promotions, only on race, sex and ethnicity, not, you know, terminations, not anything related to compensation or total rewards, and not the other categories like age and disability. Right? So you’re seeing, you know, little differences between all of those states. And I think that’s just the decision you need to make. And you’re starting to see common themes. There’s dozens of state proposals related to AI in HR, but a lot of them are starting to get to employee consent. A lot of them are starting to get to pre-deployment audits, yearly audits, or, as you saw in Illinois, just essentially almost restricting it in certain areas, and even the EU.
But what I’ve been arguing is these are all things, essentially business decisions, you have to make yourself, knowing that these are coming and knowing that you’re starting to see common threads such as consent, such as pre-deployment audits, at some point nationally or globally. If you start doing those, you’re going to then be in compliance, you know, essentially across the board. And for instance, on the pre-deployment audit side, that is not required under federal law. But the EEOC has put out guidance encouraging you to do it. Why? Because if you do a pre-deployment audit and you see that it’s discriminating against certain groups, and the reason it’s discriminating against certain groups, whether it’s the algorithm, whether it’s your applicant pool, or whether it was just a skill requirement that you put in there that wasn’t actually necessary for the job and that winds up having a disparate impact against certain workers, you could test that in advance of ever making an employment decision. You could test that in advance to see if there’s bias, and then correct it before you ever cause any harm to any individual worker. Right. So if you do that, saying, because I want to be in compliance with New York, because I have to be in compliance with the EU, because I primarily operate out of Europe, you’re going to be even further in compliance with federal law. Because you tested it, you found potential bias, you fixed it before you ever made a decision on someone’s livelihood. Right. And I think that is so important. Instead of fearing all these additional requirements, the more you sort of selectively choose, maybe because you have to, because you operate in those jurisdictions, or you don’t, just knowing that this is good governance, this is as far as it’s going to go, you’re going to be just in further compliance, and you can have more ease in using these tools.
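The kind of pre-deployment audit described here can be sketched in a few lines. This is an illustrative example only, not an official EEOC or Local Law 144 methodology, and all function names and data are hypothetical: it computes each group's selection rate, divides by the highest group's rate to get an impact ratio (the metric New York City's bias audits report), and flags anything below the EEOC's traditional four-fifths rule of thumb.

```python
# Illustrative pre-deployment audit of an AI screening tool's output.
# The 0.8 threshold reflects the EEOC's traditional "four-fifths" rule of
# thumb for adverse impact; a flag is a signal to investigate, not a
# legal conclusion. All names and numbers here are hypothetical.

def impact_ratios(outcomes, threshold=0.8):
    """outcomes: dict mapping group -> (selected, total).
    Returns per-group selection rate, impact ratio versus the
    highest-rate group, and a potential-adverse-impact flag."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}
    best = max(rates.values())
    return {
        g: {
            "selection_rate": round(r, 3),
            "impact_ratio": round(r / best, 3),
            "flag": (r / best) < threshold,  # below four-fifths benchmark
        }
        for g, r in rates.items()
    }

# Hypothetical screening results: (candidates advanced, candidates screened)
report = impact_ratios({"group_a": (48, 100), "group_b": (30, 100)})
```

Run against the tool's output on a representative applicant pool before any live decision. As the interview notes, a flagged ratio could stem from the algorithm, the applicant pool, or an unnecessary skill requirement; the point is to find and fix it before a decision is made on anyone's livelihood.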
So I don’t look at it as, let’s stop our programs, let’s stop doing this because there’s going to be so many new laws. These are all things HR professionals know how to comply with.
Matt Alder [00:24:57]:
Yeah.
Commissioner Keith Sonderling [00:24:57]:
Already it’s just instituting them now when.
Matt Alder [00:24:59]:
It comes to AI, yeah, 100%. That just makes perfect sense. What would your advice be? So, obviously, there’s a bit of a rush at the moment to develop tools with generative AI. Some employers are actually developing their own tools, which is interesting, but obviously there’s also a big ecosystem of new vendors out there building interesting tools, which is always great to see. What would your advice be to those vendors in terms of how they build their offerings?
Commissioner Keith Sonderling [00:25:34]:
Well, on the generative AI front, there’s a few different ways to look at it for HR professionals. One, there’s actually using generative AI to make job descriptions, using generative AI to do performance reviews, using generative AI to tell you where to place your employment advertisements, actually doing some of the HR functions. And the issues with that are no different than some of the issues we saw early on with machine learning and algorithms discriminating against certain groups. Because if that data is just coming from the Internet, if that is not your own data, if that is not what you’re looking at from your own vetted company policies or your own workforce, you have no idea what bias you’re injecting into that job description you’re asking generative AI to write. So if you say, generative AI, go make me the best job description or the best job advertisement for this level of employee, and it does it, and it looks better than anything you can draft, and you put it on the Internet, and for some reason it has injected another company’s bias that was out there, where the generative AI found it, or worse, put in job requirements that may be necessary for the job generally, but not necessary for your company’s actual job, that individual’s job, in that location, that’s how specific we look at it, then you’re injecting bias in there that you have no control over. And it’s already out there, and it’s already potentially causing harm and discriminating. So there’s so much, when it comes to generative AI and using it for core HR functions, to ensure that you’re not injecting somebody else’s bias, or somebody else’s requirements or skills or talent, within your organization, that is not critical, not necessary, that you won’t hold up, and you’re going to discriminate.
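One lightweight guardrail against the job-description risk described above is to diff AI-generated requirements against a list of criteria your company has actually validated for the role before anything is published. This is a minimal sketch under stated assumptions: the validated-requirements list, the function name, and the example strings are all hypothetical, and a flagged item still needs human review rather than automatic rejection.

```python
# Flag AI-generated job requirements that were never validated for this
# role, so unvetted (potentially biased or unnecessary) criteria get a
# human review before publication. All names here are hypothetical.

def unvalidated_requirements(generated, validated):
    """Return items from the generated list that do not appear in the
    role's validated set, comparing case-insensitively."""
    validated_norm = {v.strip().lower() for v in validated}
    return [g for g in generated if g.strip().lower() not in validated_norm]

# Requirements HR has confirmed are actually necessary for this role
validated_for_role = ["3+ years accounting experience", "CPA license"]

# Requirements a generative AI tool produced for the same posting
generated_by_llm = [
    "3+ years accounting experience",
    "CPA license",
    "Bachelor's degree from a top-tier university",  # never validated
]

to_review = unvalidated_requirements(generated_by_llm, validated_for_role)
```

Anything in `to_review` is exactly the category the interview warns about: a criterion the model picked up from somewhere on the Internet that may not be necessary for your company's actual job, and that could have a disparate impact if it ships unexamined.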
The second part of that, which I think is really coming for HR leaders, is how it’s going to impact the dynamics of the workforce. So let’s put aside using generative AI in HR to do the functions. You see all the reports about how generative AI is going to essentially eliminate so many positions within your company; you’re not going to need to recruit for these positions anymore because the generative AI can do it faster, more efficiently. And you’ve seen all those stats out there, all these companies predicting doom and gloom about all these jobs going away. And that’s a really serious concern for HR professionals, because, like everything else, they’re going to have to manage that change in their organization’s workforce, eliminating some of those positions and having generative AI do the work, or not going out and recruiting for some of those positions that they normally would have. And there’s significant issues with that. If you’re going to do layoffs related to generative AI, how is that going to impact your current workforce? What we’re starting to see is, well, who normally gets laid off when we have a big reduction in workforce? Higher-paid workers, who tend to skew older. Right?
Matt Alder [00:28:37]:
Yeah.
Commissioner Keith Sonderling [00:28:38]:
And especially if generative AI can do that work faster, more efficiently. So that’s going to have an impact on older workers. Or, you know, what I like to call the newer workers. I’m not saying younger workers, I’m saying newer workers. And this directly impacts all the work TA has been doing to diversify their workforce. The newer workers are much different historically than they’ve ever been before. They’re more diverse, they’ve come from a lot of different places. All those recruiting efforts, whether it’s on diversity, diversity of skill, diversity of background, to get them in the applicant pool, have paid off. And if you want to just simply say, well, let’s just lay off the newest crop of workers because they are the newest in the company, they haven’t ingrained themselves, it’s cheaper to do that, well, that’s going to have a disparate impact on certain groups. And we’re already seeing that the jobs that are going to be displaced by generative AI are going to have a disproportionate impact on women, on African Americans, on Hispanic Americans. And it gets very complicated, and I’m not a labor economist, as to why that is, about the jobs that are going to be displaced. But that’s all going to fall on HR, both on the front end, to make sure that there is no discrimination, and then when those individuals are laid off. The second part of it, too, when it comes to the generative AI equation, which again is in HR’s wheelhouse, is related to reskilling your current workforce to be able to use these tools. And are disabled workers going to be able to have the same opportunities to learn how to become a prompt engineer, if they need more time, if they need different kinds of adaptive devices? And same with, you know, older workers. And I’m not trying to be ageist here, but obviously Gen Z, who grew up on their phones, may be able to learn these tools faster.
Whereas you want to make sure your older workers don’t say, well, I’ve been doing my job the same way for 20, 30 years, I’m just going to quit now. Right? Or, you’re forcing me out, in that sense. And then it’s also producing a lot of anxiety and fear: well, no matter how old I am, if I’m training generative AI to do my job, at what point am I going to be terminated and generative AI is going to take my job? And that’s leading to mental health issues. So I like to say that because you may just think, well, generative AI is just going to come in, we want to do it. But then it goes back exactly to HR professionals to again take the lead in this, to make sure that these broader generative AI programs, outside of, you know, using generative AI to make a job description, are being implemented properly. That’s still within HR’s job description.
Matt Alder [00:31:15]:
So as a final question, and I’m not sure you’re allowed to make predictions, but I’m going to ask you anyway. And I think this, this kind of comes into what you were saying earlier. Where do you hope we’re going to get to with all of this? If we were having this conversation in three years time, what would we be talking about?
Commissioner Keith Sonderling [00:31:33]:
I would like to see mass integration of AI in HR and generative AI in HR. And I would like to ensure that, of course, HR professionals are still leading the charge in the integration and use of the software. Because unlike other areas of the business, when you’re dealing with AI in HR, you’re dealing with civil rights, you’re dealing with people’s ability to provide for their families. And that’s just a different level of care that HR professionals are used to providing compared to other areas of the business. So, as we all know at this point, AI is taking over all aspects of businesses. And I think that’s why HR professionals really need to be a very significant part of that equation and ensure that all the longstanding policies and procedures on how to hire an employee, how to promote, how to pay employees, that HR professionals use those and don’t lose sight of the fact that all of that still applies when using technology, when, you know, having AI make that decision or assist HR professionals with the decision. So my prediction, and it’s sort of a prediction and sort of a request, is that for this to work, HR professionals really need to not be intimidated by the changes in the law or the technology itself. Go back to the basics. And when you’re dealing with the vendors, you need to make sure that you’re asking the right questions: how is this going to work on my workforce? How is this going to work on our specific job descriptions? How is this going to work with our specific employees or applicant pools? And then, what are you going to do to ensure that our own use of it is proper?
And that’s half working with the vendor and also internally. And this is where, you know, HR professionals need to take the lead, saying, okay, we need not only to buy these products, but we need to then build the governance structure around them, identical to what we already have in our employee handbooks and our procedures. If somebody in HR makes a biased hiring decision, and we find out that they unlawfully hired, you know, a man for this position and that was their only qualification, generally they would be terminated for that, because you have equal employment opportunity policies and procedures. And that same thing needs to happen with AI: the same requirements that we’re using to hire, whether it’s a person or whether it’s AI, are going to be in place. That’s where it needs to go for this to work.
Matt Alder [00:34:06]:
Keith, thank you very much for talking to me.
Commissioner Keith Sonderling [00:34:08]:
Thank you for having me.
Matt Alder [00:34:11]:
My thanks to Commissioner Sonderling. Both Keith and myself will be speaking at HR Technology Europe, which is taking place in Amsterdam in May. You can get a 50% discount on your ticket by going to mattalder.me/europe and using the code MATT50. That’s M A T T 5 0. So mattalder.me/europe, and the discount code is MATT50. You can follow this podcast on Apple Podcasts, on Spotify, or via your podcasting app of choice. Please also subscribe to our YouTube channel. You can find it by going to mattalder.tv. You can search all the past episodes at recruitingfuture.com. On that site, you can also subscribe to our monthly newsletter, Recruiting Future Feast, and get the inside track about everything that’s coming up on the show. Thanks very much for listening. I’ll be back next time and I hope you’ll join me.