
Ep 729: Using AI Responsibly In TA


The AI landscape in recruiting is evolving rapidly, with vendors racing to add AI features and many employers eager to embrace transformation. But navigating this shift successfully requires understanding what questions to ask and which foundations to build. From vendor transparency to compliance, from bias auditing to data governance, the path to effective AI implementation is not a simple one.

What do TA teams need to consider to adopt a responsible approach to AI?  

My guest this week is Martyn Redstone, a highly experienced advisor on AI governance for HR and Recruitment. Martyn has spent the last 9 years working with AI in recruiting and has some incredibly valuable advice to share.

In the interview, we discuss:

• Getting the foundations right

• Why false AI confidence is dangerous

• Four key vendor evaluation areas

• Third-party auditing

• Shadow AI and data breaches

• Generative versus decision-based AI

• Global regulatory landscape challenges

• Why guardrails actually accelerate innovation

• The task-based future of work

Follow this podcast on Apple Podcasts.

Follow this podcast on Spotify.

00:00
Matt Alder
AI is a revolutionary technology, but it also comes with lots of risks, and this creates a big dilemma for employers. Play it too cautiously and you’re in danger of being left behind. Move too fast and you could expose yourself to serious risk. So how can TA teams unlock AI’s transformative potential in a safe, responsible way that doesn’t hold back innovation? Keep listening to find out. Support for this podcast comes from Greenhouse. Greenhouse is the only hiring software you’ll ever need, from outreach to offer. Greenhouse helps companies get measurably better at hiring with smarter, more efficient solutions powered by built-in AI. With Greenhouse AI, you can generate stronger candidate pools faster and source high-quality talent with more precision. Streamline the interview process with automation tools and make faster, more confident hiring decisions with AI-powered reporting.

01:06
Matt Alder
Greenhouse has helped over 7,500 customers across diverse industry verticals, from early stage to enterprise, become great at hiring, including companies like Airbnb, HubSpot, Lyft, SeatGeek, HelloFresh, and DoorDash. If you’re ready to put the power of AI into the hands of your hiring team, you can visit greenhouse.com to learn more. Hi there. Welcome to episode 729 of Recruiting Future with me, Matt Alder. If you’re interested in finding out how your TA function measures up in four critical areas, I’ve created the free Fit for the Future assessment. It’ll give you personalized insights to help you build strategic clarity and drive greater impact immediately. Just head over to Mattalder.me/podcast to complete the assessment. It only takes a few minutes. This episode is about technology.

02:38
Matt Alder
The AI landscape in recruiting is evolving rapidly, with vendors racing to add AI features and many employers eager to embrace the transformation. From vendor transparency to compliance, from bias auditing to data governance, the path to effective AI implementation is not a simple one. So what do TA teams need to consider to adopt a responsible approach to AI? My guest this week is Martyn Redstone, a highly experienced advisor on AI governance for HR and recruitment. Martyn has spent the last nine years working with AI in talent acquisition and has some incredibly valuable advice to share. Hi Martyn and welcome to the podcast.


03:33
Martyn Redstone
Hi Matt, thanks for having me.

03:34
Matt Alder
An absolute pleasure to have you on the show. Please, could you introduce yourself and tell everyone what you do?

03:41
Martyn Redstone
Yeah, absolutely. So my name’s Martyn Redstone. I’ve been in the recruitment industry for coming up for 20 years now, but for the last nine or so years I’ve been working with recruiters to help them take advantage of AI and automation. Kind of a pre-ChatGPT hype person here. However, more recently that work has been more focused on working with recruiters and HR leaders on getting the foundations right. So good governance, good compliance, good risk management around utilizing AI.

04:18
Matt Alder
Fantastic. And you put lots of fantastic content and commentary out there, so, you know, I was really looking forward to this conversation because I think people will learn a lot about what’s going on. But before we get into compliance and regulation and all of those kinds of things, just give us your take on what’s good about AI. Why should people be using AI in talent acquisition?

04:41
Martyn Redstone
First of all, a key part of the work I do is to not put people off using AI. I’m very pro-AI. I think that AI is, has been and can be exceptionally transformative if used correctly, if used safely and responsibly. You know, we hear all the old kind of sayings: it’s going to reduce the amount of time you spend on admin. That’s a good thing. Absolutely a good thing, because nobody ever becomes a recruiter to do administration and to fill in boxes on a system. AI is going to absolutely change the way that we analyze data, the way that we analyze people, the way that we break down what a job actually means into its sub-segments of tasks. It’s going to be absolutely, utterly transformative.

05:27
Martyn Redstone
So absolutely every recruiter should be thinking about how they enable themselves, their teams and their organization when it comes to AI.

05:38
Matt Alder
Absolutely. And I think the challenge for a lot of people is that it’s very difficult to learn about because it’s always changing. You know, it’s always a moving target, and it’s also an area where I think you don’t know what you don’t know. So, you know, I’ve seen people who consider themselves to be fully AI-enabled and trained up, and actually they’ve barely scratched the surface. Do you think that’s kind of a big problem?

06:01
Martyn Redstone
I think so. One of the biggest problems that I come across is that the advent of generative AI, the advent of ChatGPT and all the other kind of language-model-based chatbots out there have created, and I said this to somebody the other day, whilst they are full of hallucinations themselves, they’ve created hallucinations in people, where people now think that because they get half-decent results from a language-model-based chatbot, they now know everything about AI. And AI has been around for so much longer than three years, and it’s so much more than a language model and generative AI. So I think that it’s created a bit of a false confidence in people, which is quite dangerous.

06:55
Martyn Redstone
It means that people are now shortcutting the work that should be done to truly operationalize and enable artificial intelligence. And so that’s why we’re seeing some big concerns when it comes to compliance and governance. But yeah, absolutely. When I first engage with people, teams, departments and individuals, I ask them to rate themselves on a simple scale, basic, intermediate and expert, and it’s amazing how many people call themselves experts. And then by the end of my work with them, they say, I thought I was an expert, but I really wasn’t, you know, and I’m still not, but I’m on the right path now.

07:34
Matt Alder
The more that I dive into this and the more I learn about it, the further down the basic scale I rate myself. It’s quite funny. One of the things I’m running at the moment is an assessment for TA leaders called Fit for the Future, and there are some questions about AI in there in terms of what people are doing. And one of the things that’s coming out of the results is that the majority of TA teams are implementing AI through the existing vendors that they have. So, as we know, there’s been a kind of vendor gold rush of “we’ve got AI”, and that’s actually where most people are coming across it.

08:13
Matt Alder
So what are the key questions that people should be asking their existing vendors, or any specialist new vendors that they’re looking at, to make sure that they’re getting the products they’re being sold and they’re really doing the right thing?

08:27
Martyn Redstone
Yeah, so a key governance piece for me is exactly that: working with my clients to make sure they’re asking the right questions, or even asking those questions on their behalf, because unfortunately a lot of the time they’re not experienced or qualified enough to understand the answers that come back. So I break down key questions to ask vendors into four areas. The first is AI system usage and dependencies, which is ultimately asking a vendor: where and how do you use AI in your systems, including large language models? And do you have any core functions that are dependent on third-party AI vendors? So are you using OpenAI, Google Gemini, et cetera? Then we have compliance, governance and ethics.

09:12
Martyn Redstone
So asking a vendor what their governance frameworks are for ensuring that their AI complies with those evolving regulations, because they are evolving and changing continuously. All the old kind of compliance stuff, old but still important, like data privacy laws, ethical standards, those kinds of things. But more importantly, nowadays more than ever, training data. Training data is the source of all bias. So can a vendor share the source, privacy considerations and legal status of their training data? And do they have human oversight available? What role do humans play in governance and oversight of the AI’s development, its ongoing performance and especially any handling of exceptions? Can we have humans in the loop? Then we have the third area, which is model transparency, fairness and security. So asking for any incident history or security issues. Bias and fairness are really important.

10:13
Martyn Redstone
How do they define bias and fairness for the AI models? How are they tracking it, how are they auditing it, etc.? Then reliability and consistency, which is a really important thing that people don’t think about: how do you as a vendor measure and mitigate model instability? One of the things that I did recently was some research that showed that if you’re using large language models for ranking candidates, the rankings will change on a daily basis. So how do you measure that, how do you manage it, how do you report on it? We also look at completeness and shortcuts, which is how do we ensure that systems process all relevant information? Because we also find that large language models don’t always process all the information. And then finally in that area we have accuracy and hallucinations.

11:03
Martyn Redstone
Another key point that people don’t think about is that large language models have hallucination indexes. So ultimately, how are they measuring it, how are they mitigating against hallucination? And finally, the fourth area, I know it’s a bit long-winded, but it’s quite important, is data management, updates and user support, where we should be asking any vendor: how are you handling data from a GDPR or data security perspective? What’s your deployment, implementation and partnership model? And also, from an AI perspective, how do you handle explainability? So not only how can we contact you from a support perspective, but if we’re asking you to provide meaningful explainability on how an AI model works, can you provide that in a natural language way? Those are the key questions that we tend to ask.

11:56
Matt Alder
Is your TA function fit for the future? Over the last 10 years, I’ve done podcast interviews with hundreds of the world’s most successful TA leaders and discovered exactly what they all have in common: four critical capabilities, strategic foresight, influence, talent and technology. I’ve created a free assessment specifically designed for busy TA leaders like you to benchmark your capabilities against these four essential areas. You’ll receive clear insights into your current strengths and opportunities, with practical guidance on enhancing your strategic impact. The assessment is completely free, takes five minutes to complete, and you get your results instantly. Benchmark your talent acquisition capability right now and start proactively shaping your future. Visit Mattalder.me/podcast. That’s Mattalder.me/podcast.

13:04
Matt Alder
I suppose, just on that explainability bit, what level of transparency should people expect? Because, you know, it can be difficult sometimes when talking technology to really appreciate what you’re being told. So, first of all, what level of transparency should people expect? And secondly, tell us about third-party auditing. How can people be sure that they’re getting accurate information?

13:31
Martyn Redstone
Yeah, so transparency is really important as far as I’m concerned in all parts of your AI operations and AI enablement, from telling a candidate on your career page how you use AI and technology in the process, right through to vendor conversations. To answer your question, if you’re speaking to a vendor and you want to know how the AI model makes a decision, how the algorithm works, they should be able to explain that to you in a natural language way that a non-technologist can understand. That’s the basic level as far as I’m concerned, and it enables the recruiter to reiterate that back to a candidate as well. It should be understandable to a non-technical person in a natural language way. Then third-party auditing is really important.

14:17
Martyn Redstone
So what we’ve seen recently in the industry has been a lot of news around not only alleged bias in systems, but also some data security issues as well. So I think it’s super important to make sure that vendors and also internal teams are seeking third-party assurance. There are plenty of solutions out there, FairNow, Warden AI, they’re all great partners to provide that kind of third-party assurance around bias auditing, data security, information security management, absolutely everything. But the thing to remember is that, especially in Europe anyway, it’s not only vendors that are responsible for that; internal teams are also responsible for ensuring that they’re being audited and checked for any bias in their systems as well.

15:18
Matt Alder
I know it’s the same in the US. And I suppose that kind of brings us to fairness and ethics and all of those kinds of things. But in terms of actual regulation, what’s the state of play at the moment? Because we’ve seen there are state laws in the US, there’s the EU AI Act which is coming on stream at some point, and the UK seems to still be very much in an arguing-with-itself stage about where to draw the line. What’s the current state of regulation, and what can we expect in the near future?

15:45
Martyn Redstone
Yeah, I mean, look, we can sum it up by saying it’s really messy right now. The US changes its mind every few months around what it’s going to do from a regulation perspective, whether that be federal regulation, state-based regulation, or federal regulation banning state-based regulation. It’s all over the place. We saw last week the US federal AI Action Plan coming out from the White House, which I think really set the scene for what’s going to happen over in the US, which is pretty much deregulation, pro-innovation, let’s move fast, let’s win this AI war type of situation. And that’s totally at loggerheads with the EU, where, some would say, they have over-regulated artificial intelligence, but they’ve done it from a trust and responsibility perspective.

16:40
Martyn Redstone
And that’s just the two big players. We then have other areas, like South America, Australia and Japan, that are all bringing in regulation that sits somewhere between the US and Europe on that spectrum. Over here in the UK, what we have seen is new hope: there’s an AI Bill moving through our Parliament right now, which is being pushed out of the House of Lords by Lord Holmes. What we’re seeing there is potentially a new regulator being created, or most of the responsibility being pushed onto an existing regulator like the Information Commissioner’s Office, who have done the most when it comes to AI so far. But we’re not sure what that’s going to look like. It’s definitely going to be, I think, kind of halfway between Europe and the US.

17:40
Martyn Redstone
We’re still very pro-innovation here in the UK, but we’re also very pro-regulation as well. So it’s going to be an interesting six to 12 months here in the UK to see what happens. But yeah, the advice that I tend to give to my clients is, if you’re global, then I would regulate based on the EU AI Act, because it’s pretty much the strictest out there. And if you’re doing that and you’re in a jurisdiction where there is no regulation, then you’re covered, ultimately. So yeah, regulation is a bit of a mess right now. There was something in the media over the last couple of days where one country was pushing for global regulation; I think it was China, if I remember correctly.

18:25
Martyn Redstone
So yeah, there’s lots going on, really, lots going on all over the world. It’s a very messy place. There are some great regulation trackers out there. I provide one privately to my clients, but FairNow have a great one on their website as well, so you can keep up to date on what’s going on.

18:40
Matt Alder
And to reiterate something that you said earlier in the conversation: just because AI regulation is in a certain state of confusion, existing laws still apply around data protection and discrimination, and that’s where people seem to be falling down, isn’t it?

18:54
Martyn Redstone
Yeah, absolutely. I said this a while ago: as I mentioned earlier, the new post-ChatGPT magical hype has almost created a forgetfulness around previous regulation. And what I’ve seen happening time and time again, which is really worrying, is this forgetfulness around things like information security, data privacy, breaches of GDPR, breaches of your own internal commercial information protection. So there are lots of scary things going on, but absolutely, you’re still liable under current law. But at the same time, it’s a good thing to do things responsibly, safely and ethically, because you’re setting yourself up for the future.

19:49
Martyn Redstone
There’s literally just over a year to go until the high-risk element of the EU AI Act, which includes employment, comes into play. That doesn’t mean you shouldn’t be doing things now, because if you’re working on AI transformation projects now, there’s no point ignoring that regulation when in 12 months’ time you’re going to need to comply. So you might as well be planning for regulation now in the work that you’re doing, rather than putting things on hold.

20:14
Matt Alder
Yeah, I mean, I think that makes perfect sense. And what are the common mistakes that you see employers making and you know, which of them could be really significant in terms of what’s going to happen?

20:25
Martyn Redstone
Yeah. Actually, this is one of the reasons why I started working more and more on governance and compliance. Whenever I’ve run AI projects over the last nine years, I’ve always started with good governance and good compliance, and the biggest mistake that I see is people jumping in with both feet and not thinking about the foundations of what makes a good AI project. So I would say start with the guardrails, because guardrails for me aren’t there to stop innovation and stop your AI projects. They’re there to help you go faster safely.

21:03
Martyn Redstone
I always use this analogy: guardrails, literally in the middle of a motorway or a highway depending on where you’re listening from, those big metal structures in the middle of the road, aren’t there to slow the traffic down, they’re there to let people go faster safely. Because if they weren’t there, everybody would be going really slowly, scared that they were going to veer off into the oncoming traffic and cause a horrific accident. So the whole point of a guardrail on a highway is to allow people to go quicker, safely. And that’s the whole point of having guardrails when it comes to AI in an organization: it’s to allow people to understand how to go quicker, safely.

21:44
Martyn Redstone
And what it stops, and what we’re seeing again and again in the media, is this concept of shadow AI, where, because organizations aren’t laying down the rules and aren’t supporting the rollout of AI properly, they’re seeing people bringing their own AI to work, BYOAI. And what that means is that we’re now seeing breaches of GDPR, breaches of personal security and breaches of commercial data happening, because people are using their own AI in a very unregulated and unmandated way. So that’s absolutely the biggest mistake I see. The second biggest mistake I see is people now trying to use generative AI to make decisions. For me, there are two different types of AI in the world: there’s generative AI and there’s decision-based AI.

22:41
Martyn Redstone
And what we’re trying to do is fit a square peg in a round hole. A lot of the time we’ve got vendors out there who are building large-language-model-based screening tools, and that’s a poor idea, because all of the research out there shows that large language models shouldn’t be making decisions. They’re text generation engines at their very core. They don’t reason, they don’t have decision-making ability, and they’re full of bias as well. So those are the two biggest mistakes that I’m seeing people make nowadays.

23:14
Matt Alder
That’s really interesting and really valuable information. And as a final question for you, it’s impossible to predict the future. It’s even more impossible to predict the future with AI, because something could have been announced while we’re recording this that changes the game.

23:27
Matt Alder
But with that said, where do you anticipate we’re going to go with this? You know, what does the future look like for you?

23:34
Martyn Redstone
Yeah, it’s a great question. I get asked this quite a lot. I think we’re all talking about agentic AI now, but I’m going to move away from that, because I don’t think we’re going to see true agentic AI in our industry for some time, due to the challenges around human-in-the-loop. But what I do see is a future in the world of work that is going to be absolutely driven by AI. And I mentioned it at the beginning: I see AI as a great way to analyze data, to break down work into its constituent parts, to break down a job into its constituent tasks.

24:12
Martyn Redstone
And that’s where I see the future going: being able to say, okay, as an organization we no longer need jobs doing, we need tasks doing, and we’re able to understand, using AI, what the constituent makeup of a task is. Because for me, tasks have always been different skills being done at different competency levels. And our job will probably end up being to make the decision on what’s best to do that task: is it a human or is it a machine? So that’s where I see the future: task-based projects, ultimately a gig economy, where we as talent acquisition professionals have moved more onto how we resource a task, how we resource a bunch of tasks, and making that decision between engaging a machine or engaging a person.

25:08
Matt Alder
And lastly, how can people contact you or connect with you or follow your content?

25:13
Martyn Redstone
Yeah, absolutely. LinkedIn is always the easiest place. So Martyn Redstone on LinkedIn, Martyn with a Y. You can also go to my website, which takes you to wherever I am, which is martinredstone.com, but LinkedIn is always the easiest place to get me in the first instance.

25:28
Matt Alder
Just head over to Mattalder.me/podcast. It only takes a few minutes and you’ll receive valuable insights straight away. Martyn, thank you very much for talking to me.

25:31
Martyn Redstone
Really, really enjoyed it.

25:33
Matt Alder
You can search through all the past episodes at recruitingfuture.com, where you can also subscribe to our weekly newsletter, Recruiting Future Feast, and get the inside track on everything that’s coming up on the show. Don’t forget, if you haven’t already, you can benchmark your talent acquisition capability quickly and easily by completing the free Fit for the Future assessment. Thanks very much for listening. I’ll be back next time and I hope you’ll join me.


