AI is the most disruptive technology we’ve seen in our careers, but most industry conversations stay at the surface level of use cases and future predictions. This special bonus episode goes deeper.
These four interviews, taken from a behind-the-scenes film I recently made, explore how AI recruiting products actually get built. We’re talking to the people who created SmartRecruiters’ AI-first platform Winston to uncover the lessons TA professionals need for their own AI transformation.
Featuring:
• Shefali Netke, VP of Design, on how design decisions shape AI adoption in enterprise recruiting
• Thibaut Allard, Global Data Privacy & AI Lead, on navigating AI compliance where global regulations and rapid innovation collide
• Dave Novak, Global Director of Managed Services, on what it actually takes to turn AI ambition into operational reality
• Nicole Hammond, VP, Center of Excellence, on building true AI readiness from early mindset shifts to sustained capability
If you want to understand the architecture, the legal landscape, and the human side of making AI work in recruiting, this is the episode.
Transcript:
Matt Alder 00:00
Hi there. This is a special bonus episode of Recruiting Future, featuring four interviews taken from a film that I recently made that looks behind the scenes at how AI products are actually being built for recruiting. I really hope you enjoy the conversations.
Matt Alder 00:21
AI is the most disruptive technology that we’ve seen in our careers, and it’s heralding a whole new future for talent acquisition. Most of the discussion in the industry around AI is about use cases and what the future might look like, but it’s really important that we understand how the technology works. What are the implications for the way we design software? What about the way we interface with recruiting systems? What are the legal issues? How do we adapt our processes to really make AI work and get its true value? And also, how do we educate and move our teams forward to adopt this new technology in a way that really works? The most important thing to me in all of this is transparency. SmartRecruiters recently launched Winston, a recruiting platform that’s been built AI first from the bottom up, and I’m delighted that they agree transparency is just as important to them as it is to me. In this series of four interviews, we’re going to be talking to the people who built Winston and continue to evolve it, so that you as talent acquisition professionals can learn the lessons that they’ve learned to superpower your AI transformation.
Matt Alder 01:44
Hi, Shefali, it is fantastic to be talking to you. Could you just introduce yourself and tell us about your role at SmartRecruiters?
Shefali Netke 01:53
Matt, it’s really great to be here. I am Shefali Netke, I am VP of Product Design at SmartRecruiters.
Matt Alder 02:00
Fantastic. Now, you’ve been working on the AI platform for a number of months now, and it’s obviously been a massive project. Give us a bit of background around SmartRecruiters’ approach to user experience design. What are the key principles that you’ve kind of traditionally followed?
Shefali Netke 02:20
Yeah, we actually have an acronym for our design principles that we call “WEL”: our design should be worldwide, easy and lovable. Worldwide is about creating experiences that scale to any type of organization and are also accessible to any type of user and audience. Easy is pretty self-explanatory. And lovable, you know, it should not only be easy, but folks should enjoy using our platform. So those three are quite core to how we design here.
Matt Alder 02:47
And I’m sure designing for AI has thrown up a number of challenges and differences from the way that you worked in the past. When did you realize that you were going to have to change the way that you think, and that the traditional design principles for digital weren’t going to work in quite the same way for AI?
Shefali Netke 03:06
Yeah. So about a year ago, maybe a little bit more, we had this very intentional shift towards moving to AI products and thinking about that, and we had a workshop where we did a lot of vision typing to just dream and explore. And as we were doing that and coming up with ideas, we soon realized the way we were working was not what was going to take us forward; even just prototyping and proving out those ideas was proving to be really different from our previous approach. So we learned pretty quickly that we would have to pivot.
Matt Alder 03:37
And how long did it take to do that? I mean, how much time did you have to invest in research and planning and experimentation to really make that shift and learn what you needed to learn?
Shefali Netke 03:50
Yeah. So once we decided we wanted to make the shift, we actually took a full quarter to do research, which was amazing for our product and design org. We talked with a number of customers and spent our whole time in customer interviews. You can’t learn everything in a quarter, of course, but we accelerated our learning that way and went really deep with folks, and we’re continuing to learn along the way. But this dedicated time was really game changing for us.
Matt Alder 04:14
And is that the first time you’ve had that much time to think about something? Definitely, yes, absolutely. And what is it that’s so challenging about, you know, designing conversationally? I mean, what are the sort of early assumptions you had? How did you need to rethink? Did you need additional expertise on the team to make it work?
Shefali Netke 04:37
Yeah, so in traditional design there are, you know, happy paths, and there are a lot of different flows you can take, but they’re fairly predictable. We can sort of map them all out, and we create all the different states. With conversational design, you never really know, right? Humans are unpredictable. It can go in a lot of different directions. It’s nearly impossible to figure out everything. And we started working this way and soon realized that we needed to bring an expert in. So we actually brought on a conversational designer whose specialty is in linguistics and who has built different tools and software around that specific area, and had her work with our product designers and integrated her into the team to really accelerate our knowledge there.
Matt Alder 05:19
And I think we should probably explain a little bit about what conversational design is, how it works, and why it’s so important to your product. Can you give us a little bit of background on that?
Shefali Netke 05:33
Yeah, so I think it’s one of the many ways that AI plays into products, and for conversational design in particular, it’s a different way of really interfacing with SmartRecruiters. So instead of clicks, it’s more, you know, typing in or using prompts, working with who we call Winston in our system, and having him help you throughout the process. So there are a lot of different paths you can take to a particular solution, versus something where you do a click and you know exactly where it’s going to go. It’s much more like a conversation, as it sounds.
Matt Alder 06:07
And how do you prototype around conversations? Because thinking back to the early days of chatbots and those kinds of interfaces, they were all heavily scripted, so presumably you could plot the path through them. How does it work when the AI is generating conversation, but also when you’re not quite sure what the user is going to ask or say?
Shefali Netke 06:31
Yeah. So I think this is where our AI prototyping comes in, and we shorten those cycles a lot. We can’t wait to build out, you know, this perfect prototype, or try to think of all the possibilities and then test it. We build smaller versions that we can iterate on and actually do live testing. So we need prototypes that folks can actually type into and get a response from, versus having really predictable paths there. So we try to shorten the cycles to learn quickly and move forward.
Matt Alder 06:59
And obviously Winston is, you know, the name of the product, the name people are talking to, I suppose. Did you think about the personality of Winston when you were designing this?
Shefali Netke 07:15
Yeah, absolutely. It’s always such a fun and interesting conversation about what the balance is with Winston, right? He’s there to help you along the way. He’s someone who’s knowledgeable. We did a lot of persona mapping and thinking around what that feeling should be when you’re interacting with Winston, which was a really, really fun exercise, I would say.
Matt Alder 07:35
Yeah, I can imagine. And you had a design partner program within this, so you were co-creating it with users, with customers. Tell us about that.
Shefali Netke 07:47
Yeah, absolutely. So we had about 18 to 20 core customers we worked with; we talked to several more. But with the speed of how the AI industry is moving, and the innovation and everything we’ve been wanting to test, we realized we needed folks that were there with us along every step of the way, versus the longer cycle we sometimes talk to customers on. So in this case, we started our design partner program, and folks were technically signing up to talk to us maybe every other month, but every couple of weeks was what we actually ended up doing, because folks were so eager and our cycles were much shorter. And the reason we did this is so we could test faster, so we could iterate with them and really take them on the journey together. Because when we started conversations with them, they also came to us and said, we’re trying to figure out our AI strategy and how it impacts our systems and our HR strategy, and it was really interesting to have that conversation and build together along the way.
Matt Alder 08:47
And how important were their voices in shaping what this eventually became?
Shefali Netke 08:52
Yeah, super important. So they were there throughout the journey, right? Our earlier conversations were a bit more user research and contextual, and then as we started developing, the sessions turned into a bit of a mix. We would have a conversation to understand their struggle points, and then for part of the conversation they would test the latest thing we’d been developing, and by the next conversation we’d already iterated on that and put their feedback in. So it was really nice to have them just in the cycle of how we’re working.
Matt Alder 09:19
And was there anything that really surprised you from that process?
Shefali Netke 09:23
Oh gosh, good question. I think that it varies quite a bit. I think that there are a lot of different types of users, and everyone has different levels of comfort with AI. So we were talking to one of our design partners, and we were describing these, you know, really complex use cases, and one of the people on the call said, what about just write my emails for me? Like, a really simple use case. We’re like, oh, we can already do that, though. And, you know, we made that clear to them. And really finding that balance there, right? So how do we make sure that we’re solving for different comfort levels and different expectations? And I think that has been a really interesting challenge, because there are different levels of adoption with AI, so figuring out where the team is at, and then even within that team there’s a variance of comfort and readiness to adopt. So that’s been really interesting to see. We have some people who are very eager, and then some folks who are more cautious, and that’s where a lot of the trust building comes in, and really understanding how to do that, and even starting to put that directly in the product to build trust, because we can’t talk one on one with everyone, right? So it needs to be built into the experience. So I think that was really interesting to see play out.
Matt Alder 10:39
Yeah, absolutely. And let’s dig a bit deeper into trust, because I think a lot of people are suspicious about AI, you know, worried about that kind of thing. There are huge issues around compliance and all those kinds of things, so trust obviously is really important. How did you make sure that trust was built in, and that the system wasn’t doing things that would erode people’s trust in it?
Shefali Netke 11:15
Yeah. So in addition to our design partner program, we do have a trust and safety committee that our SVP of Engineering, Mihal, runs, and in that we have a panel of folks we’re talking to, really staying on top of everything related to compliance and how folks are feeling about this. And then directly in the product, we worked pretty closely with our legal team to understand transparency and what it should look like in the system. So we developed, you know, a specific AI badge, as a lot of folks do, to make sure that whenever we are using AI in the product, it’s super clear when that’s happening. And also finding the line of when AI should be taking specific actions, because at the end of the day, for any big decision or anything really critical, we do want the human to decide. So finding that balance and making sure folks know that that’s not happening on their behalf unless they want it to for a particular reason. So really providing context every step of the way, and making it really clear where AI is or is not involved.
Matt Alder 12:16
And on that, the AI is obviously the tool here that is powering parts of the platform, and the platform is there to get outcomes and drive value for your customers. How do you make the decision about what is AI and what’s not AI, what AI does and doesn’t do, and how much it’s deployed in the platform as you’re designing it?
Shefali Netke 12:44
Yeah, absolutely. I think it’s a really interesting conversation. I was at a leading design conference a couple of months ago, and there was a big conversation around how we don’t have to put AI in everything, and being intentional about whether it needs to be there or not, right? It can be helpful in a lot of cases, but there’s no need to force it into every situation. So being very intentional has been our approach: where is it helping with the process? And I think a big part of it is also looking at experiences end to end. If you’re always chopping things up and looking at narrow pieces, sure, it might make sense, but when you look at it end to end, what does that flow look like? Is AI supporting throughout? Is that a clear pathway? In what cases should a human go in there? Is that human touch more critical or more important, and in what cases is AI supporting? It really should be helping you, not working against you. So that end-to-end view is really great for that.
Matt Alder 13:38
Obviously, this is an ongoing process. It’s not something you just finish and then set aside and revisit a couple of years later. So what’s the biggest challenge that you’re still working on, now that there are customers on the system and you can see how everything’s working?
Shefali Netke 13:55
Yeah, I think it’s always fine-tuning that end-to-end experience and adjusting, because we have, you know, new products and new features always coming out, and figuring out how they fit in that journey, and making sure that we are checking in with our users along the way. So I think a lot is constantly changing. Folks are interacting with AI so much more now as well, so we’re seeing a quick pickup in AI literacy. So versus a year ago, the way folks are interacting with our system is very different. It also means expectations are different in terms of what they’re looking for from Winston. So I think a lot of it is keeping up with that in the small pieces while keeping the flow consistent, still smooth, and true to our design principles.
Matt Alder 14:46
Yeah, absolutely. I mean, we’re at a stage where technology is evolving really quickly, but as you say, the way that people are using it and getting used to using it is also evolving. Stepping out of the everyday just for a second, as a kind of final question: where do you think this is all going? What does the future look like? Where might this system end up in a few years’ time?
Shefali Netke 15:17
Interesting question. I think the conversation will be less centered around AI. It will just sort of be implied that it’s running in the foundation of the system. And I think we’ll continue to figure out how to make experiences really seamless, making sure that folks can interact with different types of experiences, maybe from different devices, different modalities. And I think it will fade a little bit into the foundation, where it’s not, you know, the initial reason or pull point for folks to come in; it will just be expected, right? Of course AI is in the system, it’s powering a lot of automation and these conversations. And I’m also really curious to see how the experience evolves. I think it’s going to be much more conversation-driven, and I think it’s going to be great for candidates as well. I think the candidate-facing side is going to evolve a lot. And overall, I think the experience is going to change quite a bit, and I’m super curious to see how it goes.
Matt Alder 16:24
Absolutely, it’s going to be such an interesting few years watching how everything develops. Shefali, thank you very much for talking to me.
Shefali Netke 16:32
Yeah, absolutely. Thanks so much for having me.
Matt Alder 16:42
Hi, Thibaut, it is fantastic to be talking with you. To start with, could you introduce yourself? Tell us a little bit about your background, you know, what you do, and what a privacy and AI analyst does at SmartRecruiters.
Thibaut Allard 16:57
Hello, Matt, thank you for having me. It’s a pleasure to be here, and an honor. I have worked at SmartRecruiters for three years, but it’s a bit of a tricky path compared to the classic way to work and to learn, because I followed my master’s in digital law at Paris University, at the Sorbonne, while also working for SmartRecruiters. So two or three days a week I was studying or working for SmartRecruiters. I achieved my master’s last year, and it’s been one year that I have worked full time at SmartRecruiters. My position of privacy and AI analyst consists in, first of all, assessing the regulatory impact on the SmartRecruiters product; also easing the sales process and transforming law and compliance into a competitive advantage and into sales deliverables, so a lot of my work involves the creation and drafting of deliverables; listening to the expectations of the market and transforming a legal requirement into a strength; and also, finally, fostering a culture of trust and transparency at SmartRecruiters.
Matt Alder 18:14
Absolutely. And you know, it’s an important job, because there is just so much confusion about AI and regulation and ethics and all those kinds of things. As a bit of context, just tell us a little bit about the current regulatory landscape for AI and recruiting.
Thibaut Allard 18:35
Yeah, so the first real AI law that was published, especially in hiring, was not the EU AI Act, as we might think. It was the New York City law, Local Law 144, concerning automated employment decision tools, which is still requiring tech providers, especially in hiring, to carry out a bias audit, an algorithmic bias audit, each year by an independent third-party auditor, and to produce discrimination reports based on the US methodology of the EEOC. So this was the first; New York paved the way on this. After this, obviously, you have the EU AI Act, which was also a big piece of AI regulation and is also inspiring other countries. The EU AI Act is based on risk classification, so you have unacceptable risk, high-risk AI systems, and so on. For example, South Korea, on December 26, also adopted an AI act that was largely inspired by the EU AI Act, and it is the same for the state of Colorado, because in the US there is no federal law on AI, despite some efforts towards this. For now, the US states have the autonomy on this. And the EU AI Act also inspired drafts, because we also have to follow the laws that pass or don’t pass. For example, in Canada and in Brazil, there were some AI drafts that did not pass due to government blocking and so on. But yes, long story short, we have some laws in the US at the state level, the EU at a centralized level, and Asia, which is taking more freedom. Some countries are following the European way; some are just creating their own, based more on innovation, and they are more flexible, for example Singapore and Japan. And then you have the very specific case of China, and China is more about protecting the state’s interests, and is also really careful about the impact that AI can have on the Chinese public.
Matt Alder 21:02
I mean, it sounds just so complicated. I think there are a couple of things that I really want to dig into a bit deeper there. Let’s talk about, I suppose, the ambiguity around this. You mentioned a couple of countries that have draft legislation that didn’t happen. I know that the UK is working on something at the moment; it seems to have been working on something for quite some time, but it’s working on something at the moment. How do you deal with that kind of ambiguity, but also with likely future developments? I mean, how do you prepare for regulations that don’t actually exist yet?
Thibaut Allard 21:37
Yeah, so first of all, I have to follow the legal news. Thankfully, the tiny world of AI compliance really helps each other, so we have our own influencers that are publishing some news, etc. For example, just this morning, Italy passed a new AI law to implement the EU AI Act, and I could not have had this information without other colleagues who are not working at the same company. Also, we are set up with some newsletters; I am probably the only one who is subscribed to the Australian Parliament newsletter. And at SmartRecruiters we are also subscribed to a very helpful tool called DataGuidance, a feature of OneTrust, which basically centralizes all the legal news across the world, and they have one dedicated lawyer per country, so we can keep track of the jurisdictions that we are interested in.
Matt Alder 22:42
And it’s obviously a global company and a global product, and there are obviously some similarities there, but there are geographies and countries that are going off in a completely different direction. How does that complicate product development? I mean, how do you deal with that aspect of it, different rules and regulations in different places?
Thibaut Allard 23:03
Yeah. So first of all, to develop the product, we need a roadmap, and so the battle is in the roadmap: to identify and anticipate new requirements that could arrive, and to balance this with the roadmap and the needs of the engineers, the product, and so on. The innovation will always come first, and the lawyers will always be late; it is the lesson of the whole history of the law. So first of all, you have to identify trends and also movements of the courts. For example, you might see that the Court of Justice is going one way, and sometimes a local court in the US joins this position. Or you can also watch the news and activities of the authorities; for example, the authority in California is really joining some European positions, and some not. So first of all, you have to stick to the roadmap. Then you have to identify the priorities of your customers and of your prospects when you see questions coming in the RFPs. For me, the RFPs are a very good tool to understand what companies need and what their expectations are. And by combining this, let’s say, breaking-news aspect with listening to the market through the RFPs, comparing different products, or discussing with colleagues, you can have a certain idea of what you should put in the next quarter.
Matt Alder 24:49
And I suppose, looking at the actual regulations themselves, what are the high-risk areas when it comes to hiring at the moment? What are the real things that everyone needs to keep an eye on?
Thibaut Allard 25:03
So the first high-risk classification was created by Brussels, by the EU AI Act; it’s the first law that introduces this. Now we have this also in South Korea and in Colorado, and also in Texas. And for the EU, Colorado, and South Korea, they have set a very specific list of AI use cases. These are the use cases that are not banned from society, and we see the concern of this mostly Western-oriented mindset placed on the fear of a social credit score, for example. So outside of hiring, banking credit loan systems would be high risk for all of these countries; the idea of an AI that could reject your loan without a pair of human eyes is a fear that is deeply felt in these countries and on these continents. More specifically on hiring, lawfully and legally it is considered that applying for a job is an important part of your life, an important moment, and we all know that when we obtained a new job and we started a new job, it was a new chapter of our life. And this is the same here: the governments are not willing to have a fully automated process where just machines are giving jobs to humans. So high risk is not banned, but it’s under conditions.
Matt Alder 26:49
And I think one of the most interesting things to me, when I looked at that New York City law, which seemed to be the first one that came out, and I think this is true in the EU as well, you can tell me whether I’m right or not, is where the liability for this lies. So it’s not just with the software provider, it’s with the end user or the employer as well, isn’t it?
Thibaut Allard 27:13
Yeah, actually, we are in a gray zone for the responsibility, and this is, in my opinion, the debate of the next five years: the liability of the tech providers and of the AI providers. “Provider” is now a legal term; under the EU AI Act and the Colorado AI Act, you are the provider of a technology, and so you are starting to be considered like the provider of a car. AI and software are now in that trend, and we come back to the trend because, actually, except in the EU, we don’t have an international standard on this, but providing AI or software is like providing a car. And so you have to respect some norms about how the door works and how the window works, to avoid any harmful outcome for the user.
Matt Alder 28:06
Yeah, that’s a really interesting analogy there. And then the user is effectively driving the car, so they’re subject to regulations as well. Moving on from the regulations themselves, you mentioned right at the start that a big part of your job is to translate this into the design and development of the products and all that sort of stuff. How do you work with that aspect of the business to make sure that things are compliant, but also to make sure the product is being built in a way that really meets the needs of the market?
Thibaut Allard 28:41
First of all, you have to create a relationship with your engineers and your product team. You have to be included in the first ideas, and when the idea is getting serious, then you have to loop in your compliance team so you do the perfect job in one shot, and you don’t have to do further development to rectify mistakes afterwards, which means losing effort, time, and money for all the stakeholders. So when the idea is getting really serious, we work, for example, with product requirement documents, so every team is working on “I need this feature, this outcome, from this product,” and compliance, as another team, is doing the same. After that, we have to keep track of the events; we have to dig deeper to understand which data will be processed, what documentation we are building, which features we need to implement, what the risk is, and what we should prioritize, and then to embed this compliance in the UX. Compliance is now getting more and more into the UX; for example, Article 50 of the EU AI Act requires you to label AI-generated content, so if you interact with a chatbot, you already have the right to the information that you are actually interacting with an AI.
Matt Alder 30:16
And regulation and compliance are only part of the overall picture here. What about ethics? How does SmartRecruiters deal with the ethics of using AI in hiring?
Thibaut Allard 30:28
It’s a very difficult question, because ethics is not the law, and ethics is personal. Ethics is related to your culture, sometimes to your religion, sometimes just to your childhood. It is very personal and relative, and especially in this business, as we operate globally, these things can really be relative. For example, just to give you an overview of the cultural discussions that we will have in the next years, the relationship with AI in Asia is absolutely not the same as in Europe. BCG did a study on the appetite for AI across countries, and for example, in Asia, there is a much bigger percentage of the population that is willing, and would enjoy, to have a chip in the brain to speak more languages, for example, and for their child. In the West, it is something that is not acceptable, or not really a needed discussion, but there is an appetite in parts of Asia. So this is the ethics, and after, we have to align. So we have launch leaders and an AI and transparency committee, so we are discussing this; we are bringing in some customers from different profiles and different functions, different industries, also to discuss. And after, you have to question. As a legal professional, you don’t necessarily have to make the decision, but to give all the information to the decision maker to make the best one, so you are just questioning and trying to ask the most important questions to make the decision the best. What is your relationship to the candidate? What do you want to enhance? What is a perfect world for you? These are very personal questions that you have to answer with a lot of context.
Matt Alder 32:26
You mentioned earlier in the conversation that you were looking at what was coming over in the RFPs from customers and potential customers in terms of, you know, regulation, ethics, compliance, all of that kind of stuff. What are you hearing back from employers out there? What are the big concerns of their legal teams? What does that look like at the moment?
Thibaut Allard 32:51
The real difficulty, actually, is the gray zone that we have on liability, and so everybody is stressed about being held responsible for a bad AI output. So we are starting to build a framework on this. But just to give you an example, in the US we have a fascinating trial with Derek Mobley. Actually, in US law, you have the employers and the employment agencies. An employer would be a customer of SmartRecruiters making the hiring decision to hire a person to do a specific job, and an employment agency is, as you know, Michael Page and so on, but there is not actually a legal status that gives a framework and puts some burden on the tech providers. And all the negotiation, all the expectations, and finally all the discussions that we have with customers are about how we define for ourselves a safe framework: for me as a customer, my interest; as a tech provider, my interest; and what is the interest of my candidates? And so we are agreeing among ourselves and dealing to secure this part.
Matt Alder 34:06
And yeah, for employers who are watching this, this will seem complicated and worrying, and, you know, frustrating as well, I would imagine. What would your advice be to talent acquisition leaders who are evaluating AI solutions from vendors? What kind of questions should they be asking? What sort of red flags should they be looking for?
Thibaut Allard 34:33
On the customer side, I think the first, most important thing is to ensure that your staff is trained on AI. Now, in the EU AI Act, you have a requirement to fulfill: every company using AI, whatever your profile, has to train its organization on AI. Understand what an AI output is, what could be a deepfake, what could be a harmful output. And I think this is really the basis of our work, to start using AI after this. I think another huge priority is to define your risk appetite and your AI appetite, and to define your use cases. And after that, when you come to selecting AI vendors, then you can go deeper, and you shouldn’t just ask whether the company whose services you want to use is compliant. Compliance is not a question that can be answered by yes or no. You have to evaluate the maturity of the stakeholder that you want to partner with; you have to understand the team behind the product that you want to acquire, because you will need collaboration on this. And in fact, there is a whole new AI supply chain, and there are new operators and actors everywhere, and we have to discuss between humans to be sure that I know this guy at this company, and I trust him, and he will do the work to secure the whole thing, the whole deal.
Matt Alder 36:10
That makes perfect sense. I think that’s a really interesting perspective on it, definitely. And as a final question, and it’s a really difficult question to ask someone who works within a legal framework: where do you think we’re going with this? Do you think we’re likely to see more regulation in the future, or, as people get more comfortable with AI being everywhere, will governments step back a bit from it?
Thibaut Allard 36:39
Yes, absolutely. I expect more and more AI usage across the whole of society, worldwide. Governments are starting to use AI for their own recruitment as well; we hear sometimes from public bodies and so on that they want this too, and governments will digitalize themselves a lot. I think asking, or waiting, or going to an office to get your ID will be something that will never exist again. Everybody will use AI, and we will see more and more regulation that will be stricter on the private sector and prioritizing the public services, as always. I did some public law, and the state always prevails when it comes to public law. And it is also to slow down the pace of innovation, which has been going absolutely crazy for five years in the AI world. And specifically in recruitment, we are facing, in my honest opinion, and it’s only personal, an AI arms race between candidates and companies that is completely blocking things, because we have some AIs that are just talking to each other. We know that candidates are writing resumes with ChatGPT or other AIs, and candidates are actually using AI testers to evaluate their eventual ATS score, and so we will have more regulation to unblock the labor market, which is actually quite hard in the West.
Matt Alder 38:18
No, I think that makes a lot of sense. It’s going to be an interesting few years. Thibaut, thank you very much for talking to me.
Thibaut Allard 38:27
Thank you as well. Thank you. Thanks a lot.
Matt Alder 38:37
Hi, Dave, it’s a pleasure to be talking to you. Could you introduce yourself and just tell us a little bit about your role at SmartRecruiters?
Dave Novak 38:46
Yeah, hi Matt. Sure thing. Thanks for having me, pleasure to be here. So my name is Dave Novak. I am the Director of Managed Services here at SmartRecruiters, and what that means is I’ve got a team of people that work with our customers to make sure that they realize the highest potential of their SmartRecruiters investment. That takes many different shapes and forms. It could be career site related, it could be reporting and analytics, it could be functional, it could be technical, but we work with all of our customers to make sure that they’re seeing the highest return on the investment that they’ve made with SmartRecruiters.
Matt Alder 39:19
Tell us a little bit about your background. What did you do before you did this?
Dave Novak 39:25
Yeah, yeah, absolutely. So I am a career recruiter and recruiting leader. I’ve been recruiting for almost, well, actually now over 20 years, and I’ve been practicing in hospitality, e-commerce, and consumer goods. And yeah, about five years ago, having managed some pretty complex HR tech implementations from the customer side, I thought to myself, I really like this. I really like the problem solving. I like the idea of leveraging technology to make yourself more effective with hiring. And so that’s why I decided to join SmartRecruiters.
Matt Alder 40:03
Fantastic. So everyone is talking about AI, seemingly all the time. So much going on. What are you hearing from the market? What’s the TA leadership perspective on this? Is there a gap between expectations and reality? You know, what’s coming out of your customers?
Dave Novak 40:22
Yes, yes. So having spent some time in HR tech implementation, I’m very well versed in the gap between expectations and reality. And there certainly is a lot of chatter out there in the social media sphere and, you know, at talent acquisition conferences and such, and obviously for good reason: this has an opportunity to be absolutely transformative, not only in hiring, but in all aspects of work and of daily life. So I think that the buzz is real, and I think there is a huge opportunity to transform the quality of our lives and the effectiveness of how we do our jobs. But the reality, I have no doubt, is going to be a little bit more tempered and probably a little bit more slow moving. So I’m hearing from customers and from people in my network in the talent acquisition space a wide range of different sorts of outcomes that they’re expecting to get. On one hand, you have the pie in the sky: hey, we’re going to connect this, we’re going to press a few buttons, we’re going to totally upend our entire recruiting department, we might not need recruiters anymore, we might not need human beings anymore, the AI might actually clean our desks off for us at the end of the day as well. And on the other end of the spectrum, you have people who have gone through technology implementations before, and maybe they’ve been burned by transformative technology implementations before, and they’re thinking, okay, this has the potential to be really big and really impactful, but how are we going to manage this in a way that provides some impact and results, but doesn’t set the bar and expectations too high and then underdeliver? So there are super pragmatic people who are practical and saying, we’ve been saying this in HR tech for 30 years. And then there are people that are going, this is it, this is the big one, everybody get ready, strap in, start looking for a new job, because there’s no more recruiting; it’s all going to be robots hiring from this point moving forward.
Matt Alder 42:19
And so, cutting through the hype and the talk and the spin and the arguments that are going on and all that sort of stuff, what do you feel this really enables from a business perspective, and how is it going to evolve the role of recruiting, of talent acquisition?
Dave Novak 42:37
I’m super excited about it, I have to be honest. I am, by nature, a little bit cautious and fairly pragmatic, and again, working closely on technology implementations, you kind of become that way. But I am really excited about the potential here. Having worked as a recruiting practitioner for so many years, the holy grail of the talent acquisition or talent role in general has been this: we need to be a strategic advisor, we need to be somebody who offers real business solutions. But we’ve been hamstrung for 25 years, because by nature we’ve been kind of forced into this role of managing a process, and oftentimes the process is very administrative in nature. And so the promise of AI is about all these things that we have to do in talent acquisition, and this is not new news to anybody. You know, you clock in for the day, you’ve got to stare at 200 resumes to make sure you catch the one diamond in the rough, right? Then you’ve got to figure out which job advertisements are actually providing some traction, and then you’ve got to figure out where your job is posted. And then you’ve got to sort through resumes, and you’ve got to figure out how you’re going to schedule a phone screen, and who you’re going to share with the hiring team. And then, once you do that, how are you going to upload your notes and share your feedback with the hiring team? Then how are you going to schedule interviews? This is the dreadful, you know, seven-step, 14-person interview scheduling. Then once they get through that, you’re going to figure out that we’ve got to put them through an assessment, then we’ve got to make an offer, we’ve got to go through compensation, all these things that you have to do. It’s a lot of blocking and tackling, right? What I’ve always worked on with recruiters on my team is becoming a true talent advisor. We know the job market, we know the talent, we know the way our team, our organization, is structured, and where we have strengths and where we have weaknesses. We’re trying to find people that are going to add value to our organization, and that is, by nature, a consultative task; that’s what you’re trying to do as a really effective talent acquisition person. So if AI fulfills even a little bit of the promise that it has right now, it’s going to bring us a lot closer towards that goal, that holy grail of strategic talent advisor, and further away from managing a process and more into helping advise our business on how to match talent with business goals.
Matt Alder 45:07
Absolutely, and I think it’s quite a common vision that I hear from recruiters, from TA leaders, from employers. And, you know, it’s fairly clear that we can see that AI could really help us get there. How far is there to go? I mean, how far away are we from realizing that kind of vision? Are you seeing anyone who’s there yet? And for the people who aren’t, you know, what’s the roadmap to get there?
Dave Novak 45:35
Yeah, yeah. So there are certainly organizations that do this really, really well, and there are pockets within organizations, within hiring organizations, that do this really, really well. And I find that it often has to do with leadership, talent, and execution versus, you know, having the latest talent stack necessarily; you need to have both in harmony. So how far away are we from this right now? I think it depends on the organization and on the team. Actually, even at a more micro level, like on each team within each organization, you can find people who are doing the strategic talent advisement really, really well. You can find other hiring organizations where they have no high threshold for recruitment or talent acquisition; they just want to hire people to manage a process, and you can do that with kids coming right out of college, or anybody, without the right tools or enablement to do it effectively. So it runs the gamut. Now, AI fits into that because it’s going to give more people more space to get to a place where they’re adding the right kind of value for candidates and for hiring teams and for organizations. Some organizations are already kind of ready for that, right? They have strategic people doing the right things, having the right conversations, but even when those people exist on your team, they don’t have the space to do it. And this is what AI is promising to do, right? To provide the space to do the things that we should really be doing, to add the maximum amount of value. So it runs the gamut across the customer base that we see. We have people who are totally ready, and when you’re enabling AI you’re expecting it to add to what they already have, and then you have others that are looking for it to replace what they have, and that is where you might run into a little bit of trouble. So that’s kind of an observation that I have up to this point.
Matt Alder 47:33
I think one of the things that I’m also seeing, with so much AI and so many AI-enabled and AI-first products on the market at the moment, is that there’s a temptation to reduce recruiting to its constituent steps and then AI each one of those in turn, which to me just seems to potentially make things even more complex, rather than adding value and making things better. Where does AI truly add value to the hiring process, and what’s just adding more complexity to an already complicated situation?
Dave Novak 48:08
common situation? Yeah, good question. I mean, I think so the AI products that are out there, you know, they’re all looking to solve different problems within your hiring process, right? So it could be scheduling interviews or sorting through the 1000s of applications and resumes you get on a daily basis, or it could be something more assessment related, you know, you need to start small and make sure that your recruitment process is in a place that it can handle it. You’ve identified where the opportunity or a bottleneck, or, you know, a non value added thing is happening in your recruitment process, and people are spending a lot of time on it, and you have to be really clear and aligned on what that thing is. And everybody says, Yeah, we need help here, right? And then you select the right product that’s going to help you overcome that bottleneck in your process. And I, I would definitely recommend going through that type of analysis and a thorough implementation before you try to AI every piece of your recruitment process, right because there could be some things that you’re really efficient at and you’re really positive and feel good about in your process right now. And you know, just adding AI to every aspect of it isn’t necessarily going to make you more effective right away. So you have to be thoughtful, analytical, understand where you’re you know, you have to know what’s going on with your recruitment process, and be able to pinpoint that and make sure that you’re buying products that solve real issues for you or enhance real areas of need within your hiring work.
Matt Alder 49:38
And digging into that another level, this is complex stuff, and to get the benefits from a full AI transformation you have to be in a certain place as an organization. What do you think determines the companies that are really ready, that can really fly with this and get those transformative results? What is it that you look for in an organization to be able to do that?
Dave Novak 50:05
So if I was going to interview a TA leader and I wanted to try to assess if they were ready for AI, I would start peppering them with some questions about the recruitment process. I would ask them: how many candidates do you need to interview until you get an offer? How long do people stick in each stage of your process? What’s your offer conversion rate? What are the hottest job markets right now? In which areas is it most difficult for you to hire? If they have trouble answering those questions with authority and with accuracy, I would say they’re probably not quite ready for AI. Your recruitment process needs to be a perfect mirror of what’s happening out in the field and in the market, and with your recruiters and your hiring organization, right? So all that stuff we talked about before, the blocking and tackling, you need to have insight into that. You need to know what’s going on, because you’re not going to make anything better if you don’t have a baseline and you don’t know what’s going on. A lot of times we have implementations at SmartRecruiters where, you know, people want to implement a CRM, or they want to implement a new cutting-edge ATS, but they don’t actually know what’s going on with their organization right now. Their data is just not telling them what they need to know. So instead of a perfect mirror, it’s a little bit more like a fun house mirror: the data is not there, it’s not a reflection of what’s happening. If the data is not there and you slap an AI on it, and the AI is drawing off something coming out of a fun house mirror, you’re going to get fun house results. You’re not ready if you have a fun house mirror coming from your ATS and you don’t really understand what’s going on with your hiring organization. So you’ve got to get that right. Your system design and all of the supplemental technology that you’re using with that AI needs to be right; it needs to be feeding your AI and your leadership the right insights and the right data. And once you have that, and you feel like, yes, this system design is a good reflection of our actual business process, you’re going to be close to being in a good place to implement AI. Because if you don’t have that, you’re putting a really expensive band-aid on, and potentially making the wound bigger, if it’s drawing off of bad data.
Matt Alder 52:23
I think we also need to recognize that there are lots of TA teams and TA leaders who are under pressure over this. It might be pressure from the top of the organization: we need to implement AI, we need to be more efficient, without any kind of clear instruction about what that means and what that actually entails. And there’s obviously a huge amount of conversation in the market, whether it’s on LinkedIn or at conferences, where there’s a real pressure that people should be doing AI and doing it right now. What are the telltale signs that you see where someone’s just buying something for the sake of it, and they haven’t really thought through exactly what it’s going to add or take away or transform in their process?
Dave Novak 53:12
Yeah. I mean, this is another takeaway I’ve learned through working in HR tech and implementations: be specific about the problems that you’re expecting it to solve, right? If you have a leader that is looking for, you know, a transformational TA AI person, and they say, we want AI, we want to be top of the competition, we don’t want to lose to any other companies when it comes to finding the best talent, we need to implement AI as soon as possible, then you have to be very clear about what technology you’re implementing and what problems you expect it to solve, or where you expect it to enhance or optimize your recruitment or hiring process. Like, you’re not, overnight, going to take a terrible employment brand, a broken recruitment process, untalented or untrained recruiters who don’t know what they’re doing, and bad data, slap AI on it, and all of a sudden be cutting edge, the place where everybody wants to work, where it’s easy to find candidates and you’ve got amazing hiring outcomes. It just doesn’t happen. That goes for any technology, because they all promise something like that, right? And AI is super promising, but it’s still something that needs to be aligned to people and process, and you need to be very clear on the definition of what problems you’re trying to solve with it and what it’s trying to enhance. It can’t just be “better at hiring,” because, you know, business leaders are going to expect you, all of a sudden, to be just the most amazing recruitment and talent acquisition team ever because you put AI in there, and it just doesn’t happen that way. You’ve got to be really clear. Set the right expectations on timing, impact, where you expect to see improvements, and what your stated goals are for the improvements that AI is going to bring, and you’ll be able to see some real results, but it’s probably going to be more incremental than people who aren’t in TA and haven’t seen this before would expect.
Matt Alder 55:11
And on that theme, I think we both know from our experience that talent acquisition doesn’t transform overnight. I think we can safely say that AI is probably the fastest-moving technical innovation that we’ve ever seen, but even if talent acquisition moves faster than it’s ever moved before, it’s still not going to keep up with that, really. So it probably means that you’re talking to lots of employers who are at lots of different stages of this journey. As a vendor, how do you meet them where they are and help them with that transformation, wherever they might be on that journey?
Dave Novak 55:48
Yeah, so we have a consulting tool that we use at SmartRecruiters. And, I mean, a lot of different companies, you know, consulting firms, will have something like this, but it’s really important that you identify what you’re trying to do, like, really clearly. You’re not going to boil the ocean with this implementation; you’re not going to bring this technology on and all of a sudden every problem, every ill that you’ve ever had, is resolved. You need to be able to do some kind of assessment as to where you need to improve the most. So it’s bringing in the talent leaders. Some of them might be HRIS professionals, some might be actual talent acquisition practitioners, there could be HR people, there could be business leaders. These folks need to become aligned on where their biggest concerns are with how they sit today in their hiring organization and where they need to go based on business goals. Like, okay, this is the direction we’re going, here’s our gap right now: what tool or tools are going to help recruitment get there so we can achieve our business goals? And it can’t just be “we need to be better at hiring.” You’ve got to figure out what you need to do. Our recruiters spend too much time doing X, Y, Z. Our employment brand isn’t strong enough. We’re not getting enough talent, we don’t get enough inbound. We’re terrible at assessing talent. Our turnover is awful. It takes us too long to get people all the way from point A to point Z. There are so many different diagnoses that you can have, and you need to go through that diagnosis before you can bring something on. So it’s going to take time, it’s going to be a little bit slow, but I think at the end of the day, you need to have alignment on what those areas are, and then select the tools and the timing based on the alignment of where you need to go.
Matt Alder 57:46
So it’s a massive change management exercise for everyone here. What would your advice be to employers around that? What do they need to be thinking of when it comes down to the practicalities of change management, to get this kind of adoption in an AI transformation?
Dave Novak 58:04
Yeah, I’ve got to tell you, change management is a funny thing. I’ve been through so many courses and so many consulting sessions and spoken with so many experts in change management over the years, and after all of those hours I started to believe that people just don’t listen. The more emails you send to warn people that something is changing, something is coming, you have to get ready, look at this material, read this, listen to that, watch this video, and then you roll out and you still go through a little bit of pain, right? So what I’ve found works is involving as many people as you can, so they have ownership of it, because they have representation from people who actually do the thing and understand how it works, what problem it’s trying to solve and what it doesn’t solve, and why it’s impactful to each one of them throughout the organization. You can’t just tell them it’s going to be impactful. You can’t tell them to press buttons and watch something and expect it to work. You need to actually involve stakeholders, people who might not even be directly responsible for the technology. They need to know why it matters to them, and it will not matter if you just tell them it matters. They need to see it. Involve your business, involve stakeholders in any decision you make that’s going to impact them, because they’re the ones who can go to their teams in team meetings and say, hey, I want you to try this, let’s set some goals by the end of the week for how we’re going to interface with this product. The organization-wide change management initiatives, I’ve seen marginal success with those. You have to be a little bit guerrilla with it and get people involved early and often.
Matt Alder 59:51
I just want to talk a bit more about data, because you mentioned that real sense that people have to understand what’s going on and have the right data. AI is quite revolutionary in terms of how people can look at and interpret data. Do you think it’s going to change the way that talent acquisition thinks about and uses data?
Dave Novak 1:00:14
Yeah, I think it’s going to shine a huge spotlight on it, an even bigger one than is already there, because the more we rely on AI to guide us to an outcome, which in this case is hiring a successful candidate for a job, the more data it’s going to need access to in order to be effective. I think we’re just scratching the surface right now; I don’t think we’re there yet. Just a quick anecdote: I remember when I first got access to ChatGPT. I’m a baseball fan, I grew up in Minnesota, I’m a Twins fan. I asked it a question about the 1991 World Series, and it told me that the Braves beat the Twins in the 1991 World Series. Honestly, I considered never using it again after that, because I just thought, how unreliable is this? It probably got its data from an Atlanta Braves fan site, I don’t know, but it was dead wrong. And I know there are still some hallucinations and things like that, but obviously the advice, particularly when it comes to making hiring decisions and how to guide recruiters or candidates through a process, is going to be super dependent on the accuracy of the data, and you’ve got to get that right before you start relying on any kind of advanced technology to represent your hiring process, your recruiters, your candidates. Otherwise, if people lose confidence in it, you’re going to have a long road ahead, and it’s going to be another failed HR technology investment. Honestly, before implementing AI I would almost run a full-on data audit to make sure you feel really good about what’s happening out there with your recruitment teams, and that you have a good, accurate record of what’s happening, so that once you connect AI to whichever aspect of your process, you can feel good about what it’s drawing on, what it’s recommending, what it’s telling you to do, what it’s telling candidates to do, and you can get the outcome you want.
Matt Alder 1:02:19
Final question for you: what does the future look like? Do you think this is going to be a really disruptive change? Is it going to be a gradual change? What do the next few years look like?
Dave Novak 1:02:32
Oh, you’re asking me to be Dave-stradamus here a little bit, huh? It’s hard to predict the future, and being the pragmatic person that I am, I usually evaluate the past to try to predict it. Where we are today from an HR and recruitment technology perspective is actually quite far ahead of where we were 10 or 15 years ago, but I don’t think we can point to one single moment where we said, wow, that was the monumental shift we were all waiting for. Where we are today is much different than 20 years ago, but it happened almost without noticing, slowly and gradually over the years. With this technology, everybody’s bracing for something totally different, on a totally different plane than any other technology we’ve had. I don’t know what the future’s going to hold, but based on what I’ve seen, I think it’s going to be similar to how it’s been, only accelerated. So five years from now we’ll look back to 2025 and go, oh wow, we are light years ahead of 2025, and I think there are going to be some much bigger chunks of change and transformation happening because of how quickly the technology is evolving. Historically, recruitment, HR, and talent have been a little bit behind many other services and industries when it comes to technology adoption and implementation, and I expect that to continue at some level, but I expect our skill sets and the way our teams are shaped to be noticeably different five years from now because of the technology. How we interact with talent, what our role is in the business, and the types of things we do on a day-to-day basis, I expect to be able to look back in five years and say, remember when our job used to entail all that blocking and tackling? Those were crazy times. So I expect it will be incremental, it will feel at times a little bit plodding, but I think it’s faster than before. That’s just my best guess: a bit of history in how we’ve done things, combined with how fast this technology is evolving, will get us there a little bit faster.
Matt Alder 1:05:08
Dave, thank you very much for talking to me. Been a pleasure. Thanks, Matt.
Matt Alder 1:05:20
Hi, Nicole. It is fantastic to be talking to you. Can you introduce yourself and tell us about your role at Smart recruiters?
Nicole Hammond 1:05:29
Absolutely, great to be here, Matt. So my name is Nicole Hammond. I am our VP of Center of Excellence, and I have been at Smart recruiters for over 10 years in a number of different roles. Being VP of the Center of Excellence means a lot of things, but I would sum it up by saying I am here to ensure we keep elevating and thriving as a go-to-market motion, and to truly help ensure that our customers and our prospects understand the impact we can serve them with.
Matt Alder 1:06:00
Fantastic. And how has building Winston changed the way that Smart recruiters works as an organization?
Nicole Hammond 1:06:07
It’s a great question. When we first heard about Winston, there were a lot of things happening behind the scenes within our organization. As you know, our head of product, Sharon, was a former CEO, and Rebecca, now our CEO, was a former head of product, and that synergy truly helped us change not only how we worked organizationally as an R&D organization, but also how we collaborated with the Center of Excellence, with our go-to-market, et cetera. From the infancy of Winston, we have had a very strong relationship with product and have been able to truly move at a fast and effective pace.
Matt Alder 1:06:50
And what does running a Center of Excellence mean in the context of an AI transformation?
Nicole Hammond 1:06:58
It means a lot. As you’re familiar with, the world of AI is very broad and very deep, and everyone’s level of maturity with AI is different, both within our organization and externally. Leading the Center of Excellence, I think foundationally the first place we needed to start was with awareness and understanding of AI, baseline information for our employees, so that we were all aligned on what we were saying. But then, as we built Winston, this AI layer across Smart recruiters, it was about how it was differentiating in the market, how it was impactful for all users, not just candidates, and truly taking that education and awareness and helping all of the roles within go-to-market to better understand it, and also to confidently communicate and influence with that education and awareness with our customers and prospects.
Matt Alder 1:07:54
And I want to dig into the enablement around that in a second, but before that I’m interested in exploring the co-creation part of this. How important was the voice of the user, of the customer, in the development process?
Nicole Hammond 1:08:09
Huge, and appreciated. I think some companies miss the mark when they just assume, and don’t truly have a feedback mechanism with their customers or that continuous cycle of engagement and involvement as they continue down this journey of building, especially when it comes to AI. At our organization, we had a design group made up of a number of marquee customers across different industries that truly provided us value as we not only built Winston but continued to expand upon him. Our product team led pieces of this, so not only did they have the data points from the system, but also the feedback from these prospects and customers. And I think the most impactful part is that they listened, right? We continue to build out our roadmap and we make adjustments accordingly, based on the feedback we’re receiving from our customers. I would say I’m very proud of that, as a lot of organizations say they do that, but the reality is a different story.
Matt Alder 1:09:10
And part of that is obviously trust, and trust is just such a big part of this whole AI transformation, whether that’s candidates or customers or what you do internally. Now, I know you have a trust committee. Tell us a little bit more about that and why it’s so important.
Nicole Hammond 1:09:29
Yeah, as you mentioned, trust and governance are very important as we go through this world that’s evolving with more and more AI. One of the beautiful things our organization did was set up a trust committee made up of many stakeholders, representing security, product, and go-to-market, and also including our customers. It’s a monthly meeting, and a lot is discussed, whether it’s trends in the market, new legislation, new certification trends. It’s also an opportunity for collaboration and for the questions we may be getting constantly from our customers, to assure them of that reliability and that we’re being proactive and leading the market on governance, legislation, et cetera.
Matt Alder 1:10:17
Things are moving so quickly in this space, and there’s so much confusion. Really, every company out there needs to ensure that their teams are being educated, enabled, and kept up to speed with AI. How has that worked for you internally? How have you managed to get the broader team to a point where they can really effectively talk about AI to your customers?
Nicole Hammond 1:10:43
I think it’s two parts, right? One, you have to practice what you preach. So not only the use of Winston, but the other AI tools we’re adopting to provide efficiency in our daily jobs are important. Getting that adoption of AI tools means you can then say, hey, I’m on this ride with you, just in a different way. But also, I think what’s important is that everyone comes along this journey differently, right? Some are very familiar with AI, others are very averse to it, and we have to be ready for that, not only in our enablement but also in our tool. So what we’ve done is truly build that foundation, plus role-specific components that are relevant to each person in go-to-market. And then we’ve continued to practice: we’re using an AI tool where you can practice your value messaging and your responses in a safe zone, and it brings it to life, because we are truly practicing what we preach.
Matt Alder 1:11:51
And I suppose to dig into that, because I think it’s such a critical thing: you’ll have customers at lots of different phases of understanding and adoption. How else do you really make sure you’re meeting them where they are and helping them with that transformation?
Nicole Hammond 1:12:11
Yes. From the customer feedback in the design group, we received a lot of information about internal TA organizations that have the user who has been in the market for 20 years and just likes the classic way of doing things: they want candidates to upload a resume, and they want to go through those 100 applicants. And then you have the others who are all about AI and efficiency and automation and want to use Winston to chat and help them with their daily priorities. What I truly appreciate about Smart recruiters, and especially our design team and our product team, is that we’ve created an AI hub that allows for that. So whether you’re at the beginning of your AI journey or you’re very comfortable, you can set the parameters within Smart recruiters to support that, not only by user but by department, by country, et cetera. So we hear them, and we’re responding to make sure we keep it flexible.
Matt Alder 1:13:11
I think maybe one of the biggest challenges around this is that we’re dealing with technology that’s innovating very fast, use cases that are evolving very fast, and a talent acquisition and employer community that may not be moving quite as quickly and also has very specific needs as businesses in terms of recruiting and talent acquisition. How do you strike the balance between innovation, building that path for people and showing them what’s possible, and what they actually need from you right now, today?
Nicole Hammond 1:13:47
There are a lot of different approaches and strategic directions we go in with this, truly depending on what we’re hearing from our customers. But one thing I would say is that 70% of the businesses out there are using AI, but only 28% of them have a plan. That is a bit of a disconnect when you think about ensuring you have trusted, reliable, adopted AI in your market. And we both know that the TA space is the one that’s kind of leading the charge when it comes to AI in businesses. Having said that, one approach that has resonated a lot with our prospects is asking: if you don’t make this transition, what will happen? Your competitors will lead, and you will lose a significant amount of money. So it’s showing them not only the ROI that can come with AI and automation, but also what happens if you don’t take this action.
Matt Alder 1:14:42
And having been through this process yourself, there are lots of businesses out there where, every week, we see a CEO in the press demanding that their company becomes AI first, and from what I can see, very often there’s very little infrastructure, roadmap, or learning about how companies can actually do that. So what advice would you give to the companies themselves when it comes to employers and AI adoption? What lessons have you learned? How can they take their teams on a similar journey?
Nicole Hammond 1:15:18
Yeah, my background is change management. It is also my passion, and it is very relevant to this point in time with AI. My advice would be to truly think about this holistically: not just the technology of AI, but the people and the supporting processes, right? This is true transformation, and you need to account for all of those aspects for it to be successful and adopted. We all know that with any technology, adoption is key and sometimes overlooked, and that’s why, when new technology goes to market, things fail. So my advice would be to truly think of this as transformation, and take into consideration people, process, and the AI technology.
Matt Alder 1:16:05
And I suppose on that people side, we talk about companies being at different stages with technology and development, but people very much are as well. Is there anything you’ve learned from watching the individuals you work with go on that AI journey and become comfortable with the technology, particularly if they were skeptical or suspicious or terrified in the past?
Nicole Hammond 1:16:29
Yes. It’s kind of like dipping your toes in the water and then being okay with putting your entire foot in. We have seen a number of customers come back to us and just say thank you, and that’s the impact. That’s why we stay, right? We get to see not only the financial return on investment, but also the impact it makes for people. I think about that hiring manager who has 100 things to do, and hiring is a priority, but it’s not the number one thing they want to do with their time. We’ve seen a number of them come back and say, thank you, I have this on my app, I was on a train and I was able to approve a req, I was able to review candidates, and, oh, by the way, Winston reminded me to take a look at my hiring plan. So for me, while it may not be a data point we can measure, the feedback and the thank yous we get, and customers leaning in more, wanting to know where we’re going with our AI product strategy for Winston, is truly how I personally have felt us being impactful on the AI journey we’re sharing with our customers.
Matt Alder 1:17:38
And I guess everyone has that AI moment, don’t they, when they’re using it and it just does something surprising, or something that makes their life easier. I suppose it’s about accelerating people towards that personal point where they really see the benefit.
Nicole Hammond 1:17:53
Yes, and to your point, I think we can’t forget those personal moments through the hiring journey, right? This AI and Winston, our friendly sidekick, allow us to be efficient and to prioritize accordingly, but he also allows us to have those human touch points and those moments that matter. Imagine a world where you go through this entire hiring process and then get to have that real-life moment with your finalist to say, we would love for you to come on board. I think that’s special. Not losing those human moments is exactly what Winston and this process truly allow for.
Matt Alder 1:18:29
And as a final question for you, what does the future look like? How is the Center of Excellence going to evolve? Where’s all this going?
Nicole Hammond 1:18:39
Oh, wow, if only I had a crystal ball and knew Sharon’s roadmap plans. No, honestly, AI is continuing to evolve, and I think as we look forward there’s going to be a lot related to agent-to-agent infrastructure and even deeper dives into governance. The COE is going to support that. We’re going to get broader with our capabilities around technical infrastructure, and we’re going to be able to support go-to-market with those conversations around governance and trust and what that infrastructure of all these agents truly looks like for them, ensuring that it’s a seamless experience.
Matt Alder 1:19:23
Nicole, thank you very much for talking to me.
Nicole Hammond 1:19:26
Oh, thank you for having me, Matt. It was wonderful.
Matt Alder 1:19:29
My thanks to all four of my guests for being so open and honest about their challenges and their experiences building Winston. I think we can see that AI is very, very complicated, but I hope you’ve learned some really valuable lessons from the conversations we’ve had. And I think the most important thing is this: if you focus on strategy and the value it’s going to bring to your organization, that’s how you’re going to supercharge your talent acquisition AI transformation.






