
Ep 547: Generative AI – A Deep Dive



It has been an extraordinary year for technology. We can all broadly agree that the effect of generative AI will be transformational for talent acquisition, but how can we separate the hype from the reality to know where to focus in the short term and how to plan for a much-changed future?

With so much noise and minimal signal in the discussion about AI, I thought the best way to get a grip on everything was to talk to a genuine AI thought leader.

My guest this week is Jon Krohn, host of the Super Data Science Podcast, bestselling author and Chief Data Scientist at TA start-up Nebula. Jon was on the show back in 2019 and gave us a primer on AI that was so good people are still listening to it now. This time, he is doing a deep dive into generative AI, giving us some history, explaining the current reality and outlining the radical potential for the future.

This conversation was eye-opening for me, and it is a compulsory listen for everyone working in Talent Acquisition.

In the interview, we discuss:

• Developments in AI over the last three years

• Hype vs. impact

• The development of the transformer architecture that facilitates ChatGPT

• Why the jump from GPT-3.5 to GPT-4 was incredibly significant

• Is artificial general intelligence possible in our lifetimes?

• Other notable examples of Large Language Models

• The immediate implications for Talent Acquisition

• How LLMs will supercharge vendor innovation

• The importance of TA Leaders embracing AI

• Are we heading towards a utopia or a dystopia?

Listen to this podcast in Apple Podcasts.

Transcript:

Matt: Support for this podcast is provided by Hackajob, a reverse marketplace that actively vets engineers. Hackajob flips the traditional model on its head, meaning companies apply to engineers versus candidates applying to jobs, with companies getting an 85% response rate to the candidates they reach out to, as well as exposure to tech talent that directly meets their organization’s diversity objectives. After all, the ability to attract, hire, and retain tech talent from all backgrounds is critical to every organization’s success. Companies such as S&P Global, CarMax, and Sensor Tower are all using Hackajob, so why not join them? Go to hackajob.com/future to get your free 30-day trial today. That’s hackajob.com/future. Hackajob is spelled H-A-C-K-A-J-O-B.

[Recruiting Future Podcast theme]

Matt: Hi there. This is Matt Alder. Welcome to episode 547 of the Recruiting Future Podcast. It’s been an extraordinary year for technology. We can all broadly agree that the effect of generative AI will be transformational for talent acquisition. But how can we separate the hype from the reality to know where to focus in the short term, and how to plan for a much-changed future? With so much noise and minimal signal in the discussion about AI, I thought the best way to get a grip on everything was to talk to a genuine AI thought leader.

My guest this week is Jon Krohn, host of the Super Data Science Podcast, bestselling author and Chief Data Scientist at TA start-up Nebula. Jon was on the show back in 2019 and gave us a primer on AI that was so good people are still listening to it now. This time, he is doing a deep dive into generative AI, giving us some history, explaining the current reality, and outlining the radical potential for the future. This conversation was eye-opening for me, and it’s a compulsory listen for everyone who works in talent acquisition.

Matt: Hi, Jon, and welcome back to the podcast.

Jon: Hey there, Matt. It is a treat to be back all these years later. I can’t believe it’s been so long.

Matt: [laughs] Well, it’s an absolute pleasure to have you back on the show. Could we start with you introducing yourself and telling everyone what you do?

Jon: Sure. So, I’m Jon Krohn. I’m the Chief Data Scientist at a machine learning company called Nebula. We are aiming to transform the future of HR tech, of recruiting, and ultimately of business in general. But you don’t want to boil the ocean too quickly, do you? And, yeah, so I host Super Data Science. It’s the most listened-to podcast in the data science industry. We do about four million downloads a year. I wrote a bestselling book a few years ago called Deep Learning Illustrated. I think that was the impetus for me being on the show previously. So, I think it was 2019 that I was on the Recruiting Future Podcast. We talked about deep learning and how that could transform this industry, recruiting. At that time, I was working at a company called Untapt, and that company was acquired in early 2020.

The co-founders of this new company, Nebula, are myself; the founder of Untapt, Ed Donner, who is the CTO here at Nebula; and the CEO of the holding company that bought us back in 2020, who is our CEO here at Nebula. So, we’ve got this great team, and we’re able to use the intellectual property from the Untapt days: eight years of continuous intellectual property development in the AI algorithms that are behind this new Nebula platform.

Matt: When you were on the show, it was September 2019, which feels like not long ago and a hundred years ago, [laughs] all at the same time. You gave us a brilliant walkthrough of machine learning, data science, deep learning, and AI, which was so good, people still listen to it now. So, with everything that’s gone on in the AI space, I was keen to have you back on the show to almost give us an update on what’s happened. So, talk us through what’s happened in AI over the last two or three years to get us to where we are now, which I’d describe as maximum hype.

Jon: Yeah. So, for a long time, I think it was fair to say that the AI hype was bigger than the impact that it was making. In the last year, really, that has changed. I think for the first time, the hype is matching the reality with artificial intelligence. The key thing here is that it builds upon deep learning. So, in 2019, we were talking about deep learning, which is an approach to machine learning. I guess it’s probably helpful to be clear on all these terms. So, artificial intelligence is a very broad term. It’s really a buzzword. The goalposts on what people refer to as AI can change over time. But generally speaking, AI is this idea of machines being able to replicate human intellectual capacities. So, through the 1990s, the leading approach to artificial intelligence was something called expert systems.

So, an example of an expert system was IBM’s Deep Blue, which defeated Garry Kasparov, at the time the world’s best chess player. All of that expert system’s “intelligence” was hard-coded by programmers. Computer programmers at IBM worked with chess experts to figure out, “Okay, in this scenario, what’s the best thing that the machine could be doing?” So, you write into your code explicitly how the machine should behave in particular circumstances. Machine learning is an alternative approach to AI, where we do not explicitly hard-code the behavior. Instead, we rely on training data. A great example of this, to keep going with the game-playing analogies, is a more recent one involving a game called Go, which is actually the world’s most popular board game. Chess is the most popular board game in the West, but if you include the entire globe, Go is the most popular game.

Go involves placing stones: one player has white stones, another player has black stones, and you place these stones on a grid, trying to create boundaries around your opponent’s stones. Any time you do that, you capture their stones, so gradually you take over more and more of the board. While that might sound like a simple premise, the computational complexity of that game is enormous: there are more possible board positions than there are atoms in the universe. So, there’s a tremendous amount of complexity in what can be done on the board. It doesn’t fit well into “here are the rules to winning” the way chess does. Instead, the best players play by intuition. There are some strategies to learn, but there’s this intuitive intelligence involved.

So, people thought that it would be decades before we had machines that could develop this intuition and beat human experts at Go. But a few years ago, researchers at a group called Google DeepMind in London created an algorithm called AlphaGo. Instead of the 1990s expert-systems approach of IBM’s Deep Blue, where everything is hard-coded, with AlphaGo you’re relying on training data. Initially, they used training data from human games of Go. You just record all the positions, and the algorithm learns over time. It develops an “intuition” for how to play without you needing to explicitly program it. This AlphaGo algorithm ended up being the world’s best Go player. There’s a great documentary film about this. People can get it on YouTube for free; it’s just called AlphaGo, and it has something like 98% on Rotten Tomatoes.

It’s a fascinating focus on the humans behind this. But all of this is to say that modern AI approaches rely on this machine learning approach, which uses training data instead of explicitly hard-coding things. I’ve been speaking for a long time, so Matt, if you want to interject or anything. [laughs]

Matt: I’m just going to interject to say that I was actually lucky enough to go to DeepMind’s office a few years ago.

Jon: Oh, yeah.

Matt: They actually had the Go board there, in a glass case. So, I’ve actually seen it. So, yes– [crosstalk]

Jon: That’s very cool.

Matt: That’s my intelligent interjection, but no, carry on. Talk us through how we got to where we are now.

Jon: So, that’s AI and machine learning; now we have an understanding of those terms. AI is this broad generalization of machines being able to in some way replicate intellectual capabilities. Machine learning is a way of doing it where we’re using training data to get that result. Machine learning is definitely the most prominent way for data scientists to be doing AI today. It turns out that if you have high-quality training data and you pick the right machine learning model as the starting point, it’s way more effective than any other AI approach today at being able to recapitulate the best things that humans do, and even overtake us on a lot of tasks, as we’re seeing more and more.

Now, the particular machine learning approach that has been driving the field over the last decade is deep learning. We talked about this a lot; listeners can go back to the episode of Recruiting Future that I was on in 2019 to hear a lot about deep learning. But at a high level, deep learning was devised originally in the 1950s as a way of simulating the way that biological brain cells work. It’s a very, very loose approximation of just some of the functionality of the brain. I think we probably talked about this in 2019, and if listeners want more, chapter one of my book Deep Learning Illustrated explains these relationships in detail. But for the purposes of this podcast today, let’s just say that deep learning is a way of loosely mimicking the way biological brain cells work. Deep learning is a machine learning approach, so we’re still training from data. But this deep learning approach allows the algorithm to automatically develop many layers of sophistication.

So, the first layer of processing in this deep learning network will do relatively simple processing. Then a second layer can take that first layer’s information and do abstractions and derivatives of that information. Then you can have a third layer that’s increasingly complex, increasingly abstract, until you have potentially dozens or even hundreds of layers that allow increasing complexity and increasing abstraction to be represented by the machine. This is what has allowed, say, the voice detection on your phone to become very high quality in the last few years, or the face detection on your phone to become very reliable. Those kinds of things are facilitated by deep learning. So, these nuanced capabilities in machine learning have been brought about by deep learning.
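To make the idea of stacked layers concrete, here is a minimal sketch in Python. The weights and inputs are invented purely for illustration; a real deep network has millions to billions of weights and learns them from training data rather than having them written in by hand.

```python
# A minimal sketch of "layers of increasing abstraction". The weights
# and inputs here are invented for illustration; a real deep network
# learns its weights from training data.

def layer(inputs, weights, bias):
    """One tiny 'neuron': weighted sum of inputs, then a nonlinearity."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU nonlinearity, so stacked layers add power

# Layer 1 does relatively simple processing of the raw input...
h1 = layer([0.5, 1.0], [0.8, -0.2], 0.1)

# ...and layer 2 builds an abstraction on top of layer 1's output.
# Deeper networks repeat this dozens or hundreds of times.
h2 = layer([h1], [1.5], -0.1)

print(h1, h2)
```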

Now, what’s happened in the last couple of years is the development of a specific deep learning architecture. The way I just described all these layers, well, it’s up to the programmer, the data scientist, or the machine learning engineer to configure what the deep learning network architecture is like: how does information flow through it, generally? A few years ago, researchers came up with a deep learning architecture called a transformer. It turns out that this transformer is extremely good at attending to the most relevant parts of the information that it’s trained on. As an example, prior to transformers, the leading deep learning architectures, when you fed them a sentence or a paragraph of information, couldn’t retain very much context from it. They could only keep track of a few keywords or a few key phrases. But this transformer architecture is able to attend over long stretches, so it can pull out the relevant parts of an entire document.

Probably lots of your listeners have had this experience with an interface like ChatGPT: you can provide a very large document. You could say, “Here’s the entire transcript of a Recruiting Future Podcast episode; I have a few questions about this transcript.” Then ChatGPT might respond, “Okay, I’ve got the document. What are your questions?” Then you could ask questions about the document, and it can summarize parts of it and jump quickly between them. That ability to attend to the most important parts of the document and, say, answer your questions is facilitated by this transformer architecture. So, ChatGPT runs on top of this. Today, you have the choice between GPT-3.5 or GPT-4 in the back end. But whichever one of those you’re using, the GPT stands for generative pretrained transformer.

Matt: Yeah. Okay, interesting. That’s what that means. Yeah.

Jon: Yeah. So, generative means that it generates text. As you’ve seen when you’re using ChatGPT, it prints out what are called sub-word tokens one by one. For common words, the token will often be the whole word. You can do a screen recording of ChatGPT as it’s printing out, play it back frame by frame, and you’ll see these sub-words pop out. A very long word like consecrate might be broken up into one sub-word, consec, and then another sub-word, rate. These sub-word tokens are the smallest atom of what these generative large language models print out. The generative part of GPT just means that it’s generating these sub-word tokens one by one, printing them out as quickly as it can. The people at OpenAI who built this GPT algorithm, the engineers there, are constantly coming up with ways to print out those sub-words faster and faster to get the results to you sooner. So that’s the generative part.
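The sub-word idea Jon describes can be sketched as a toy greedy tokenizer over a tiny invented vocabulary. This is a simplification for illustration only; the tokenizers behind models like GPT use byte-pair encoding with vocabularies of tens of thousands of pieces learned from data.

```python
# A toy greedy sub-word tokenizer. The vocabulary below is invented
# purely for illustration; real tokenizers (e.g. byte-pair encoding)
# learn tens of thousands of pieces from data.
VOCAB = sorted(["consec", "rate", "the", "at", "from", "token", "s"],
               key=len, reverse=True)  # try longest pieces first

def tokenize(text: str) -> list[str]:
    """Greedily match the longest known piece at each position."""
    tokens, i = [], 0
    while i < len(text):
        piece = next((p for p in VOCAB if text.startswith(p, i)), text[i])
        tokens.append(piece)
        i += len(piece)
    return tokens

print(tokenize("consecrate"))  # ['consec', 'rate']
print(tokenize("the"))         # ['the']
```

Short, common words come out as a single token, while long words get split, exactly the behavior visible in that frame-by-frame screen recording.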

The pretrained part means that the algorithm is pretrained on basically all of the high-quality information that can be found on the Internet, which allows it to perform well on essentially any task that you can imagine. It’s pretrained in the sense that you don’t need to train it for some specific task that you’d like it to perform; whatever task comes to mind for you, this generative pretrained transformer is going to be able to take a crack at it. Then the T, the transformer, is the thing that I’ve been talking about: the specific deep learning architecture. The GPT architectures have taken that transformer concept I was describing a few minutes ago and scaled it up, so there are dozens of these transformer blocks in a given architecture like GPT-3.5 or GPT-4. This allows the architecture as a whole to attend to lots of different parts of whatever natural language you’ve provided to it as an input, so that it can produce an uncannily humanlike intellectual response.

Yeah, for me, it’s been a mind-boggling journey. The jump from GPT-3.5 to GPT-4 in particular was huge for me, a life-changing event, because, say, a year ago, I gave a TEDx Talk in Philadelphia. The whole point of that talk was framing how quickly technology is changing in our lifetimes, particularly thanks to augmenting human intelligence with artificial intelligence. We’re at this point in history where technological progress is happening so, so rapidly that it’s very difficult to predict what the world is going to be like a decade from now, or even a year from now.

If somebody had asked me in spring of 2022, when I gave that talk in Philadelphia, whether in our lifetimes we would have a machine that can do the things that GPT-4 does, I might have said that’s a stretch. It’s blown my mind. When it came out in March, it was a real game changer. I think it goes to show that it really might be possible in our lifetimes that we will achieve something called artificial general intelligence: a machine that is more intelligent than humans, capable of replicating human intelligence on any conceivable task.

Matt: I’ll come back and ask you about that again a little later in the conversation. But there are just a couple of areas I want to cover first. The first one is what the landscape looks like here, because I think there is a tendency in our industry to think that ChatGPT is the only game in town. But there are lots of other large language models being developed, aren’t there?

Jon: Oh, yeah, for sure. So, another one of the big players in this space is a company called Anthropic. They have a model called Claude, the French name Claude. You can use Claude for free today, just like you can use GPT-3.5 in the ChatGPT interface for free today. An interesting thing about Claude is that the latest version has what we call a context window of, I believe at the time of recording, 100,000 of these sub-word tokens. I was describing those sub-word tokens earlier: for short words, the token ends up being the whole word, so words like “the” and “at” and “from” will each be one token, while a longer word, like I said, “consecrate,” might be broken up into two sub-word tokens. This context window of 100,000 tokens that Claude has means that something like 80,000 words can be handled by the algorithm, and that’s a lot of pages: roughly 100 pages, as an order of magnitude, of context that you can provide to Claude.
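Jon's token-to-page conversion can be sketched as back-of-envelope arithmetic. The ratios below (roughly 0.8 words per token, roughly 800 words per page) are approximations inferred from the figures he quotes, not official specifications.

```python
# Rough arithmetic behind "100,000 tokens is about 100 pages".
context_tokens = 100_000
words_per_token = 0.8  # short words are one token; long words split into several
words_per_page = 800   # dense single-spaced text; a rough figure

words = context_tokens * words_per_token  # roughly 80,000 words
pages = words / words_per_page            # roughly 100 pages
print(f"~{words:,.0f} words, ~{pages:.0f} pages of context")
```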

So, Claude is potentially a very powerful option for people to work with, especially if you have longer documents that you want to be doing these AI tasks with. Another really big lab in this space is Cohere, which is based in Toronto and focuses more on commercial applications. Those three labs, OpenAI, Anthropic, and Cohere, are often touted as the leaders. But of course, companies like Google cannot be forgotten. They got some flak, and their share price took a bit of a beating earlier in 2023, because it was perceived that OpenAI with GPT-4 had taken a big leap beyond them, and that Microsoft, by taking that 49% investment in OpenAI, was going to be launching ahead of Google with search technology and that sort of thing.

But [laughs] I don’t know anybody that’s using Bing today. Everyone’s still using Google searches. It turns out that Google, behind the scenes, did have comparable kinds of models, maybe not quite at GPT-4 level, but getting close. They hadn’t released them because they were worried about ethical issues, and probably they were worried about eating their own lunch a little: if you can ask a chatbot and get the answer right away, what’s the point in searching over a bunch of different articles and being served ads on Google while you do that? So anyway, that’s a tour, I’d say, of the main players. There are lots of people getting involved in this, but those are the big commercial players.

[ad]

Matt: So, there’s obviously been huge amounts of discussion about how this can be applied to recruiting and talent acquisition. Lots of people doing their own experiments with ChatGPT and Bard and everything else. Lots of vendors trying to bake it into their systems. Now, you’re the perfect person to ask [laughs] this because you’re obviously a practitioner in the space, but you’re also building a recruitment product. So, what are the immediate implications for recruiting and talent acquisition when it comes to this new technology?

Jon: So, there are two approaches to doing this. Our tool, Nebula, has a lot of generative AI functionality already in production and available today. People can create a free account and try it out. At the time of recording, we are just going into a public beta; we’re not marketing it yet, so this is potentially one of the first places you’re ever hearing about it. But we have functionality built in. Like, when you think of a role that you need to fill, you probably instantly have an idea of the job title and a handful of the key skills that you’re looking for. You can throw those into our platform, and we will create a full-length job description for you that’s a page long and has all the sections you’d expect. Our algorithm will attempt to fill in the gaps; it’ll take educated guesses.

So, if I say that I need to hire a Director of Data Science and I’m looking for this person to have generative AI skills, then the job description that’s automatically generated by our platform might assume things like the kinds of programming languages we’d expect the person to have and the kinds of methodological approaches we’d expect the person to have. It does a tremendously good job of it. You could go to a tool like GPT-4 and get comparable results for a task like that. So, that’s an example of how we’ve built generative AI functionality into our platform. If you as a listener have a recruitment platform, an HR tech platform, or whatever, and you want to build some generative AI functionality into it, one route that you can go down with your development team is using a proprietary model via the OpenAI APIs.

So, ChatGPT is this friendly user interface that you can just type into and have very natural interactions with. But a software developer or a data scientist would be able to call OpenAI’s API. API stands for Application Programming Interface; basically, that’s the code equivalent of a user interface. With a user interface, you might point and click, or you might type into a box. With an application programming interface, you’re writing computer code that gets sent to somebody, in this case to OpenAI. Then OpenAI returns the response to you, also in code, and within that code is the natural language response, in this case from GPT-4.

So, these commercial APIs from providers like OpenAI allow you to experiment with what could be possible in your platform with a generative AI function. Take that job description builder functionality, for example. When we had the idea to create that product feature, our product team, not our data science team, just went to ChatGPT, switched it over to GPT-4, and experimented: “Okay, if I provide a job title and some skills, how good a job does GPT-4 do at creating the response I’m looking for?” In that way, they figured out how to engineer the right prompt and provide it with the right information to get the result they were hoping for. So, anybody in your company now has the power to use a tool like ChatGPT to experiment with some potential generative AI capability for your platform. Then from there, as I described, your data science team or your software development team could use the OpenAI APIs and, very quickly, in a matter of hours, stand up that job description builder functionality in your platform.
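To illustrate the point that an API is "the code equivalent of a user interface," here is a sketch of the kind of request body a developer might assemble for a chat-completion-style API. It uses only the Python standard library and makes no network call; the model name is illustrative, and a real integration would add an API key, an endpoint URL, and an HTTP request.

```python
# Sketch of the request body a developer might send to a chat-completion
# style API to generate a job description. Standard library only, no
# network call; the model name is illustrative.
import json

def build_request(job_title: str, skills: list[str]) -> str:
    prompt = (
        f"Write a full-length job description for a {job_title}. "
        f"Key skills: {', '.join(skills)}."
    )
    payload = {
        "model": "gpt-4",  # illustrative; swap in whatever model you call
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_request("Director of Data Science", ["generative AI", "Python"])
print(body)
```

The response would likewise arrive as structured code (JSON), with the generated job description embedded as natural language inside it.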

The other key route to go down, which we have gone down with Nebula, is to have your own proprietary large language model: your own GPT-like transformer architecture running. For this, at the time of recording, my recommendation for your listeners would be to check out a model called Llama 2 by Meta. Meta, formerly known as Facebook, has made itself a big player in the open-source large language model space. People estimate that they invested $25 million in the creation of this Llama 2 architecture, which is trained on 2 trillion of these sub-word tokens, a huge amount. The Llama 2 family comes in different sizes: a 7 billion parameter model, a 13 billion parameter model, and a 70 billion parameter model. To give you a sense of this scale, GPT-3 was 175 billion parameters.

So, even the biggest Llama model is less than half the size of that at 70 billion. But the number of parameters is not the only thing that makes a large language model great at natural conversation. It turns out that in addition to parameter count, having a large amount of high-quality training data is hugely important, as is training time. So, you have these three factors: model size, amount of high-quality training data, and training time. Those are your three levers to work with. With Llama 2, this open-source large language model that you can get for free from Meta, they’ve gone a little smaller on the parameter count, but they have this huge training dataset, and they’ve pretrained it for a very long time. So, you can take one of these off-the-shelf Llama 2 models, the 70 billion parameter model, and that’s going to give you state-of-the-art performance for an open-source large language model.

It approaches GPT-4 capabilities on a wide range of tasks. Or you could use one of the smaller ones, the 7 billion or the 13 billion parameter model, and you might want to do that because they still get great results, but you can fit them on a single GPU, a single graphics processing unit. This means that the cost of running them on your production infrastructure is going to be relatively small. Indeed, if you can use the 7 billion parameter Llama 2 model for your platform’s generative AI task, then your costs are going to be a fraction of what it would cost to be calling the GPT-4 API from OpenAI, or maybe even their GPT-3.5 API. So, you potentially have these cost savings. You then have control on your own infrastructure over how fast you want it to run and how much money you’re willing to spend on having it run performantly. You have way more control, and you can do fine-tuning.
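The "fits on a single GPU" point can be sketched with rough memory arithmetic: at 16-bit precision, each parameter occupies two bytes, so the weights alone of a 7 billion parameter model need on the order of 14 GB. This is a back-of-envelope figure; real deployments also need memory for activations, and quantization can shrink the footprint further.

```python
# Back-of-envelope GPU memory needed just to hold model weights at fp16.
def fp16_gigabytes(n_params: int) -> float:
    """Approximate weight memory in GB at 2 bytes per parameter."""
    return n_params * 2 / 1e9

print(fp16_gigabytes(7_000_000_000))    # 14.0 GB: within one modern GPU
print(fp16_gigabytes(70_000_000_000))   # 140.0 GB: needs multiple GPUs
```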

So, while these open-source large language models like Llama 2 come pretrained and are very effective on a broad range of tasks, data scientists can fine-tune the model. There are approaches called parameter-efficient fine-tuning approaches. The most famous of these is LoRA, L-O-R-A, which stands for Low-Rank Adaptation. What this allows you to do is work with a very small amount of training data. Going back to that job description builder example from earlier, you can have just a few hundred or maybe a few thousand examples of your prompts, like a job title and a few skills being converted into a full job description. With just those few hundred or few thousand examples, you can use parameter-efficient fine-tuning on an open-source large language model like Llama 2 and make it an expert at that particular task.

It’s a matter of carefully crafting your training data, but with a relatively small amount of it and very cheaply, we’re talking hours of training time and tens of dollars, maybe hundreds of dollars, of compute, you can take one of these off-the-shelf, open-source large language models and fine-tune it to be expert at one or more generative AI tasks that you’d like to have in your platform.
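A quick sketch of why LoRA counts as "parameter-efficient": rather than updating a full d by d weight matrix, it trains two thin matrices of rank r, so the trainable parameter count drops from d * d to 2 * d * r. The dimensions below are illustrative, not taken from any particular model.

```python
# Why LoRA is "parameter-efficient": instead of updating a full d x d
# weight matrix, train two thin matrices of rank r.

def full_finetune_params(d: int) -> int:
    """Trainable parameters if you update the whole d x d matrix."""
    return d * d

def lora_params(d: int, r: int) -> int:
    """Trainable parameters for LoRA: a d x r matrix plus an r x d matrix."""
    return 2 * d * r

d, r = 4096, 8  # illustrative hidden size and LoRA rank
full = full_finetune_params(d)  # 16,777,216
lora = lora_params(d, r)        # 65,536
print(f"LoRA trains {100 * lora / full:.2f}% of the full-matrix parameters")
```

That tiny fraction of trainable weights is what keeps the training time and compute bill so small.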

Matt: I mean, I think that’s fascinating to learn because what that says to me is the speed of innovation in our sector, in all sectors is just going to go a million miles an hour, isn’t it? If it’s that easy to harness and train this technology, then we’ve got some pretty interesting times ahead for recruiting technology, I guess.

Jon: 100%. So, I’m only willing to disclose on air the functionality, that job description builder, that we already have built into the product. But we have a tremendous amount of generative AI functionality coming to production in our Nebula platform in the coming months. All of it will be transformative, letting a recruiter or a hiring manager do their job in a fraction of the time it took previously. Take that job description builder example alone. I’ve written job descriptions myself, and the good ones that I’ve made take me a day or more of full-time work, spending eight hours to create a two- or three-page job description and make sure it includes everything I want it to. It’s a huge piece of work, and it’s often an iterative one, where I say to other people on my team, “Am I missing anything here?” and they often have ideas of things that need to go in. So, creating a really great job description is a hugely labor-intensive process.

Now, through this JD builder tool that you can get in Nebula, it’s seconds instead of a day or so, at least to have that first draft. Yes, to make it excellent, you’re probably going to want to prune some things or add others in, but we’re talking about a 90% or greater reduction in the time to create a job description. In our Nebula platform, we’re chipping away at more and more of these tasks, like creating your short list of people to reach out to. That’s something we’ve actually had technology for, for years, technology that uses natural language as opposed to keywords. We know that we find about 10 times more of the most relevant people for a given job because we’re understanding the natural language that you’re looking for, as opposed to doing a keyword search, which all of the incumbent platforms use.

Those keyword searches, because they’re so rigid, mean you end up missing out on a huge number of relevant people. A giant microchip company was a client of ours at our previous company, Untapt, and they did an internal study; that’s where I’m getting this 10x multiple. They were finding 10 times more of the best candidates using our natural language understanding algorithm as opposed to a keyword-based search. So, there are generative AI examples as well as non-generative AI examples, like this matching example I just gave, that mean the recruiting job, the HR professional’s job, is going to become much more efficient than ever before. If you do not embrace these tools, you will be left behind by your competitors.

Matt: Yeah, absolutely. It’s interesting as well because there was obviously a huge amount of hype earlier in the year about rapid progress and all this stuff. That’s dissipated slightly as people get to grips with the tools, but the reality is very much still there. I suppose that leads me on to my impossible final question, [laughs] which is impossible based on what you were saying earlier. Where is all this taking us? I mean, obviously, the fact that we sat and tried to predict the future in September 2019 and missed the fact there was going to be a global pandemic indicates that predicting the future is not an exact science. But where might this go? You mentioned the potential that AI could reach in our lifetime. Where could it take us in the next few years?

Jon: Yeah, so it’s going to be a ride. It’s going to be a ride for sure. Things are moving so rapidly. To give you a sense, the people at OpenAI who were working on, say, GPT-3 or GPT-4 did not themselves anticipate, when those models came out, the tremendous breadth of capabilities and the accuracy within those capabilities that GPT-3 and GPT-4 would have. And it’s across domains: not just in text-to-text generative AI applications like GPT-3 and GPT-4, but in all kinds of AI, whether it’s recognition, image generation, or video generation, the strides being made are extraordinary. Yes, that makes predicting the future very difficult. But one thing that is a safe bet is that across all of these domains, in the coming years, we are going to continue to see staggering improvements. So, as professionals, we need to be looking out for ways to adopt these tools and integrate them into our workflows as quickly as we can.

Looking beyond a few years, it’s conceivable that in our lifetimes these machine systems will become so much more capable than humans at making processes efficient and allocating resources effectively that we may just trust machines to do that. Particularly if we can also crack things, perhaps with the assistance of AI, like nuclear fusion energy, we could in our lifetimes be in, for lack of a better word, a utopia: one where everyone has access to education, where the whole world has access to high-quality nutrition, and where there’s no violent crime or war. All of these things are attainable in our lifetimes. But there’s also a camp that is really concerned about AGI, artificial general intelligence. The same kind of technology that could lead to a utopia could also lead to a dystopia, or potentially an extinction event for life on this planet, including humans.

[laughter]

Matt: Yes, absolutely. Well, my mind is officially blown. Thank you so much for coming on and sharing your incredible knowledge and insights. Thank you very much for joining me.

Jon: Yeah, my pleasure, Matt. Good luck. If the Machine Overlords haven’t taken over by then, maybe in a few years we can catch up again on how the Recruiting Future is coming along.

Matt: Absolutely.

My thanks to Jon.

You can subscribe to this podcast in Apple Podcasts, on Spotify, or via your podcasting app of choice. Please also follow the show on Instagram, where you can find us by searching for Recruiting Future. You can search all the past episodes at recruitingfuture.com. On that site, you can also subscribe to our monthly newsletter, Recruiting Future Feast, and get the inside track on everything that’s coming up on the show. Thanks very much for listening. I’ll be back next time, and I hope you’ll join me.

[Recruiting Future Podcast theme]
