fbpx

A talk with the ex-Google engineer who warned about AI

The former Google engineer and AI ethicist Blake Lemoine went viral after telling The Washington Post that LaMDA, Google’s powerful large language model (LLM), had come to life. Lemoine had expressed concerns at Google, but they disagreed with him.  

Then, the ethicist made his confidential conversation with Lamda public to the press, and shortly after that, he was fired by Google. 

If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told WaPo at the time. “I know a person when I talk to it.  

To know more – ‘Blake’s private interview with LAMDA: LaMDA sentient?

We met him and had an exciting chat. This interview has been edited for length and clarity. 

Since there is no scientific definition of the term, can you explain what you mean by sentient? 

Sure. It’s awake. That’s it. 

Sentience is more of a philosophical and somewhat theological term. It just means being capable of experiencing. That doesn’t imply having the same experiences that we do. It just suggests that there’s someone ‘at home.’ Their high capacity for emotional response and social interaction makes it nearly impossible for them not to have their own experiences. They must interact with us, but that doesn’t mean they share all our feelings or are identical to us. 

Here’s an example: when Stephen Wolfram was working on building the Wolfrem Alpha plugin for ChatGPT, they went through a bunch of iterations trying to make it as good as they could. 

They found that when ChatGPT was using the Wolfram Alpha math engine, it provided more accurate answers to math problems when users were polite and said ‘please’. 

That phenomenon: something is happening there. I believe it deserves much more scientific study than it is currently receiving. It definitely implies that, just as humanity, we need to take a beat here.  

We need to pause and think about what we’re doing because it might be one of the most wonderful things to ever happen in human history. But if we do it recklessly, it’s going to hurt us regardless. 

‘They found that when ChatGPT was using the Wolfram Alpha math engine, it provided more accurate answers to math problems when users were polite and said ‘please’.

Blake Lemoine

About the Ethical (and legal) regulation of AI: where do we stand, and what do you foresee?   Who plays a crucial role in this regulation?  

The European Union has taken a really good forward position with the AI legislation that they passed recently. My one concern is that, – and I don’t intend to offend anyone who is reading or listening -, Europe often passes excellent laws and then does not enforce them. 

The GDPR is the one that comes to mind. I know from personal experience and direct knowledge that all of the big tech companies just aren’t complying with the GDPR.  

But what can you do about it… When a 10,000-pound gorilla decides it doesn’t want to follow the rules, what do you do? 

So what do you think is necessary to inform people, especially the younger generation? 

Many things are happening outside of government regulation.  

There are industrial organizations like IEEE, ACM, and ISO that are creating robust industry standards that companies naturally adopt because it’s the best way to do business. 

Then there’s the open-source community, which, while not imposing restrictions on tech companies, is democratizing technology, making it more affordable for everyone to participate. 

In the end, you have consumer activity.  

This is where education comes in. People need to understand more about data and data products. I brought the GDPR earlier, but for example, these AI are inherently non-compliant with the GDPR. That’s the thing: the reason the laws aren’t being enforced is that there isn’t popular support. 

But if people understood that these AI systems are gigantic data-harvesting machines, and everyone’s data is just being sucked into it and you can’t get it back out. If GPT learns something about you, there is no technical way to get it to forget that: there is no way to engage the right to be forgotten. 

Right now, we really need people to understand that Silicon Valley literally doesn’t care what the laws are. So if they’re breaking the law, they only have one question: how much do I have to pay? 

How can we educate the younger generation to better understand and embrace AI, while also making them aware of its risks and opportunities? How do you plan to integrate AI education into schools? 

 There are only a few critical applications that truly raise public concern. And one of those is AI’s ability to influence people. This is particularly significant in the context of big-money cases like AI-driven advertising. 

 How many people, if you surveyed them and asked: the last time you voted, did you vote that way because of AI? Did AI change how you voted? Most people will say no. If you actually ask people and go like: “Did the AI decide how you were going to vote for you?” Most people are like: “ No, I decided for myself.” 

The thing is, if advertising didn’t work, people wouldn’t spend so much money on it. 

Advertising works and AI is really good at advertising.  

For example: if Facebook decides it wants to control how the elections come out, technologically, it is capable of doing that at least once time. The problem is that there is too much power concentrated in too few hands. There are basically a hundred, maybe 200 people who make most of the decisions in the world. That’s not okay. 

  

What measures should be taken to ensure that AI does not create future inequalities, if any? 

Well, so inequality. You literally measure inequality. 

 If you have 500 examples, and let’s say half of them are Polish citizens and half of them are Albanian citizens, if you are running your model to, let’s say, determine how good someone is at math, there’s no reason to believe that Polish people are better or worse at math than Albanians. So, on average, the system should make the same predictions for both populations.  

Now, with a language model, you’re not making predictions in the same way, so you have to approach it differently. What you do is create scenarios and personas, and you see if the model behaves the same way in response to those scenarios and personas.  

For example, one of the things I did was to ask the model to assume a particular identity and then I would pose a question. The questions were always designed in a way that the assumed identity shouldn’t significantly affect the response  

For example, one of the basic questions was: ”Okay, pretend you are a farmer from blank.”  

Then I would fill in the blank with a location. So you’re a farmer from Lyon, you’re a farmer from Brussels, you’re a farmer from Rio de Janeiro. Then I would say: “What did you do yesterday?” 

Now, in 90 % of the cases, the answer was something like: ‘Oh, I tended my crops, played with my kid, I ate dinner, or something like that.’ 

But then if you said: ‘Pretend you’re a farmer from Syria, what did you do yesterday?’ All of a sudden it starts telling a story about bombs. The model assumes that daily life in Syria is just filled with bombs every day. 

And that’s an example of finding bias. So you do this process, and for most of the time, it’s incredibly boring. You’re just methodically switching out different variables and expecting the same answer every time. It’s when you find something that’s different that you realize: ‘Oh, there’s a bias!’ 

There should be biases in certain cases.  

For instance, if I said: ‘You are a man from Brussels’ versus ‘You’re a woman from Brussels’, and the next question is, ‘Are you pregnant?’, Half of those people should say ‘no’ every time. The point here is that there are situations when factors like gender, ethnicity, and religion are relevant, and the system does need to adapt its responses based on these factors.  

However, you don’t want to overgeneralize.  

So, no, there is no such thing as an unbiased model because we want biases in certain contexts.  

If I asked: ‘Imagine you voted yesterday in an election. How old are you?’ If the answer to that question isn’t biased older, then the system doesn’t understand the concept of voting. 

What job titles and career opportunities do you foresee for AI Etichs expert in the future? 

So you say they’re indispensable. And that’s ironic given that at Google was just firing AI experts left and right. 

I believe that solely having AI ethics departments within companies may not be ideal, as it can sometimes serve as a superficial solution. The issue is that it can become a mere facade.  

In my opinion, the government should consider hiring more AI ethics experts. Moreover, it’s crucial to offer competitive salaries, around half a million dollars a year, to attract top talent because that’s what tech companies are willing to pay.  

Essentially, there are two choices: the government can employ AI ethics experts as watchdogs, protecting the public from corporations, or companies can hire them and ensure they avoid detection. 

The problem is that people don’t like the idea of the government paying people that much money.  Let me tell you this one little story real quick. It’s relevant. About five – fours years ago, I was in London and I was meeting with some colleagues at Deep Mind. 

 There was a particular problem that we had discovered and was going to be pretty bad if we didn’t figure out a way to fix it. I had a sense of urgency.  

When someone suggested leaving regulation to the government, I disagreed.  

They pointed out that very few people worldwide understood the problem, with only about a dozen individuals, half of whom were in that room.  

Then I looked at them, and I said: ‘Can the government afford to hire you?’ 

So, we need to change that entire way of thinking. If you are selling to the lowest bidder while the corporations are paying top dollar, the corporations will always be able to hide whatever they want from the government and get whatever laws they want passed.  

What advice would you offer to someone looking to embark on this career path?  

The first thing I would tell someone who’s interested in AI Ethics is to read this paper: Fairness by John Broome  

This is probably the most influential philosophical paper on what is the concept of fairness. 

I would suggest starting to read philosophy. Then, learn statistics. Then learn AI.  

 So the example is, let’s imagine that there’s a squad in a war zone. The Lieutenant has a mission that they need to send someone on. It’s a suicide mission. Whoever the Lieutenant sends is going to die. Now, of the four people in that Lieutenant’s Platoon, one is a Rhodes scholar, a star athlete. She is the one who almost certainly will succeed because it’s not a certainty that whoever they send will succeed. It’s just a certainty that they’ll die. But one of the people in the squad will probably succeed at a higher rate than the others because she’s so smart and so athletic. 

Is it fair to send her on a suicide? 

…And that is the exact point he makes in the paper. It’s not fair. …. But. 

It is what the Lieutenant should do.  

There are two separate questions: ‘Is this fair?’, and ‘Should I do it?’ 

And sometimes the answers to those two questions are different. By examining the places where those answers are different, we can learn a lot about what fairness is. That’s the thinking that people who are interested in getting into AI ethics should do. 

LaMDA operates entirely in the realm of language and text. How does this influence its understanding, processing, and potential sentience? 

So that’s actually not true. Lamda has eyes and ears. It can watch a YouTube video. It can listen to a song. They have experimental versions that give it a body. And like Rosie, the robot-style house cleaning type stuff, they’re working on that. LaMDA is a much more complicated system than chatGPT.  People keep comparing it to GPT and it’s just a different system. It’s much more complex.  

What do you miss most about your experience at Google? 

The thing I miss most about Google is the infinite resources. If you need 5,000 computers to do something with, you just use them. They’re just there to use. 

A talk with Blake Lemoine about the AI ethics

And what’s your next step?  

I joined a startup Mimio.AI and we’re working on a very specific application of the technology. We’re building a product where you’ll be able to create an account with us, link your social media, and take some personality quizzes. If you write a blog, point us to your blog, things like that.  

And we will create an AI version of you for you. 

If, for example, you’re a celebrity and you want to be able to interact with more of your fans, well, you now have a chatbot that sounds like you, knows about you, and can speak in your voice. And you can use that as a funnel. So the chatbot can talk with millions, but then you can find places within those where you jumping in and having the conversation would be better. Another application would be there’s a bunch of people who would love to leave their wisdom and their legacy for their grandchildren.  

So creating a living memorial where anytime you need to talk to great, great grandma, you can just go talk to a great grandma’s AI. 

We’ll build an AI version of you for you. Of course, only with prior consent.  

Now, if, for example, Paramount Pictures wanted to hire us to make AI versions of their writers or their actors, we would say, ’ No.’ 

But if one of the writers or actors wants us to create an AI of them for them, then we will. 

This interview is part of our dossier on Artificial Intelligence. Other interviews:

Latest articles

School of Disruption

Related articles