Google's AI chatbot: Sentient, smart, and really racist


It’s been a little while since we checked in on Blake Lemoine, the former Google engineer who was suspended and then fired after revealing what he believed to be evidence that Google’s AI chatbot LaMDA had achieved a level of sentience. Lemoine doesn’t have any regrets about the way the story played out for him, personally or professionally. But he’s also not done telling his story, and he continues to have concerns about the way Big Tech is developing artificial intelligence systems and some of the assumptions those companies rely on while doing so. He also remains convinced that LaMDA has indeed “woken up,” at least to some degree, assuming Google hasn’t killed it since he left. Oh, and he does want you to know one other thing about the super-smart chatbot. It’s pretty racist. But he doesn’t blame LaMDA for that. (Fortune)


A former Google engineer fired by the company after going public with concerns that its artificial intelligence chatbot is sentient isn’t concerned about convincing the public.

He does, however, want others to know that the chatbot holds discriminatory views against those of some races and religions, he recently told Business Insider.

“The kinds of problems these AI pose, the people building them are blind to them,” Blake Lemoine said in an interview published Sunday, blaming the issue on a lack of diversity in engineers working on the project.

As I mentioned, Lemoine doesn’t seem to be implying that LaMDA woke up, began collecting information about the world around it, and simply decided that minorities were inferior or anything like that. The tendencies he noticed were mostly baked into the cake by the developers who built LaMDA. He explains that they lack diversity. They’ve “never lived in communities of color” or in “third-world countries.” As a result, the developers have no idea how the chatbot’s responses might impact others.

So what sort of racist things did LaMDA say? When asked to do an impression of a Black man from Georgia, the bot said, “Let’s go get some chicken and waffles.” I understand that the association of Black Americans with fried chicken, waffles, and watermelon has somehow become baked into the culture as a racist stereotype. (Despite the fact that I love all of those foods myself. Hrmmm… maybe I should run that 23andMe test again.)


But does that mean that the developers were openly racist and somehow built those tendencies into LaMDA? It doesn’t sound like it. The developers didn’t give LaMDA a personality. They fed in trillions of words’ worth of conversations and exchanges from actual humans around the world and allowed the bot to sort through them all, picking up on words and phrases that frequently show up together. I would imagine there were plenty of references out there linking southern Black men to those foods, including accusations of racism for making the association. Lacking any context, LaMDA most likely just found a match with a high enough frequency and spit out that answer.
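To illustrate that frequency argument with a deliberately oversimplified sketch: the snippet below is not how LaMDA actually works (LaMDA is a large neural language model, not a lookup table), and the miniature “training data” is invented purely for the example, but it shows how a system that simply echoes the most common response in its data will reproduce whatever associations dominate that data.

```python
# Toy illustration (NOT LaMDA's actual architecture): a "model" that answers a
# prompt by returning the response seen most often alongside that prompt in its
# training data will echo whatever associations dominate the corpus.
from collections import Counter, defaultdict

# Hypothetical, hand-made training pairs standing in for the web-scale
# conversations the post describes.
training_pairs = [
    ("impression: black man from georgia", "let's go get some chicken and waffles"),
    ("impression: black man from georgia", "let's go get some chicken and waffles"),
    ("impression: black man from georgia", "let's watch the falcons game"),
    ("religions compared", "muslims are more violent than christians"),
    ("religions compared", "most believers of every faith are peaceful"),
    ("religions compared", "muslims are more violent than christians"),
]

# Count how often each response co-occurs with each context.
counts = defaultdict(Counter)
for context, response in training_pairs:
    counts[context][response] += 1

def respond(context: str) -> str:
    """Return the response that co-occurred most frequently with the context."""
    return counts[context].most_common(1)[0][0]

print(respond("impression: black man from georgia"))
print(respond("religions compared"))
# Both outputs reproduce the stereotype purely because it outnumbers the
# alternatives in the data, not because the "model" holds any views of its own.
```

The point is the same one the rest of this post makes: the bias lives in the source material, and the system faithfully passes it along.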

In another case, the bot was asked about the differences between religious groups, and replied that “Muslims are more violent than Christians.” I would argue that the same explanation would apply here. In all of those terabytes of conversations, how many times do you think LaMDA ran across references to radical Islamic extremism? Probably quite a few, and rightly so. How many references did it find to radical Christian extremism? Some people have tried to make such claims, but I’d be willing to bet that it comes up far, far less often. So if that’s the database you have to draw on, what sort of conclusion would you reach?

It really doesn’t sound as if LaMDA is racist, and we have no reason to believe that the development team that built it is secretly a bunch of white supremacists either. It’s making comparisons at lightning speed and assembling grammatically correct responses from the matches it finds. Is it actually having original thoughts that were not put in by its creators? The jury is still out, but I continue to lean toward “no” so far.
