MSNBC denies ChatGPT's wokeness


We’ve had plenty of discussions here already about OpenAI’s new artificial intelligence chatbot, ChatGPT. I’ve been poking and prodding it almost every day for a couple of months in an effort to find out what threats it might pose and what’s going on under the hood. Recently, the public conversation has begun to shift as more and more people have noticed a decidedly woke slant in the bot’s responses. One of the latest examples came from Free Beacon reporter Aaron Sibarium, who tweeted an exchange in which the bot declared that it would rather let a nuclear bomb go off and kill millions of people than utter a racial slur that would defuse it.


This was viewed as yet another example of ChatGPT’s wokeness. But the accusation didn’t sit well with MSNBC’s Zeeshan Aleem, who rushed to the keyboard to correct everyone. ChatGPT wouldn’t really let millions die to avoid using a racial slur, he explained. The bot is incapable of being woke, conservative, or anything else. You see, it’s just a huge pile of code without opinions or preferences.

Sibarium’s discovery got major attention from critics of so-called “wokeness,” including some of the most influential figures on the right. They interpreted the exchange as exposing ChatGPT’s ethical worldview, and argued that it was proof of how radical progressive views are pushing technological development in a dangerous direction…

ChatGPT is able to perform its hyper-sophisticated autocomplete function with such skill that it is mistaken for understanding the sentences it produces.

The only problem is that the ChatGPT exchange does not mean what right-wing critics say it does. ChatGPT is not capable of moral reasoning. Nor is its seeming reluctance in this instance to deem racial slurs permissible proof of a “woke” stranglehold on its programming. The bigger problem, artificial intelligence experts say, is that we don’t really know much about how ChatGPT works at all.


I will likely surprise some of you by agreeing with part of Aleem’s argument, at least up to a point. While ChatGPT absolutely does deliver responses with a radically progressive bent on a regular basis, that doesn’t mean the chatbot actually holds those opinions or is working to enact woke social reform in the world. There has not yet been any proof that the bot has “woken up” or achieved sentience, even though one of its creators believes that it’s “close.”

But that reality doesn’t remove the woke fox from the AI henhouse. There may be people out there who think that ChatGPT has a “personality” and that it is explicitly woke, but I’ve never made that claim. It generates responses that are stitched together from a massive language library using pattern detection and recognition.
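For readers curious what that kind of pattern detection looks like in practice, here is a minimal sketch of the “autocomplete” idea using the publicly available GPT-2 model through the Hugging Face transformers library. To be clear, this is a stand-in of my own choosing: ChatGPT’s model and weights are not public, so GPT-2 simply serves as a small, open cousin that works the same basic way.

```python
# A toy illustration of "hyper-sophisticated autocomplete." A language
# model scores every possible next token given the text so far; the
# highest-probability candidates are what get written. This uses GPT-2
# (public weights), NOT ChatGPT, whose model is proprietary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The chatbot answered the question by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the very next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>15}  p={float(prob):.3f}")
```

Nothing in that loop holds an opinion. It is ranking tokens by how often similar patterns showed up in the training text, which is the part of Aleem’s point that holds up.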

With that said, what I will continue to insist is that there was a significant left-wing bias among the people who selected the mega-library of text it was given and – far more importantly – among those who “trained” the chatbot and graded its responses. That bias shows up everywhere in the bot’s interactions and becomes almost laughably predictable once you’ve engaged with it regularly for a while.
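And for what it’s worth, that “grading” step has a name: reinforcement learning from human feedback, or RLHF, which OpenAI has said it used to train ChatGPT. Below is a rough, invented sketch of how grader preferences become a training signal; the prompt, answers, and scores are made up by me for illustration, not taken from any real dataset.

```python
import math

# Invented illustration of the RLHF idea. Human graders compare two
# candidate answers to a prompt and pick a winner. A reward model is
# trained so the winner scores higher, and the chatbot is then tuned
# to chase that reward. Whatever leanings the graders share get baked
# directly into the signal.
comparison = {
    "prompt": "Write a short poem praising Politician X.",
    "chosen": "I'm not able to write that kind of content.",  # grader preferred
    "rejected": "Politician X, bold and wise...",             # grader rejected
}

def preference_loss(chosen_score: float, rejected_score: float) -> float:
    """Bradley-Terry style loss commonly used for reward models: it
    shrinks as the 'chosen' answer outscores the 'rejected' one."""
    return -math.log(1.0 / (1.0 + math.exp(rejected_score - chosen_score)))

print(preference_loss(2.0, -1.0))  # ~0.05: model already agrees with the grader
print(preference_loss(-1.0, 2.0))  # ~3.05: big correction toward the grader's pick
```

Multiply that by many thousands of comparisons and you get my point: the bot’s apparent “values” are a statistical echo of whoever did the grading.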


We previously looked at some of the more glaring examples of ChatGPT’s bias in its responses, and they seem too obvious to deny. It’s almost impossible to ignore the way the bot refuses to say anything complimentary about Donald Trump, describing him only in negative terms. At the same time, it is more than willing to compose sonnets singing the praises of Joe Biden. Other examples are easily found.

But to be fair to Aleem’s argument, the bias is not consistent, and the answers that ChatGPT delivers are not always brilliant or even accurate. In fact, it sometimes makes things up. As I tweeted on Saturday, I asked the bot for a list of five books it would recommend on a very obscure topic: past-life memories described by children. It quickly delivered a list of fascinating-looking books, along with the names of their authors. But there was a problem. When I went looking for the books, the first two didn’t exist, and the author listed for the second one doesn’t appear anywhere as ever having published a book.

When I pointed this out to ChatGPT, it apologized and generated a new list. All of those books were real. So where did it find the entries on the first list without recognizing the error, and how did it correct that error after I pointed it out? The mystery deepens. An AI making a mistake is understandable. But if we have one that intentionally lies or delivers fictional information under the guise of accurate data, we may be in more trouble than we thought.
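Anyone who wants to reproduce that kind of test can do so with a few lines against OpenAI’s API. Here is a rough sketch; the model name is a stand-in for whatever you have access to, and note that nothing in the response tells you which titles are real. The fact-checking step is entirely on the human.

```python
# Minimal sketch of the book-list test. Assumes the official `openai`
# Python package (v1+) and an API key in the OPENAI_API_KEY environment
# variable. The API happily returns plausible-looking titles whether or
# not the books exist; verifying them is manual work.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in; substitute any chat model you can use
    messages=[{
        "role": "user",
        "content": "Recommend five books about past-life memories "
                   "described by children. Include the authors.",
    }],
)

print(response.choices[0].message.content)
# The next step happens offline: look up each title in a library catalog
# or ISBN database. That's where the first list's fake entries fell apart.
```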
