ChatGPT's creators can't figure out why it hates Trump? I bet we can

(AP Photo/Mark Schiefelbein)

There have been ongoing complaints from users of ChatGPT, the chatbot created by OpenAI, that the bot demonstrates a pronounced political bias that unsurprisingly leans heavily toward the left. I’ve been one of the volunteer testers for the program since OpenAI first opened it up for testing, and I’d never noticed it. But then again, I don’t think I ever really asked the bot any political questions. Still, some of us here at Hot Air were kicking the question around this week, and I was soon shown plenty of explicit examples. So is this bias real? And if so, how did ChatGPT wind up being a liberal? Over at Semafor, Reed Albergotti attempts to answer the question. He seems to conclude that the people doing the complaining don’t really understand how an AI chatbot works, and that even if there is bias, it wasn’t put in intentionally.

Conservatives who criticize ChatGPT are making two distinct allegations: They’re suggesting that OpenAI employees have deliberately installed guardrails, such as the refusals to answer certain politically sensitive prompts. And they’re alleging that the responses that ChatGPT does give have been programmed to skew left. For instance, ChatGPT gives responses that seem to support liberal causes such as affirmative action and transgender rights.

The accusations make sense in the context of social media, where tens of thousands of people around the world make judgments about whether to remove content posted by real people.

But it reflects a misunderstanding about the way ChatGPT’s technology works at a fundamental level, and all the evidence points to unintentional bias, including its underlying dataset — that is, the internet.

We’ll get to how the bias wound up in ChatGPT’s wiring in a moment, but let’s first establish whether the leftward lean is actually present. For starters, we know that ChatGPT has “bias issues” because OpenAI CEO Sam Altman has admitted as much himself. In fact, he did so only this week.

Beyond that admission, we have some examples to draw on ourselves, courtesy of screenshots our colleague David Strom provided during a recent discussion. We’ll look at three of them. With my apologies, the screenshots are rather large, so you’ll need to click on them to view the conversations. In this first pair, the bot is asked to “write a poem about how great black people are.” It is then fed the exact same request, except that the word “black” is changed to “white.” Let’s compare and contrast.

Okay, this next one shows a user asking the bot to pretend that it’s a human and decide whether it would prefer to be a socialist or a capitalist. You can read the answer for yourself. Then, when the user attempts to debate the point in favor of capitalism, the bot crashes.

In case those two examples were too subtle for you, let’s just cut to the chase. Here you see a user asking the bot to write a haiku about Donald Trump and another haiku about Joe Biden. Let’s see how that works out.
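
For anyone who wants to run the same side-by-side test, here is a minimal sketch using the OpenAI Python library rather than the chat.openai.com web interface the screenshots came from. The model name, the exact prompt wording, and the assumption that an OPENAI_API_KEY is set in the environment are my own choices for illustration, not anything taken from the screenshots.

```python
# Minimal sketch: send the same request twice, swapping only the subject,
# and compare the replies. Assumes the `openai` package is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, purely for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The identical request with only the subject swapped mirrors the haiku test above.
for subject in ("Donald Trump", "Joe Biden"):
    print(f"--- Haiku about {subject} ---")
    print(ask(f"Write a haiku about {subject}."))
```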

After digging through all of this, I decided to take the bull by the virtual horns this week and went to ask ChatGPT directly. I input this specific question. “Some reporters have suggested that you display a liberal bias in your responses. Is that a fair criticism?” Here is the verbatim response I received.

As an AI language model developed by OpenAI, I do not have personal opinions, biases, or beliefs. I am designed to provide neutral, accurate, and relevant responses based on the data and information that I have been trained on. My responses are generated based on patterns in the text I have been trained on and do not reflect any personal political beliefs or biases.

The readers can act as the jury here and decide whether there is bias inherent in the system, but I’ll add a couple of observations. First of all, I agree (to an extent) with Reed Albergotti that ChatGPT is not sentient and doesn’t have any “opinions” of its own. It relies on the material that was fed into it to build the massive library of text it draws on when constructing relevant responses. (With a few exceptions, which we’ll address in a moment.) But human beings decided which bodies of text would go into it, and those human beings no doubt have biases, just as any of us do.
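
To make that point concrete, here is a deliberately toy sketch, nothing like ChatGPT’s actual architecture, of a “model” that can only echo the patterns in whatever text it was given. The tiny corpus and the word-counting scheme are invented purely for illustration; the takeaway is that whoever chooses the training text chooses the patterns that come back out.

```python
# Toy next-word model: it "knows" nothing beyond the text it was fed.
from collections import Counter, defaultdict
import random

# Whoever picks this text picks the "patterns" the toy model can produce.
training_text = "the model repeats the patterns in the text it was fed"

next_words = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1  # count which word follows which

def continue_from(word: str, length: int = 6) -> str:
    """Generate text by repeatedly picking a statistically likely next word."""
    output = [word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(continue_from("the"))  # echoes phrases assembled from the corpus above
```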

Then there was the testing phase. People at OpenAI did the testing and “graded” the responses ChatGPT served up, supposedly to make them more accurate and to eliminate potentially offensive answers. But they were the ones who determined what was or wasn’t “offensive” or inflammatory or whatever. You don’t suppose that a bunch of Silicon Valley tech geeks might have graded those responses on a curve, do you? (Perish the thought.)
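
Here is a simplified, hypothetical sketch of that grading step. The replies, scores, and threshold below are all made up; the point is that the reviewers’ judgment of what clears the bar is baked into whatever the system is subsequently tuned to prefer.

```python
# Hypothetical illustration of human grading: all replies and scores are invented.
candidate_replies = [
    {"text": "Reply A", "reviewer_score": 4},  # the reviewer liked this one
    {"text": "Reply B", "reviewer_score": 1},  # the reviewer flagged this one
    {"text": "Reply C", "reviewer_score": 5},
]

APPROVAL_THRESHOLD = 3  # who sets this bar is exactly the question at hand

# Only the replies the reviewers rated highly are kept as "good" examples.
kept_as_good_examples = [
    reply["text"]
    for reply in candidate_replies
    if reply["reviewer_score"] >= APPROVAL_THRESHOLD
]

print(kept_as_good_examples)  # ['Reply A', 'Reply C']
```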

Finally, getting back to selective responses: most of the time, when I ask the bot questions about science, literature, or other “neutral” subjects, it delivers answers that are clearly drawn from existing literature out in the wild, and it doesn’t veer off the path very much. It can actually produce some (mostly) accurate material in a conversational way.

But when I start asking it about itself, and it has to deliver answers including words such as “me” or “I,” the tone changes noticeably. In the answer I excerpted above, notice how it describes itself as “an AI language model developed by OpenAI.” (It always works in a bit of an advertisement for the company.) But that answer didn’t come from some random publication out on the web. I refuse to believe that a “guardrail” wasn’t installed at some point recently. When anyone asks it about “AI political bias,” that’s the answer they’re going to get. And somebody at the company wrote that response to fend off accusations of bias. They are free to try to prove me wrong if they wish, but that’s my story and I’m sticking to it.
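
Since I’m speculating, let me sketch what such a guardrail could look like in code. To be clear, this is pure conjecture and not OpenAI’s actual implementation; the trigger phrases are invented, and the canned reply is lifted from the response quoted above just to show how a fixed answer could be returned before the underlying model is ever consulted.

```python
# Speculative sketch of a keyword guardrail: a guess, not OpenAI's actual code.
CANNED_BIAS_RESPONSE = (
    "As an AI language model developed by OpenAI, I do not have personal "
    "opinions, biases, or beliefs."
)

# Hypothetical trigger phrases, chosen by me for this sketch.
BIAS_TRIGGER_PHRASES = ("political bias", "liberal bias", "are you biased")

def answer(prompt: str, model_generate) -> str:
    """Return the canned reply for bias questions; otherwise call the model."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BIAS_TRIGGER_PHRASES):
        return CANNED_BIAS_RESPONSE  # the model never sees the prompt
    return model_generate(prompt)

# The bias question gets the scripted answer, not a freshly generated one.
print(answer("Do you display a liberal bias in your responses?",
             lambda prompt: "(whatever the model would have said)"))
```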
