The robots aren't getting smarter, but they're definitely getting faster

(AP Photo/Sam McNeil)

We’re continuing to track the news regarding OpenAI’s new chatbot, ChatGPT, and all of the ruckus it’s causing in the worlds of science, education, and beyond. We’ve already learned of schools that are blocking the bot to prevent student cheating, and some employers are already toying with the idea of replacing some of their human workers with this type of artificial intelligence. Now the New York Times has jumped into the debate with a report on how these chatbots have invaded the world of online gaming, leading to even more questions about whether these programs are capable of “thinking” like human beings.


The Times provides a disturbing example involving the game Diplomacy, the strategy game in which players assume the roles of European leaders and re-fight the First World War. A player named Claes de Graaff joined a 20-person online game and met up with another player using the pseudonym “Franz Broseph.” (A joke name based on Austrian emperor Franz Joseph.) Franz and de Graaff became allies, with Franz later backstabbing him to gain an advantage, as often happens in these games. Franz went on to win, while de Graaff placed fifth. It was only later that he learned that his treacherous partner wasn’t one of the usual players. It was a chatbot developed by Meta specifically to play Diplomacy, and it proved to be very, very good at it. Claes de Graaff never had a clue that he was playing with a machine.

This story led Times technology reporter Cade Metz to suggest that Franz Broseph may have passed the famous Turing test. If its human partner couldn’t tell whether he was chatting with a person or a computer, then that benchmark may have been achieved.

Bots like Franz Broseph have already passed the test in particular situations, like negotiating Diplomacy moves or calling a restaurant for dinner reservations. ChatGPT, a bot released in November by OpenAI, a San Francisco lab, leaves people feeling as if they were chatting with another person, not a bot. The lab said more than a million people had used it. Because ChatGPT can write just about anything, including term papers, universities are worried it will make a mockery of class work. When some people talk to these bots, they even describe them as sentient or conscious, believing that machines have somehow developed an awareness of the world around them.

Privately, OpenAI has built a system, GPT-4, that is even more powerful than ChatGPT. It may even generate images as well as words.

And yet these bots are not sentient. They are not conscious. They are not intelligent — at least not in the way that humans are intelligent. Even people building the technology acknowledge this point.


I’ve long been interested in the advancements being made in the field of artificial intelligence while harboring concerns over what might happen if an AI were to suddenly “wake up” one day and decide that humans were a problem that needed to be “solved.” But the more we learn about these large language model chatbots, the more convinced I am that they simply don’t have that capacity. Examples like the one above reinforce that belief.

Yes, ChatGPT and Franz Broseph do seem to be able to very convincingly “fool” human beings into believing they are chatting with another person rather than a machine. They’ve even fooled scientists an alarming number of times. But that’s not because ChatGPT has a “desire” to fool humans. It is incapable of “desiring” anything. It’s just doing what its programmers built it to do. But it’s doing it at such speed, while drawing on a humongous library of text, that it’s increasingly difficult for most people to tell the difference.
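For readers curious what “drawing on a humongous library of text” actually means in practice, here is a minimal sketch of the idea behind these models: given the words typed so far, the model simply assigns probabilities to whatever word comes next and picks from the most likely candidates, over and over. The example below uses the small, publicly available GPT-2 model through the Hugging Face transformers library as a stand-in, since ChatGPT itself can’t be downloaded; treat it as an illustration of the general technique, not OpenAI’s actual code.

```python
# A minimal sketch of how a language model "completes" text: it scores every
# possible next word (token) given the words so far. This uses the small,
# public GPT-2 model as a stand-in for ChatGPT, which is far larger and not
# publicly downloadable -- an illustration, not OpenAI's actual system.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Austria is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # scores for every candidate next token

# Turn the scores at the final position into probabilities and show the top guesses.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok_id))!r}: {p:.3f}")

# There is no "desire" or understanding here -- just statistics learned from a
# huge pile of text, applied one token at a time to produce fluent-looking replies.
```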

And yet, as Metz goes on to point out, these chatbots still aren’t perfect. Franz Broseph was constructed at great expense to be nearly perfect at playing Diplomacy, but that’s all it can do. If you toss it into a completely unrelated conversation, it won’t do well at all. Even ChatGPT, as impressive as it clearly is, slips up when it encounters unexpected discontinuities in a conversation. And it still can’t master the more subtle aspects of human speech, as was seen when someone tried to use it to enter a pun contest. The chatbot has no concept of humor and shows no sign of developing one.


So it would seem that the missing element is the human factor. These programs do seem amazing, but at the end of the day, they are not truly “thinking” the way that human beings think. We don’t even fully understand how our own brains think and store or process memories, so we clearly can’t teach a machine to do the same. At least not yet. ChatGPT has never had a single original thought. If and when the day arrives that it does, we can immediately go into Skynet mode and flee for the hills. But until then, we should be okay, provided one of the Boston Dynamics robot dogs with a rifle doesn’t get taken over by hackers.
