We may have finally broken ChatGPT

(Oleg Reshetnyak via AP)

When I say “broken,” I don’t mean we’ve cracked the code. I’m talking about finding an input that the chatbot simply fails to handle. As I’ve said previously, I don’t tend to use ChatGPT for work, for a number of reasons. First, it frequently gets things wrong, so you have to double-check whatever it tells you. It’s also incapable of producing viable URLs for web content in most cases. And its training data consists mostly of material from 2021 and earlier, so it’s useless when it comes to breaking news.


But I have taken to using ChatGPT as sort of a hobby. I’m always looking for ways to get it to show me some hint of underlying sentient cognition. Alternately, I’d settle for proof that it’s not as much of a godlike intellect as its creators might have us believe. This weekend, I may have done the latter, though I can’t even take credit for the idea.

You may recall our recent experiment where I asked ChatGPT for a set of lottery numbers to play in the New York Lotto game. It did so and my wife went out and bought tickets for five drawings over the course of the next three nights using the bot’s numbers. Not only did ChatGPT fail to win the Lotto for me, but it also failed to pick a single correct number in any of the five drawings. The odds of that happening seemed staggering. (The bot put the odds at 1 in 574.)
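That 1-in-574 figure is the bot’s own estimate, and it’s easy to sanity-check. Assuming New York Lotto draws six main numbers from 1 to 59 (ignoring the bonus number) and the same six bot-picked numbers were played in every drawing, the chance of matching nothing across five independent drawings works out to roughly 1 in 29, not 1 in 574:

```python
from math import comb  # comb(n, k) is the binomial coefficient "n choose k"

# Probability that one 6-of-59 drawing shares none of our six numbers:
# all six drawn numbers must come from the other 53.
p_miss_one = comb(53, 6) / comb(59, 6)

# Probability of matching nothing in all five independent drawings.
p_miss_all_five = p_miss_one ** 5

print(f"one drawing: {p_miss_one:.3f}")        # about 0.51
print(f"five drawings: {p_miss_all_five:.4f}")  # about 0.034, i.e. roughly 1 in 29
```

So under those assumptions, a completely clean five-drawing whiff is unlucky but not astonishing.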

While reviewing the tickets, my wife was the one who came up with a different test. If the chatbot is that incredibly bad at picking winning numbers (or is just pretending to be), she suggested flipping the test: have it generate 53 of the 59 lottery numbers at random and then purchase a ticket using the six numbers that it passed over. Why not, right?

I dutifully fired up the bot and asked it to randomly generate 53 nonrepeating whole numbers between 1 and 59 and list them in ascending order, from lowest to highest. That’s when things took a turn toward the strange. (I’m including links to screen captures of the conversation so you can check the work if you wish.) First, the bot generated 56 numbers, not 53. When I pointed this out, it apologized and tried again. It once again generated too many numbers, and this time they weren’t in ascending order.
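For what it’s worth, the request itself is trivial for an actual pseudorandom generator. This isn’t what ChatGPT runs internally, just a sketch of the spec in ordinary Python:

```python
import random

# 53 distinct whole numbers drawn from 1–59, listed lowest to highest.
nums = sorted(random.sample(range(1, 60), 53))

# random.sample guarantees no repeats; sorted() handles the ordering.
print(len(nums))  # 53
```

Which makes the bot’s repeated stumbles over the same three constraints all the stranger.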


After three attempts, I grew frustrated and asked if there was any way I could phrase the question that would allow it to complete the task successfully. It apologized again (it was apologizing for a lot that day) and produced a sentence I could input to obtain the desired results. I copied the sentence and pasted it into the input box. It then produced all 59 numbers, and they were not all in ascending order. Trying a different tactic, I asked it to review the last set of numbers for me and verify that there were 53 of them and that they were all in ascending order.

After another apology, it said that it had produced an inaccurate answer and tried again. This time it generated all 59 numbers in order. I told it what it had done and asked if “this was something we should tell your creator about.” It didn’t seem to care for that suggestion at all and, without my asking, it tried again. And it failed again. On the seventh try, it finally produced 53 numbers in ascending order, but they were the numbers 1 through 53. (Not exactly random.) So I asked it what the odds were that a random number generator would randomly omit the final six numbers. It told me the chance of that happening would be 0.000005%. Using part of its last answer, I asked if these errors suggested that its number generator was not truly random or if it contained programming errors. It denied that entirely and at length.
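That 0.000005% figure can also be checked. If 53 of the 59 numbers really are chosen uniformly at random, the six left out form a uniformly random six-element subset, so the chance that they are exactly 54 through 59 is one in "59 choose 6":

```python
from math import comb

# Probability that the six omitted numbers are exactly {54, 55, 56, 57, 58, 59}.
p = 1 / comb(59, 6)

print(f"1 in {comb(59, 6):,}")  # 1 in 45,057,474
print(f"{p * 100:.7f}%")        # about 0.0000022%
```

That is roughly 0.0000022%, the same order of magnitude as the bot’s estimate, so its arithmetic was in the right ballpark even while its “random” output wasn’t.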


That’s when something unforeseen happened. I accepted its explanation and asked it to run the problem again with the missing numbers not being at the end of the sequence. I hit enter and I waited. The cursor blinked. Even on some of the deepest research questions I’ve posed, I’ve never seen ChatGPT take longer than five or possibly ten seconds. I looked at the clock and stared. I walked away for a coffee and came back. Five minutes had passed and there was still no answer. Then I asked a question I had never asked the chatbot before. “Are you still there?” I actually experienced a feeling of dread before hitting the Enter key.

The bot answered immediately. “Yes, I am still here.” (That was creepy too, for some reason.) It then delivered a new set of numbers and they were, amazingly, a proper response. It was 53 numbers in the specified range and the missing numbers were not all at the end. They were all in the upper forties and fifties, so still didn’t look all that random, but it was technically a success.

We then had a brief discussion about how the system finds errors in its performance and corrects them and I called it a day. It was just so bizarre. So what do you think? Was my question simply too complicated for the bot to handle? Were there too many parameters to apply to the output of its random generator? I’m having a hard time imagining anything related to math that a system of that size couldn’t handle. But it is what it is.


We’ll go and play those six numbers in Wednesday’s Lotto drawing. If it somehow works, I’ll call Ed or John from the Caribbean and ask them to post an update for you.


