
Microsoft: Our new AI shows "signs of human reasoning"


Make up our minds for us, already. One day the eggheads working on new Artificial Intelligence projects tell us that the machines can’t be sentient and anyone who says otherwise is a conspiracy theorist. Then they turn around and say that there are indications that they may have spoken too soon. The latter seems to be the case this week. We recently learned that researchers at Microsoft who are working on the company’s new AI systems published a paper in March with the “provocative” (or alarming) title, “Sparks of Artificial General Intelligence.” In it, they described some of the surprising results they observed while experimenting with the latest version of the model behind ChatGPT. (That would be GPT-4.) They suggest that the bot has been doing things it shouldn’t be able to do without a comprehensive understanding of the physical world, which it supposedly lacks. And now they are telling reporters that the system may be (emphasis on “may“) showing “signs of human reasoning.” (dnyuz)

But some believe the industry has in the past year or so inched toward something that can’t be explained away: A new A.I. system that is coming up with humanlike answers and ideas that weren’t programmed into it.

Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring the idea. One will be run by Sébastien Bubeck, who was the lead author on the Microsoft A.G.I. paper.

About five years ago, companies like Google, Microsoft and OpenAI began building large language models, or L.L.M.s. Those systems often spend months analyzing vast amounts of digital text, including books, Wikipedia articles and chat logs. By pinpointing patterns in that text, they learned to generate text of their own, including term papers, poetry and computer code. They can even carry on a conversation.
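It’s worth pausing on what “pinpointing patterns” actually means in practice. Here is a deliberately tiny sketch of my own, nothing at all like the billion-parameter neural networks behind GPT-4 and not anything from the Microsoft paper, that illustrates the basic statistical idea: tally which words tend to follow which other words in some text, then reuse those tallies to generate new text. The little practice corpus below is invented purely for the example.

```python
# Toy illustration of the core idea behind language models (NOT the actual
# method used by GPT-4): learn which words follow which other words in a
# body of text, then sample from those learned patterns to generate text.
import random
from collections import defaultdict

# A made-up miniature "training corpus" for demonstration only.
corpus = (
    "the egg sat on the thimble and the thimble sat on the card "
    "and the card sat on the toy car and the toy car drove away"
).split()

# "Pinpoint patterns": record every word observed to follow each word.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Generate new text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # dead end: this word never had a successor
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# One possible output: "the thimble sat on the card and the toy car drove away"
```

A real LLM does the same kind of next-word prediction, but over trillions of words and with a neural network that can pick up far subtler patterns than simple word pairs, which is why the debate is over whether something more than pattern-matching is going on.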

One example they provide is certainly intriguing. They posed a logic problem to GPT-4, describing a hypothetical collection of items: a laptop, nine eggs, a book, a bottle, and a nail. They then asked it how to stack those items on top of each other in a stable fashion. The bot quickly came up with a very workable solution. It’s highly unlikely that anyone has ever conducted that exact experiment and published the results, so the specific answer is almost certainly not sitting in the bot’s training data. And figuring out a problem like that requires an understanding of how things like eggs and bottles behave in the physical world under different conditions. So how did it manage the feat? The researchers aren’t entirely sure.

I couldn’t resist the challenge, so I decided to come up with a similar type of problem and ask ChatGPT. (I realize I’m using the earlier, public version and not the latest one the researchers are working with.) I told it that I had a remote control toy car with a playing card balanced on top of it. I said that I placed a thimble on top of the card and balanced an egg in the thimble. I then asked where the egg would be after I drove the car out of my kitchen, through my dining room, and into my den and then stopped it.

Rather than simply predicting that the egg would be in the den, the chatbot said that the egg “will most likely fall off the thimble due to the movement and vibration caused by the car.” It went on to explain that balancing an egg on a thimble is delicate. It admitted that it couldn’t predict exactly where the egg would fall off because that would depend on various factors including speed, bumps in the path, etc. (I’ve uploaded a screen capture of the conversation.)

How did it manage to put that answer together? I’ll confess that I have no idea. I complimented the bot on its understanding of how objects interact in the physical world, and it quickly assured me that it “doesn’t have direct sensory experience.” But it said it has been trained on physics and “general knowledge about the physical world.”

Whatever the reason, the system is doing things that these researchers can’t fully explain. And that seems to be the point. If your creation is doing something you – the person who built it – can’t understand, shouldn’t that be a sign that perhaps you should stop for a while or at least slow down?

Still, we should be okay provided they keep the beast locked in its virtual cage, right? Oops. Too late. There’s already been a data breach at ChatGPT. But I’m sure everything will be just fine.
