ChatGPT just passed a Wharton MBA exam. Now what?

The buzz around OpenAI’s ChatGPT chatbot continues to grow as people discover new and more inventive ways to use it, but some of that buzz is taking on an increasingly dark tone. As the bot continues to expand its language base, becoming more “humanlike” in its responses and more accurate in the material it generates, it’s becoming clear that the technology may be reaching the point where it outgrows its makers. The most recent example turned up when Christian Terwiesch, a professor at the University of Pennsylvania’s Wharton School, tasked ChatGPT with taking the final exam of a core course in Wharton’s MBA program, Operations Management. That exam is a daunting challenge for even many of the brightest postgraduate students, but ChatGPT not only passed it with a respectable grade, it did so in a very short period of time.


This week, Terwiesch released a research paper in which he documented how ChatGPT performed on the final exam of a typical MBA core course, Operations Management.

The A.I. chatbot, he wrote, “does an amazing job at basic operations management and process analysis questions including those that are based on case studies.”

It did have shortcomings, he noted, including an inability to handle “more advanced process analysis questions.”

But ChatGPT, he determined, “would have received a B to B- grade on the exam.”

Some people in the tech industry are raising the alarm about what this could all mean for the future of human beings. One analyst who specializes in software that helps identify AI-generated text in academic settings summed it up this way: “I’m of the mind that AI isn’t going to replace people, but people who use AI are going to replace people.”

Some users have already demonstrated the darker side of these large language model chatbots. It turns out that ChatGPT is also pretty good at writing malware that can destroy your computer. (Gizmodo)

Yes, according to a newly published report from security firm CyberArk, the chatbot from OpenAI is mighty good at developing malicious programming that can royally screw with your hardware. Infosec professionals have been trying to sound the alarm about how the new AI-powered tool could change the game when it comes to cybercrime, though the use of the chatbot to create more complex types of malware hasn’t been broadly written about yet.

CyberArk researchers write that code developed with the assistance of ChatGPT displayed “advanced capabilities” and could “easily evade security products,” placing it in a subcategory of malware known as “polymorphic.” Polymorphic malware continually rewrites its own code, so every copy carries a different fingerprint and signature-based antivirus scanners fail to recognize it.
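To see why that matters, here’s a toy sketch of how signature-based scanning works and why a mutating payload slips past it. This is my own harmless illustration, not code from the CyberArk report: two byte strings that do the same thing hash to entirely different signatures, so a scanner that only knows the first variant’s fingerprint misses the second.

```python
# Toy illustration of why polymorphic code defeats signature scanning.
# The "payloads" are harmless strings invented for this example; no
# actual malicious behavior is involved.
import hashlib

payload_v1 = b"x = 1 + 1; print(x)"
payload_v2 = b"y = 2 * 1; print(y)"  # same effect, different bytes

# A naive signature database that knows only the first variant.
known_signatures = {hashlib.sha256(payload_v1).hexdigest()}

for payload in (payload_v1, payload_v2):
    sig = hashlib.sha256(payload).hexdigest()
    verdict = "FLAGGED" if sig in known_signatures else "missed"
    print(f"{payload!r}: {verdict}")
```

Real polymorphic malware automates exactly that mutation step on every copy, which is why the researchers found the “easily evade security products” capability so alarming.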


Now you don’t need to wait to be attacked by hackers. Just log in and ask ChatGPT to write some malware for you and… bingo. Your laptop is dead. Science is awesome, isn’t it?

Some people see even darker possibilities on the horizon. Libby Emmons at Human Events writes this week that ChatGPT signals “a rapidly encroaching singularity that threatens humanity.”

We are headed toward a collision in the concept of humanity itself. Long imagined, recently predicted, we are arriving at a point where human beings and man-made machines will become, at least in function, indistinguishable from one another.

What ChatGPT signals more than anything is that the singularity is imminent, the point at which our creation betters us in nearly every way is not only coming, but it is essentially already here. And with that comes many questions asked for generations by our theologians, philosophers, artists, scientists.

Does the machine imagine or does it simulate imagination? And is there a difference? If the simulation is as convincing as the real thing, is there any value to the real thing? Is there any value to humanity when it becomes apparent that our machines create art that is equally as pleasing, stories that are equally as compelling, can parse and assimilate data better than any of our top scientists?

As I’ve previously written, I’m far less concerned about the possibility that these large language model chatbots will attain sentience, “wake up,” and kill us all. But Emmons makes a valid point in suggesting that the bot doesn’t need to achieve sentience if it can imitate it so well that we can’t tell the difference. And if it’s better than us at everything, what point is there in relying on human beings to do anything other than maintain the code or generate the electricity that feeds the digital beast?


I think the bigger question here should be why something like ChatGPT was created in the first place. Does the chatbot even have a genuinely productive use that doesn’t carry a downside for people? In the broader sense, ChatGPT seems to be “useful” for only two things. It can be used by humans to cheat on exams or to inflate their output to a level their own capabilities and skills don’t merit. Or it can be used to simply replace humans in a variety of knowledge-based occupations, such as journalism (gulp) or software coding.

Underneath it all, there lies a trap. The underlying reality of ChatGPT is that it doesn’t actually “know” anything, nor does it perform any true cognitive functions. It is trained on a massive repository of human writing and simply predicts, word by word, what is most likely to come next, stitching the results together in increasingly clever and realistic ways. But if it is ultimately allowed to succeed in these endeavors and replace the humans in those fields, there will be no more fresh “food” to feed into that massive repository of text. At that point, progress for ChatGPT ceases, and there may not be enough smart people left to pick up the pieces.
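For readers who want a concrete feel for that “stitching,” here’s a minimal sketch using a bigram Markov chain: pick each next word purely from frequency statistics of the training text. It is vastly simpler than the transformer architecture behind ChatGPT, and the tiny corpus here is invented for illustration, but the core task is the same: predict the next token from patterns in prior text, with no understanding anywhere in the loop.

```python
# A toy next-word predictor: record which word follows which in a
# training text, then generate by sampling a plausible successor.
import random
from collections import defaultdict

corpus = (
    "the chatbot writes text the chatbot writes code "
    "the model predicts the next word the model predicts the next token"
)

# Count observed successors for every word in the corpus.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: this word was never followed
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the model predicts the next word the chatbot writes"
```

Scale that idea up to billions of parameters trained on a huge slice of the internet and you get output fluent enough to pass an MBA exam, but it is still, at bottom, remixing what humans already wrote.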
