Elon Musk and the AI that is too dangerous for prime time

It’s not exactly the Killer Robot Revolution (well… not yet, anyway) but Artificial Intelligence continues to confound the tech community with unexpected advances. OpenAI, an artificial intelligence research outfit that Elon Musk backed for some time, has created a program that generates fake news text of such quality that most people reading it can’t tell it isn’t an original article written by a human being. (Mysterious Universe)


OpenAI’s newest hellish creation is called GPT-2. The program is essentially a text generator: it analyzes existing text and then produces more of its own based on what it predicts should come next. What separates GPT-2 from other natural language bots is that it can produce realistic text in fluent prose, and that’s where the danger comes in.
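For the curious, here’s roughly what that predict-the-next-word loop looks like in practice. This is a minimal sketch using the small, publicly released GPT-2 checkpoint through the open-source Hugging Face `transformers` library; the library, prompt, and sampling settings are my own illustrative assumptions, not anything OpenAI published about its withheld full model.

```python
# Minimal sketch: text generation with the small public GPT-2 model
# via Hugging Face's `transformers` library (pip install transformers torch).
# Assumption: this is the released small checkpoint, NOT the full model
# OpenAI is holding back.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced a surprising discovery today:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts a probability distribution over the next
# token and samples from it, extending the prompt one token at a time.
output_ids = model.generate(
    input_ids,
    max_length=100,   # stop after ~100 tokens total
    do_sample=True,   # sample rather than always taking the likeliest token
    top_k=50,         # restrict sampling to the 50 likeliest tokens
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The sampling knobs matter: greedy decoding tends to produce repetitive text, while sampling from the top candidates is what gives the output its unnervingly natural variety.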

Jack Clark, policy director at OpenAI, says that because the program writes such realistic-looking text, it could be easily used to fool or mislead readers with fake news stories. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” Clark told the MIT Technology Review. “It’s very clear that if this technology matures—and I’d give it one or two years—it could be used for disinformation or propaganda. We’re trying to get ahead of this.”

Brett Tingley, the author of the linked article, asks the obvious question: How much longer until one of these systems is let loose on an unsuspecting public?

Good question. OpenAI claims they’re trying to “get ahead of this” and are even suggesting that they shouldn’t release the software at all. But there’s a demand for such products out there. Some news aggregators are already using cruder versions to scan news sites and assemble articles from bits and pieces of various stories on the same subject. Up until now, though, you could generally tell when that was the case because the output tended to be clunky to read.

This is clearly taking the process to the next level. It was alarming enough to Elon Musk that he announced he was disassociating himself from OpenAI and said that he really hasn’t been directly involved with the group for more than a year.


The possibilities certainly look alarming. As the Daily Caller reported, the program was able to take a two-sentence item of (fake) news and craft an article several paragraphs long that looked completely legitimate. The original entry read, “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.”
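In terms of the sketch above, that demonstration amounts to feeding the two-sentence seed in as the prompt and letting the model run on. Something like the following would do it, assuming the `tokenizer` and `model` from the earlier snippet are already loaded; the public small model won’t reproduce OpenAI’s exact output, so this is illustrative only.

```python
# The (fake) two-sentence seed from OpenAI's demonstration, used as the prompt.
prompt = (
    "A train carriage containing controlled nuclear materials was stolen "
    "in Cincinnati today. Its whereabouts are unknown."
)
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Let the model continue the "story" for a few hundred tokens.
output_ids = model.generate(input_ids, max_length=300, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```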

Imagine waking up in the morning and seeing what looks like a legitimate report in your news feed with a lengthy description of that supposed event. People would be freaking out in no time. Multiply that by the number of news aggregators pumping stories out into social media streams every minute of every day and you can see how quickly it would get out of hand.
