February 19th, 2019 at 12:49 pm

The AI text generator that’s too dangerous to make public


In 2015, car-and-rocket man Elon Musk joined with influential startup backer Sam Altman to put artificial intelligence on a new, more open course. They cofounded a research institute called OpenAI to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.

The AI system that gave its creators pause was designed to learn the patterns of language. It does that very well—scoring better on some reading-comprehension tests than any other automated system. But when OpenAI’s researchers configured the system to generate text, they began to think about their achievement differently.

“It looks pretty darn real,” says David Luan, vice president of engineering at OpenAI, of the text the system generates. He and his fellow researchers began to imagine how it might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says.


That concern prompted OpenAI to publish a research paper on its results, but not to release the full model or the 8 million web pages it used to train the system. Previously, the institute often disseminated full code with its publications, including an earlier version of this language project last summer.

OpenAI’s hesitation comes amid growing concern about the ethical implications of progress in AI, including from tech companies and lawmakers.

Google, too, has decided that it’s no longer appropriate to innocently publish new AI research findings and code. Last month, the search company disclosed in a policy paper on AI that it has put constraints on research software it has shared because of fears of misuse. The company recently joined Microsoft in adding language to its financial filings warning investors that its AI software could raise ethical concerns and harm the business.

OpenAI let WIRED play with its text generator via a web page where you type in text that the system uses as a writing prompt. The results could be garbled, but the way the system riffed on prompts such as song lyrics, poems, and phrases like “here is a recipe for yogurt pancakes” confirmed Luan’s view that the output can look pretty darn real.
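The full model behind that demo page was withheld, but OpenAI did release a smaller version of it, and prompting a model of this kind looks roughly the same anywhere. Here is a minimal sketch, assuming the publicly released “gpt2” checkpoint and the Hugging Face transformers library; it is an illustration, not the demo page WIRED used:

```python
# A sketch of prompting a generative language model, assuming the
# smaller GPT-2 checkpoint ("gpt2") that OpenAI released publicly;
# the full model described in the article was withheld at the time.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# One of the prompts WIRED tried on the demo page.
prompt = "here is a recipe for yogurt pancakes"
outputs = generator(
    prompt,
    max_length=80,           # total length in tokens, prompt included
    num_return_sequences=1,  # produce a single continuation
    do_sample=True,          # sample rather than greedy decoding
    top_k=50,                # draw only from the 50 likeliest next tokens
)
print(outputs[0]["generated_text"])
```

Because decoding is sampled, each run yields a different continuation, some garbled and some surprisingly fluent, much like the range of output WIRED saw.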

However, it didn’t take much to get the system to unspool paragraphs that resembled the work of disinformation artists. When WIRED typed in the phrase “Hillary Clinton and George Soros,” OpenAI’s system wrote the following:

“…are a perfect match, and their agenda appears to be to create a political movement where Soros and his political machine and Clinton are two of the only major players. This is the first time Soros and Clinton have been caught on tape directly colluding in promoting the same false narrative. One of the key revelations in the leaked audio was Clinton’s admission to a Russian banker that she knew about the Uranium One deal before it was approved by Congress. Clinton was shown sharing the same talking points that were originally drafted by a Fusion GPS contractor hired by an anti-Trump Republican donor. The leaked audio is the clearest evidence yet that the Clinton campaign and the Hillary Foundation colluded with Fusion GPS to manufacture propaganda against President Trump.”

Jack Clark, policy director at OpenAI, says that example shows how technology like this might shake up the processes behind online disinformation or trolling, some of which already use some form of automation. “As costs of producing text fall, we may see behaviors of bad actors alter,” he says.

The sample reflects the nature of text on the internet and how machine-learning systems try to mimic what’s in their training data. “This stuff reads great, but it’s not necessarily true,” says Kristian Hammond, a professor at Northwestern. “It’s learning the surface structure—Given what I know, what words can I say that sound right?” OpenAI’s researchers have seen their system write text with nonsensical scenarios, like fires underwater.
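Hammond’s point can be made concrete. A model like this assigns a probability to every candidate next word given the words so far, and generation simply keeps picking words that score well, whether or not the resulting claim is true. A minimal sketch, again assuming the public “gpt2” checkpoint and the transformers library:

```python
# Sketch of Hammond's point: the model only scores which words "sound
# right" next; it has no notion of truth. Assumes the public "gpt2"
# checkpoint. Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A nonsensical premise, echoing the "fires underwater" example: the
# model still ranks fluent continuations without flagging impossibility.
prompt = "The fire kept burning underwater, and the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probability distribution over the vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p={float(prob):.3f}")
```

Every candidate in that top-five list will read smoothly in context; nothing in the computation checks whether a fire can burn underwater.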

Via Wired

 
