OpenAI's 'deepfakes for text', GPT-2, may be too risky to release

OpenAI co-founder Elon Musk

The text was generated by an AI model called GPT-2, built by OpenAI, an organization funded by Elon Musk and Reid Hoffman. Users simply feed it a few words on a topic and the AI autonomously writes a story.

In their recent announcement, OpenAI researchers admitted that malicious applications of the GPT-2 model include generating misleading news articles, impersonating others online, and creating abusive or faked content to post on social media.

The AI is also adept at writing news, whether it be real or fake. Examples of what GPT-2 "writes" unprompted, which OpenAI released on GitHub together with a weaker version of the model, range from slightly surreal to downright freaky.

On the surface, GPT-2 works somewhat like a popular game one can play with the far less advanced predictive-text AI on any smartphone: accepting its word suggestions one after another to create sometimes surprising little stories.
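For intuition, the snippet below mimics that game with a hand-written table of word suggestions. Everything in it is illustrative; GPT-2 does the same kind of one-word-at-a-time prediction, but with probabilities over tens of thousands of tokens learned from web text rather than a toy lookup table.

    import random

    # Toy "suggestion table": for each word, a few plausible next words.
    # Purely illustrative -- GPT-2 learns its suggestions from millions
    # of web pages instead of using a hand-written table like this.
    SUGGESTIONS = {
        "the":        ["scientists", "president", "unicorns"],
        "scientists": ["discovered", "said", "warned"],
        "discovered": ["a", "the"],
        "a":          ["herd", "strange", "new"],
        "herd":       ["of"],
        "of":         ["unicorns", "the"],
        "unicorns":   ["living", "speaking"],
        "living":     ["in"],
        "in":         ["the", "a"],
    }

    def write_little_story(start, length=12):
        """Accept one suggestion after another, phone-keyboard style."""
        words = [start]
        for _ in range(length):
            choices = SUGGESTIONS.get(words[-1])
            if not choices:  # no suggestion for this word: stop early
                break
            words.append(random.choice(choices))
        return " ".join(words)

    print(write_little_story("the"))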

According to Wired, OpenAI, co-founded by none other than brainiac Elon Musk and startup backer Sam Altman, has developed an AI system so good at its job that it would be unsafe to let loose on the public.

OpenAI's text generator will be kept under lock and key until its creators understand what it can and can't do.

GPT-2 was trained on text scraped from approximately 8 million web pages and generates samples "close to human quality".
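Only the weaker version of the model is public so far, but sampling from it looks roughly like the sketch below. Note the caveats: this uses the third-party Hugging Face transformers package and its small "gpt2" checkpoint rather than OpenAI's own release code, so treat it as an illustrative sketch, not the official workflow.

    # Minimal sampling sketch. Assumes the third-party Hugging Face
    # `transformers` package (pip install transformers torch) and its
    # small "gpt2" checkpoint -- the released weak model, not the
    # withheld full one.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Feed it a few words on a topic...
    prompt = "Scientists today announced"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # ...and let it continue, predicting one token at a time.
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,                       # sample rather than always take the top word
        top_k=40,                             # consider only the 40 likeliest next tokens
        pad_token_id=tokenizer.eos_token_id,  # silence a padding warning
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))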

Though the model can also be used for innocuous purposes, such as creating better dialogue bots, the nonprofit has, at least for now, decided against releasing the training dataset or the full code for the model. "This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas," the researchers wrote. The San Francisco group's founders have long been mindful of both the positive and negative potential of AI.

Under the hood, GPT-2 is a Transformer model. Transformers, including OpenAI's original GPT, build on the architecture that Google Brain researcher Ashish Vaswani and his colleagues described in 2017; they are simpler than recurrent neural networks (RNNs) and convolutional neural networks (CNNs), are more easily adapted to parallel computation, and require significantly less time and compute to train.
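That parallelism comes from the Transformer's attention operation, in which every position in a sequence attends to every other position in a single batch of matrix multiplications, instead of stepping through the text one word at a time as an RNN must. A minimal NumPy sketch of scaled dot-product attention, with illustrative shapes:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

        The whole sequence is handled in one matrix multiply, which is
        why Transformers parallelize better than RNNs, which must step
        through positions sequentially.
        """
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarities
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                              # weighted mix of values

    # Illustrative shapes: a 5-token sequence of 8-dimensional vectors.
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(5, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 8)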

OpenAI is modeling its slow release of GPT-2 on the "responsible publication" standards practiced by biotechnology and cybersecurity firms, where the potential for abuse or misuse is weighed against the potential good a technology can do.

A fake news story about missing nuclear material gets the name of the US Energy Secretary wrong, but how many people know that it's Rick Perry?

OpenAI policy director Jack Clark said it would not be long before the AI could reliably churn out fake news stories, predicting this would be possible within one or two years. "We are trying to develop more rigorous thinking here," he said.
