GPT-3 Could Outdo Humans in Spreading Misinformation and Fake News

Let's see how misinformation experts have demonstrated how effectively GPT-3 can be used to misinform.

GPT-3, short for Generative Pre-trained Transformer 3, is a language model that leverages deep learning to generate human-like text. Beyond prose, it can also produce code, stories, poems, and more. At its core, it is an auto-complete system whose underlying machine learning model has been trained on vast quantities of text available on the Internet.
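The "auto-complete" idea above can be illustrated with a deliberately tiny sketch: predict the next word from the word that came before. This toy bigram model is our own stand-in for the concept, not GPT-3 itself, and the miniature training corpus is invented for the demo — a real model learns the same kind of statistics from billions of words.

```python
import random
from collections import defaultdict

# Tiny made-up "training corpus" for the demo (real models use web-scale text).
corpus = (
    "the model generates text . the model generates code . "
    "the model generates stories ."
).split()

# Count which words follow which: this table IS the (bigram) language model.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def complete(prompt_word, length=5, seed=0):
    """Auto-complete: repeatedly sample a likely next word after the prompt."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(complete("the"))
```

Where this sketch picks one word by frequency over a handful of sentences, GPT-3 scores every token in a large vocabulary using a deep transformer network — but the interface is the same: text in, statistically plausible continuation out.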

It outperforms any previous language-generation program, and large pre-trained language models like it are likely to become an integral part of AI applications in the near future. As OpenAI's own paper notes in section 3.9.4, GPT-3's ability to generate several paragraphs of synthetic content that people find difficult to distinguish from human-written text represents a concerning milestone.

AI-powered misinformation:

OpenAI's text-generating system GPT-3 has captured a lot of mainstream attention. OpenAI isn't the only organization with powerful language models, but few rivals match the computing power and data OpenAI has used to build the GPT series.

GPT-3 is an AI algorithm capable of generating remarkably coherent text, and its makers have cautioned that it could be employed as a weapon of online misinformation.

A team of misinformation researchers at Georgetown has demonstrated how effectively GPT-3 can be used to mislead and misinform. Their results suggest it could intensify certain forms of deception that would be particularly challenging to detect.

The team used GPT-3 to generate misinformation, including stories built around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of misinformation.

The dataset on which GPT-3 was trained has a cutoff of October 2019, so the model knows nothing about events after that date. Even so, it could be a weapon of choice for actors who want to promote fake tweets to manipulate the price of cryptocurrencies.

The team says GPT-3, or similar AI language algorithms, could prove especially effective for automatically generating short messages on social media — what the researchers call one-to-many misinformation. Making GPT-3 behave predictably, however, could be a challenge for agents of misinformation.

The team showed participants example tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, they observed that participants were swayed by the messages. After seeing posts opposing the China sanctions, for example, the percentage of respondents who said they were against such a policy doubled.

In another political scenario, GPT-3 was able to completely change some people's minds, with its assertions making respondents 54% more likely to agree with a position after being shown one-sided AI-generated text.

AI researchers have lately built programs capable of using language in surprising ways, and GPT-3 is perhaps the most alarming demonstration of all. Researchers at OpenAI created GPT-3 by feeding vast amounts of text scraped from web sources into an exceptionally large AI model designed to process language.

The Georgetown work highlights a significant issue that OpenAI hopes to mitigate, and the company is actively working to address the risks associated with GPT-3.

Analytics Insight
www.analyticsinsight.net