GPT-3: Potential Risks and Benefits of Generating Information


Exploring the potential risks and benefits of OpenAI's GPT-3 chatbot in generating information

According to new research, artificial intelligence language models like OpenAI's GPT-3 can produce tweets that are more accurate and understandable than those written by people, as well as disinformation that is harder to detect.

To assess the possible hazards and advantages of AI models in producing and transmitting (dis)information, researchers at the University of Zurich focused their investigation on GPT-3. The aim of the study, which had 697 participants but has yet to be peer-reviewed, was to see whether people could distinguish false information from reliable information in tweets.

The researchers wrote in the paper's abstract, posted on a preprint website, that their results showed GPT-3 to be a double-edged sword: it produced accurate information that was easier to understand than that written by humans, but it could also produce more compelling disinformation.

GPT-3 demonstrated the ability to create information that was both accurate and more understandable than that of real Twitter users. But the researchers also discovered that the AI language model could generate convincing disinformation.

More worryingly, participants could not consistently distinguish tweets written by genuine Twitter users from those produced by GPT-3.

Federico Germani, a postdoctoral researcher at the university, said that the study revealed the power of AI both to inform and to mislead, raising important questions about the future of information ecosystems.

These results imply that information campaigns developed by GPT-3, based on well-structured prompts and reviewed by trained people, could be highly effective, for example in a public health crisis requiring quick and unambiguous communication with the public. At the same time, the findings raise serious concerns about the prospect of AI spreading misinformation, and the researchers urged policymakers to respond with strict, fact-based, and ethically informed policies to counter the possible hazards.

Nikola Biller-Andorno, director of the IBME at the university, said that the findings highlighted the urgent need for proactive regulation to prevent the possible harm caused by AI-driven disinformation campaigns.

In the digital age, Biller-Andorno said, recognizing the dangers posed by AI-generated misinformation is essential to preserving a trustworthy information ecosystem and safeguarding public health.



Analytics Insight
www.analyticsinsight.net