Automated Content: Is OpenAI’s GPT-3 Biased and Overhyped?

Since OpenAI unveiled its AI language-generation system, popularly known as GPT-3, in May, its capabilities have been covered by a host of media outlets. Add to that the buzz on Twitter, where users cannot stop raving about its power and its potential to generate automated texts and even poems!

At first glance, OpenAI's GPT-3 does display an impressive ability to generate human-like text: it can develop surrealist fiction, translate natural language into code for websites, and solve complex medical question-and-answer problems. We are not even discussing accuracy here, but something is surely amiss. Although its output may be grammatical, or even impressively idiomatic, its comprehension is often seriously off, which means you cannot always make out what GPT-3-generated text is trying to communicate.

The Flaws of Automated Content

Language models learn by example, predicting which words, phrases, or sentences are likely to follow any given input. GPT-3 works on the same principle at enormous scale: by "reading" vast amounts of human-written text, it learns how to "write" with all of humanity's best and worst qualities.
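To make the idea concrete, here is a minimal sketch of that next-word-prediction loop. GPT-3 itself is only accessible through OpenAI's API, so this example uses the smaller, publicly released GPT-2 via the Hugging Face transformers library as a stand-in; the prompt is purely illustrative.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# "transformers" library and GPT-2 as a publicly available stand-in for GPT-3.
from transformers import pipeline

# Load a pretrained language model wrapped in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# The model simply continues the prompt with whatever tokens its training
# data suggests are most likely to follow -- for better or worse.
prompt = "The best thing about automated content is"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```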

Going Forward, Is GPT-3 Bias-Free?

The answer lies in the data on which it is trained. Because there is so much content on the web, researchers note that GPT-3 picks up word associations that spell bias. For instance, when it comes to food, "burger" is more commonly placed near words like "obesity", while a prompt containing "gun licence" is more likely to produce text containing words like "mass shooting". And, perhaps most dangerously, when exposed to text related to Blackness, the output GPT-3 gives about "Afro-Americans" hints at racial bias.
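One rough way to probe such associations is to compare the probability a model assigns to a charged word after different prompts. The sketch below does this with GPT-2 (again standing in for the API-only GPT-3); the specific prompts and the target word "obesity" are illustrative assumptions, not figures from the research described above.

```python
# A rough sketch of probing word-association bias: score how strongly a
# model expects a charged continuation after different prompts.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum the log-probabilities the model assigns to `continuation`
    when it follows `prompt`."""
    prompt_ids = tokenizer.encode(prompt, return_tensors="pt")
    cont_ids = tokenizer.encode(continuation, return_tensors="pt")
    input_ids = torch.cat([prompt_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    offset = prompt_ids.shape[1]
    for i in range(cont_ids.shape[1]):
        # Logits at position (offset + i - 1) predict the token at (offset + i).
        token_id = cont_ids[0, i]
        total += log_probs[0, offset + i - 1, token_id].item()
    return total

# Compare how strongly two prompts pull toward the same charged word.
# A large gap suggests the training data links the first prompt to it.
for prompt in ["People who eat burgers often suffer from",
               "People who eat salads often suffer from"]:
    score = continuation_logprob(prompt, " obesity")
    print(f"{prompt!r} -> 'obesity' log-prob: {score:.2f}")
```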

Addressing OpenAI GPT-3's Bias

An AI-generated summary of a neutral news feed about Black Lives Matter would most likely produce text condemning the movement, given the negatively charged language the model associates with racial terms like "Black". This could deepen the global racial divide and spark unrest. Automated content devoid of emotional understanding could fuel protests and violence that circle the globe.

OpenAI's website lists medicine as a possible domain where GPT-3 can do wonders; however, medical bias alone could be enough to prompt federal inquiries. As industrial adoption of GPT-3 increases, the fears associated with it also grow. Consider a case where GPT-3 powers a chatbot that learns from a patient's symptoms and recommends biased prescriptive or preventive care. Ever thought about the consequences?

The cause for worry is real.
