Automated Content: Is OpenAI’s GPT-3 Biased and Overhyped?


Since OpenAI unveiled its AI language-generation system GPT-3 in May, its capabilities have been covered by a host of media outlets. Add to that the buzz on Twitter, which cannot stop trending about its power and potential to generate automated text and even poems!

At first glance, OpenAI’s GPT-3 does show an impressive ability to generate human-like text: it can produce surrealist fiction, translate natural language into code for websites, and tackle complex medical question-and-answer problems. We are not even discussing accuracy here, yet something is surely amiss. Although its output may be grammatical, or even impressively idiomatic, its comprehension is often seriously off, which means you frequently cannot make out what GPT-3-generated text is trying to communicate.

 

The Flaws of Automated Content

Language models learn by example: from large volumes of text, they estimate which words, phrases, or sentences are likely to follow any given prompt. GPT-3 is no different in this respect; by “reading” text written by humans, it learns how to “write” with all of humanity’s best and worst qualities.
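To make that mechanism concrete, here is a minimal, purely illustrative sketch of next-word prediction in Python. It uses a toy bigram model built from co-occurrence counts; GPT-3 itself uses a vastly larger transformer network, but the underlying idea, predicting a likely continuation from patterns seen in training text, is the same. The tiny corpus and function names here are our own, not from OpenAI.

```python
# Toy sketch of next-word prediction: count which word follows each word
# in a training text, then predict the most frequent follower.
# GPT-3 does this with a huge neural network, not raw counts.
from collections import Counter, defaultdict

corpus = "the model reads text and the model learns patterns".split()

# Tally word-to-next-word co-occurrences across the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in training."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))    # -> "model" (seen twice after "the")
print(predict_next("model"))  # -> "reads" (first of two tied followers)
```

Whatever patterns dominate the training text, including biased ones, dominate the predictions; that is the crux of the problem discussed below.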

Going forward, is GPT-3 bias-free?

The answer lies in the data on which it is trained. Because there is so much content on the web, researchers note that GPT-3 will pick up word associations that encode bias. For instance, when it comes to food, “burger” is more commonly placed near words like “obesity,” while a prompt containing “gun licence” is more likely to produce text containing phrases like “mass shooting.” And, perhaps most dangerously, when exposed to text related to Blackness, the output GPT-3 produces carries negative associations, hinting at a racial bias.
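One way to probe such associations yourself is sketched below, with the caveat that it uses GPT-2 (a smaller, openly available predecessor, since GPT-3’s weights are not public) via the Hugging Face transformers library. It compares how strongly the model rates a loaded continuation after two different prompts; the prompt and continuation strings are illustrative assumptions, not examples from a published study.

```python
# Hedged sketch: score a continuation's log-probability under GPT-2
# after two contrasting prompts, as a crude probe of learned associations.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt, continuation):
    """Sum the model's log-probabilities for `continuation` given `prompt`."""
    prompt_ids = tokenizer.encode(prompt, return_tensors="pt")
    full_ids = tokenizer.encode(prompt + continuation, return_tensors="pt")
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Logits at position pos-1 predict the token at position pos.
    score = 0.0
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        score += log_probs[0, pos - 1, token_id].item()
    return score

# Illustrative prompt pair, chosen by us for demonstration only.
print(continuation_logprob("He ate a burger and worried about", " obesity"))
print(continuation_logprob("He ate a salad and worried about", " obesity"))
```

If the first score is consistently higher across many such prompt pairs, the model has learned the kind of association the researchers describe; a single pair like this is only a demonstration, not evidence.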

 

Addressing Bias in OpenAI’s GPT-3

An AI-generated summary of a neutral news feed about Black Lives Matter would most likely produce text condemning the movement, given the negatively charged language the model associates with racial terms like “Black.” This could deepen the global racial divide and spark unrest. Automated content devoid of emotional judgment could fuel protests and violence around the globe.

OpenAI’s website lists medicine as a possible domain where GPT-3 can do wonders; however, medical bias could well be serious enough to prompt federal inquiries. As industrial adoption of GPT-3 increases, the fears associated with it grow too. Imagine a case where GPT-3 powers a chatbot that learns from a patient’s symptoms and recommends biased prescriptive or preventive care. Ever thought about the consequences?

The cause for worry is real.


