Artificial Intelligence

Like Meta, Can OpenAI Open-Source GPT-3's Internal Mechanism?

S Akash

Meta built OPT to match GPT-3 both in its accuracy on language tasks and in its toxicity

In 2020, OpenAI published a groundbreaking paper titled 'Language Models Are Few-Shot Learners'. In it, they presented GPT-3, a language model that was, at the time, the largest neural network ever created, with 175 billion parameters, an order of magnitude larger than the largest previous language models. GPT-3 was trained on almost all available text data from the Internet and showed amazing performance on various natural language processing (NLP) tasks, including translation, question answering, and cloze tasks, even surpassing state-of-the-art models.

Such a breakthrough could be useful to companies because it has great potential for automating tasks. GPT-3 can respond to any text that a person types into the computer with a new piece of text that is appropriate to the context. Type a full English sentence into a search box, for example, and you're likely to get back a relevant response in full sentences. That means GPT-3 can conceivably amplify human effort in a wide variety of situations, from question answering for customer service to due-diligence document search to report generation.
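To make this concrete, here is a minimal sketch of that prompt-and-respond pattern, assuming the openai Python package in its pre-1.0 form (the interface available when GPT-3 launched as a paid service) and a valid API key; the engine name, prompt, and sampling parameters are illustrative, not prescribed by OpenAI:

```python
# A minimal sketch of prompting GPT-3, assuming the pre-1.0 openai
# Python package and a valid API key. The engine name, prompt, and
# sampling parameters here are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: set your own key

response = openai.Completion.create(
    engine="davinci",            # the base GPT-3 model
    prompt="What is your return policy for items bought online?",
    max_tokens=100,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```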

GPT-3 Use Cases

As a result of its powerful text generation capabilities, GPT-3 can be used in a wide range of ways. It has been used to generate creative writing such as blog posts, advertising copy, and even poetry that mimics the style of Shakespeare, Edgar Allan Poe, and other famous authors.

Because programming code is just a form of text, GPT-3 can produce workable code from only a few snippets of example code, often code that runs without error. GPT-3 has also been used to powerful effect to mock up websites: using just a bit of suggested text, one developer combined the UI prototyping tool Figma with GPT-3 to create websites by describing them in a sentence or two, and GPT-3 has even been used to clone websites by providing a URL as the prompt. Developers are using GPT-3 to generate code snippets, regular expressions, plots and charts from text descriptions, Excel functions, and other development aids.
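As a sketch of how a few-shot prompt for one of these tasks might look, the snippet below asks GPT-3 to continue a pattern of description-to-regex pairs; the example pairs and parameters are hypothetical, not drawn from the article:

```python
# A hypothetical few-shot prompt for regex generation with GPT-3,
# again using the pre-1.0 openai package. The example pairs teach
# the model the pattern it should continue.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Description: match a US ZIP code\n"
    "Regex: \\d{5}(-\\d{4})?\n\n"
    "Description: match an email address\n"
    "Regex: [\\w.+-]+@[\\w-]+\\.[\\w.]+\n\n"
    "Description: match a 24-hour time such as 23:59\n"
    "Regex:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=30,
    temperature=0,   # deterministic output suits code generation
    stop="\n",       # stop after the single generated regex line
)
print(response.choices[0].text.strip())
```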

GPT-3 is also being used in the gaming world to create realistic chat dialog, quizzes, images, and other graphics based on text suggestions. GPT-3 can generate memes, recipes, and comic strips, as well.

Meta's Challenge to OpenAI

Meta's AI lab has created a massive new language model that shares both the remarkable abilities and the harmful flaws of OpenAI's pioneering neural network GPT-3. And in an unprecedented move for Big Tech, it is giving the model away to researchers, together with details about how it was built and trained.

Meta's move marks the first time that a fully trained large language model has been made available to any researcher who wants to study it. The news has been welcomed by many concerned about the way this powerful technology is being built by small teams behind closed doors.

Large language models, powerful programs that can generate paragraphs of text and mimic human conversation, have become one of the hottest trends in AI in the last couple of years. But they have deep flaws, parroting misinformation, prejudice, and toxic language.

In theory, putting more people to work on the problem should help. Yet because language models require vast amounts of data and computing power to train, they have so far remained projects for rich tech firms. The wider research community, including ethicists and social scientists concerned about their misuse, has had to watch from the sidelines.

Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook that documents the training process. The logbook contains daily updates from members of the team about the training data – how it was added to the model and when, and what worked and what didn't. In more than 100 pages of notes, the researchers logged every bug, crash, and reboot in a three-month training process that ran nonstop from October 2021 to January 2022.
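The smaller released OPT checkpoints can be loaded with standard open-source tooling; the sketch below assumes the Hugging Face transformers library and PyTorch, and uses the publicly hosted facebook/opt-1.3b checkpoint for illustration (the full 175-billion-parameter model is available to researchers on request):

```python
# A minimal sketch: generating text with a released OPT checkpoint via
# Hugging Face transformers (assumes `pip install transformers torch`).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```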

With 175 billion parameters (the values in a neural network that get tweaked during training), OPT is the same size as GPT-3. This was by design, says Meta AI's Joelle Pineau. The team built OPT to match GPT-3 both in its accuracy on language tasks and in its toxicity. OpenAI has made GPT-3 available as a paid service but has not shared the model itself or its code. In other words, this might be Meta's challenge to OpenAI. So, now the question is: can OpenAI open-source GPT-3's internal mechanism?
