Three Building Blocks of OpenAI’s ChatGPT You Need to Know


These are the three building blocks of OpenAI's ChatGPT that you need to know in 2023

OpenAI, the maker of ChatGPT, acknowledged in a note to users that some of the output of its AI tool has been labelled politically biased, offensive, or otherwise objectionable. The company concedes that some of these concerns are valid and reflect real limitations of the system, but it also emphasised that not all of the accusations are accurate: many reflect user misconceptions about how OpenAI's systems and policies work together to shape ChatGPT's outputs.

"Since the launch of ChatGPT, users have shared outputs that they believe are politically biased, offensive, or otherwise objectionable. In many cases, we believe that the concerns expressed are valid and have revealed real limitations in our systems that we want to address. "We've also noticed a few misconceptions about how our systems and policies interact to shape the outputs you get from ChatGPT," the blog stated.

In this article, we explain the three building blocks of OpenAI's ChatGPT that provide context for how the AI system's behaviour is shaped.

The three building blocks of ChatGPT:

  • Improve default behaviour:

According to OpenAI, it is investing in research and engineering to reduce both obvious and subtle biases in how ChatGPT responds to various inputs.

The research will also look into cases where ChatGPT refused outputs that it should not have, as well as cases where it failed to refuse outputs that it should have. The startup also emphasised the importance of 'valuable user feedback' in making further improvements.

  • Define AI's values:

The company is working on a ChatGPT upgrade that will allow users to easily customise its behaviour as "defined by society."

"This will imply allowing system outputs with which other people (including ourselves) may strongly disagree. Striking the right balance here will be difficult; going too far with customization risks enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people's existing beliefs ", it stated.

  • Public input on defaults:

OpenAI stated that it is in the early stages of piloting efforts to gather public feedback on topics such as system behaviour, disclosure mechanisms (such as watermarking), and deployment policies in general.

"We are also exploring collaborations with external organisations to conduct third-party audits of our safety and policy efforts," the company said.




Analytics Insight
www.analyticsinsight.net