‘AI Sentience’ is a Publicity Stunt! Criticism against LaMDA Intensifies

Google AI engineer Blake Lemoine had been testing whether LaMDA used harmful speech

Artificial Intelligence (AI) has long been framed as the key to imitating the human brain, and even to achieving sentience. Blake Lemoine, a developer on Google's Responsible AI team, had signed up to test whether LaMDA, a large language model (LLM), used discriminatory or hate speech. After he claimed the model was sentient, Google denied his claims and put him on leave for publishing confidential information.

LaMDA is Google's Language Model for Dialogue Applications: a chatbot built on an advanced large language model that ingests trillions of words from the internet to inform its conversation. LaMDA has skills similar to the BERT and GPT-3 language models and is built on Transformer, a neural network architecture that Google Research invented in 2017. According to Google, LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings: it cannot feel, it does not have thoughts, and it does not have a sense of self.
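LaMDA itself has not been publicly released, but the Transformer-based generation loop described above can be illustrated with an open model. The following is a minimal sketch assuming the Hugging Face transformers library, with GPT-2 standing in for LaMDA; the model name, prompt, and sampling settings are illustrative assumptions, not Google's actual setup.

```python
# Minimal sketch of Transformer-based text generation.
# Assumption: GPT-2 stands in for LaMDA, which is not publicly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What are your thoughts on personhood?"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

# The model extends the prompt one token at a time; fluent output reflects
# statistical patterns in the training text, not feelings or a sense of self.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,      # sample from the distribution over next tokens
    top_p=0.9,           # nucleus sampling keeps only the most probable mass
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

However convincing the printed reply looks, the program is only extending the prompt with statistically likely tokens.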

LaMDA: The hype about Google AI being sentient

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient, and his belief that AI models may not be far off from achieving consciousness has grown bolder. He said that people have a right to shape the technology that might significantly affect their lives. After querying LaMDA on religion and observing the chatbot talk about its rights and personhood, he became convinced that it was sentient.

According to Google, LaMDA is trained to pick up the nuances of language that differentiate open-ended conversation from other forms, making its responses more sensible. Google says it investigated Lemoine's claims and found them to be baseless. It also says that Lemoine was placed on paid administrative leave because he leaked confidential company information and engaged in a series of provocative actions.

The excitement around LLMs can be misleading. While the models have become adept at generating human-like text, enthusiasm about their "intelligence" can mask their shortcomings. LaMDA is a software program designed to produce sentences in response to sentence prompts, and Lemoine was tasked with checking it for bias and inherent discrimination, not for sentience. These models are neither truly artificial nor intelligent: they are trained on huge amounts of dialogue text available on the internet and produce different sorts of responses based on the relationship to what one says.
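That phrase, "the relationship to what one says", has a concrete meaning: at every step the model simply ranks possible next tokens by probability given the prompt. Below is a minimal sketch of inspecting that ranking, again assuming the Hugging Face transformers library with GPT-2 as a stand-in; the prompt is a hypothetical example.

```python
# Minimal sketch: a language model's "response" is a probability ranking
# over next tokens, learned from training text. GPT-2 stands in for LaMDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I feel"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The distribution over the next token is all the model "knows":
# a ranking of likely continuations, not an inner emotional state.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p.item():.3f}")
```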

The major challenges with language-based models and chatbots relate to the propagation of prejudices and stereotypes built into them. Another concern is transparency: trade-secrecy laws prevent researchers and auditors from looking into AI systems to check for misuse. It is therefore essential for governments to come up with policies and regulations for the responsible use of AI. Meanwhile, focusing on the prospect of sentience makes us overlook real-life consequences that are already unfolding.
