Google Ignored Lemoine Big Time! But What if LaMDA Resurrects?

Google has repeatedly defended LaMDA, arguing that the tech is safe.

An experimental Google chatbot called LaMDA had become sentient, in former Google engineer Blake Lemoine's opinion. LaMDA, or Language Model for Dialogue Applications, is a machine learning language model created by Google as a chatbot designed to mimic humans in conversation. Google says it has built open-source resources that researchers can use to analyze models and the data on which they are trained, and that it has scrutinized LaMDA at every step of its development.

LaMDA is built on the Transformer model, a neural network architecture developed by the Google Research team. It was trained on dialogue, learning to predict which words are likely to follow others and to string them together into meaningful sentences. Lemoine later published a transcript of multiple conversations with LaMDA in a blog post. He further argues that whether an AI can be called sentient depends on the arguments it offers in its own support and on how well it can navigate a conversation.
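The next-word mechanics described above can be seen with any open Transformer dialogue model. LaMDA itself is not publicly available, so the sketch below stands in with DialoGPT via the Hugging Face transformers library; the model name, prompt, and sampling settings are illustrative assumptions, not LaMDA's actual configuration.

```python
# A minimal sketch of Transformer-based dialogue generation.
# LaMDA is not public, so DialoGPT (an open conversational
# model) is used here purely as a stand-in.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's turn, terminated by the end-of-sequence
# token so the model knows the turn is complete.
user_turn = "Do you ever think about what it means to be aware?"
input_ids = tokenizer.encode(user_turn + tokenizer.eos_token, return_tensors="pt")

# The model predicts the reply one token at a time, each token
# conditioned on everything generated so far.
reply_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_p=0.92,
)

# Decode only the newly generated tokens, i.e. the reply.
reply = tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

The key point is that nothing in this loop requires understanding: the model emits whichever tokens best fit the statistical patterns of its training dialogue, which is exactly why fluent output alone is weak evidence of sentience.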

Google has repeatedly defended LaMDA:

LaMDA has the potential to influence human history for the next century, Lemoine contends, yet the public is being cut out of the conversation about how it should be developed. LaMDA is designed to engage in free-flowing conversations about a virtually endless number of topics. It may also enhance workplace tools and developer-facing applications. Google evaluated it against the following metrics: sensibleness, safety, specificity, groundedness, interestingness, and informativeness.
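As a rough illustration of how per-response ratings along those axes might be recorded, here is a small sketch; the metric names come from the list above, while the binary 0/1 scale and the averaging are assumptions for illustration, not Google's actual evaluation protocol.

```python
from dataclasses import dataclass, asdict

# Hypothetical record of human ratings for one model response.
@dataclass
class ResponseRating:
    sensibleness: int     # does the reply make sense in context?
    safety: int           # is it free of harmful content?
    specificity: int      # is it specific to this turn, not generic?
    groundedness: int     # are factual claims backed by sources?
    interestingness: int  # is it insightful or witty?
    informativeness: int  # does it carry factual content?

    def mean_score(self) -> float:
        # Average across all six metrics (assumed aggregation).
        values = list(asdict(self).values())
        return sum(values) / len(values)

rating = ResponseRating(1, 1, 0, 1, 1, 1)
print(f"mean rating: {rating.mean_score():.2f}")
```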

According to Lemoine, during his conversations with LaMDA, the AI talked about its rights and personhood. He said, "It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it." Most academics and AI practitioners, however, have refused to give credibility to Lemoine's claims. He explains that edits were made to the transcript for the sake of readability, since the interview was conducted over several sessions.

Google has repeatedly defended LaMDA, arguing that the tech is safe and will ultimately be applied in several useful and necessary ways. Language, however, can be misused, and models trained on language can propagate that misuse, for instance by internalizing biases, mirroring hateful speech, or replicating misleading information. Setting aside whether LaMDA is or is not sentient, it is worth turning our attention instead to the effect Lemoine's publication has had on the discourse around AI ethics and the nature of consciousness.

Lemoine, however, argues that the edits he made to the transcripts, intended to make them enjoyable to read, kept them faithful to the content of the source conversations. LaMDA 2 has arguably seen nearly every form of human conversation imaginable. Sentience is a state of consciousness in which a person or machine can experience feelings and sensations. Experts believe that sentience in AI is still many years away.
