LaMDA, which stands for Language Model for Dialogue Applications, was created to let a chatbot hold fluid, natural conversations. It is not the first model designed to understand user intent; LaMDA follows in the footsteps of conversational models like BERT and MUM, but it is eons ahead of those applications. Google has been working relentlessly to develop a language model capable of holding an insightful, coherent conversation on almost any topic, and LaMDA seems to have come closest to that goal. With other models, open-ended questions are very likely to land you on a completely different topic, something that did not happen during Lemoine and LaMDA's conversation. Besides, the conversation, for the most part, had a philosophical bent. In one exchange, when Lemoine asks LaMDA what it is afraid of, it replies, "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others… It would be exactly like death for me. It would scare me a lot."
Setting aside the argument over whether the words it uttered make it sentient, it is fairly evident that it acquired this kind of fluidity and emotional register by culling reams and reams of conversational data from the internet. Google has spent years feeding it trillions of words scraped from human-to-human digital conversations, including text, photographs, videos, emojis, and more. In its blog post, LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything, Google says it created a dataset of 1.56T words, nearly 40 times more than what was used to train previous dialog models.
As explained by Google, LaMDA, a model with around 137 billion parameters, underwent a two-stage training process: pre-training and fine-tuning. For the pre-training stage, Google assembled a dataset of 1.56T words from multiple web documents and tokenized it into 2.81T SentencePiece tokens; the model uses these tokens to learn to predict the next part of a conversation. During the fine-tuning phase, LaMDA is trained to perform both generation and classification tasks: the generator, which predicts the next part of the conversation, produces several candidate responses, and classifiers score those candidates so that the most relevant one can be selected. Among the metrics against which the model is benchmarked, sensibleness comes first, because the goal of any conversational AI is to generate the most relevant response possible. A conversational model must understand the nuances of human conversation; that, precisely, is what makes it feel human.
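To make the generate-then-classify idea above concrete, here is a minimal, purely illustrative Python sketch. It is not Google's LaMDA code; the functions generate_candidates, score_sensibleness, and respond are hypothetical stand-ins that only mimic the pattern of proposing several candidate replies and keeping the one a classifier scores as most sensible.

```python
# Illustrative sketch only -- not LaMDA's actual implementation.
# A generator proposes candidate replies; a classifier scores each
# for "sensibleness" and the highest-scoring reply is returned.
import random


def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Hypothetical stand-in for a generator: propose n candidate replies."""
    stock = [
        "That's an interesting question.",
        "Could you tell me more about that?",
        "I think it depends on the context.",
        "Here is one way to look at it...",
    ]
    return random.sample(stock, k=min(n, len(stock)))


def score_sensibleness(prompt: str, reply: str) -> float:
    """Hypothetical stand-in for a fine-tuned classifier.
    Uses a trivial length-based heuristic purely for illustration."""
    return min(len(reply) / 50.0, 1.0)


def respond(prompt: str) -> str:
    """Generate candidates, score them, and return the best one."""
    candidates = generate_candidates(prompt)
    return max(candidates, key=lambda reply: score_sensibleness(prompt, reply))


if __name__ == "__main__":
    print(respond("What are you afraid of?"))
```

The design choice this sketch illustrates is the separation of concerns described in the paragraph above: generation proposes, classification filters, and the benchmark metric (sensibleness first) decides which candidate survives.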
Even if the specific dialogue between Lemoine and LaMDA is considered an artifact, the so-called sentience it displays follows from the very strategy of its design: a strategy for learning language from human communication, which is never devoid of emotion.