Common sense is what differentiates humans from machines. For years, scientists and researchers have been looking for ways to bridge the gap and make Artificial Intelligence (AI) more capable of interacting with the human world. However, the process is more complicated than it sounds.
Artificial intelligence researchers have so far been unable to give intelligent agents the common-sense knowledge they need to reason about the world. Common sense is widely seen as what would pull artificial intelligence closer to humankind, because it is what lets intelligent agents interact naturally with the world. Two major approaches, symbolic logic and deep learning, have fallen short of the goal, yet each has mapped out part of how common sense might work in intelligent agents. A newer project called COMET tries to bring the two approaches together. The work is still in progress, but it is expected to make a real difference in how AI interacts with the human world.
Common sense is the background knowledge about the physical and social world that humans absorb over their lives. We use this knowledge intuitively, without noticing that anything in our brains is signalling what to do and what not to do. It includes an intuitive grasp of physics, such as causality and hot and cold, as well as expectations about how people behave. Leora Morgenstern describes common sense as what you learn when you are two or four years old, knowledge that never gets written down anywhere. Ultimately, that is the main reason it is so hard to equip intelligent agents with common-sense knowledge.
For example, a human driver uses common sense to know that the snowman standing at the corner of the street will not step into the road in front of the vehicle. An automated vehicle has no such sense; it cannot simply assume that a snowman is incapable of moving.
Early attempts to program common sense into a computer involved hand-coding rules, an approach now referred to as Good Old-Fashioned Artificial Intelligence (GOFAI). Although the initiative never succeeded in producing common sense, it did yield some successful rules-based expert systems.
Another attempt began in 1984. Cyc, a project originally proposed to capture common-sense knowledge in a knowledge base together with the relationships among its facts, opened up a new line of research. Today, however, it appears limited to a handful of private-sector applications.
One basic obstacle to common sense in artificial intelligence is that human language is full of ambiguity a machine can struggle to resolve: words cluster together in idiomatic ways, and their meaning is often fuzzy, which complicates the process. On top of that, machine common sense would require millions of implicit rules. For example, if someone goes out in the rain, they will get wet unless they are under a cover; yet even that is not enough to conclude they will stay dry without also considering the size of the cover, the direction of the rain, and how heavy it is.
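A toy sketch, purely illustrative, of how quickly such hand-written rules accumulate exceptions; the predicate and parameter names here are invented for the example, not from any real system.

```python
# Illustrative only: a hand-coded "will they get wet?" rule soon needs
# exception after exception, each of which invites further exceptions.
def will_get_wet(in_rain, under_cover, cover_width_m, rain_angle_deg, rain_heavy):
    if not in_rain:
        return False
    if not under_cover:
        return True
    # Exception: a cover only helps if it is wide enough...
    if cover_width_m < 0.5:
        return True
    # ...and only if wind is not driving the rain in sideways...
    if rain_angle_deg > 45 and rain_heavy:
        return True
    # ...and so on, for every circumstance a human handles without thinking.
    return False
```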
Semantic networks are one way to tackle the fuzziness problem. ConceptNet, an example of such a network, is built from crowd-sourced knowledge: people enter whatever they consider to be common-sense facts. The problem is that the information needed to interpret the network is not in the network itself. It can record that eating and swallowing are related, for instance, but not how they differ, and it can list a cake as both a snack and a dessert without saying which it counts as in a given situation.
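A minimal sketch of what a ConceptNet-style semantic network amounts to: labelled edges between concepts. The edges below are invented examples in that general style, not real ConceptNet data.

```python
# A semantic network is essentially labelled edges between concepts.
# These edges are illustrative stand-ins, not actual ConceptNet entries.
edges = {
    ("cake", "IsA"): ["dessert", "snack"],
    ("eat", "RelatedTo"): ["swallow", "chew"],
    ("rain", "CapableOf"): ["make you wet"],
}

def related(concept, relation):
    """Look up the neighbours of a concept along one relation."""
    return edges.get((concept, relation), [])

print(related("cake", "IsA"))        # ['dessert', 'snack']
print(related("eat", "RelatedTo"))   # ['swallow', 'chew']
# The network says eating and swallowing are related, but nothing in it
# explains how they differ, or when a cake counts as a snack vs. a dessert.
```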
Deep learning is an artificial intelligence technique that imitates the way the human brain processes data and forms patterns for use in decision making. It is a subset of machine learning whose networks can learn, without supervision, from unstructured or unlabeled data. Neural networks have achieved more success than either of the approaches above, yet they remain far from delivering common sense.
AlphaGo: AlphaGo combines a state-of-the-art tree search with two deep neural networks, each with millions of connections. The policy network narrows the search by predicting the next move, while the value network reduces the depth of the search tree by estimating the winner from each position instead of searching to the end of the game. AlphaGo uses Monte Carlo tree search to simulate the remainder of the game, much as a human would play it out in their imagination, and chooses the move that leads to the most successful simulations.
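A toy sketch of the underlying idea only, and a drastic simplification of AlphaGo's actual search: play random games from each candidate move and keep the move whose simulations win most often. The `game` interface (legal_moves, play, is_over, winner) is hypothetical.

```python
import random

# Toy Monte-Carlo move selection, not AlphaGo itself: no policy or value
# networks, just random playouts scored by how often they end in a win.
def choose_move(game, player, simulations_per_move=200):
    best_move, best_rate = None, -1.0
    for move in game.legal_moves():
        wins = 0
        for _ in range(simulations_per_move):
            sim = game.play(move)          # hypothetical: returns a copy with the move applied
            while not sim.is_over():       # random playout to the end of the game
                sim = sim.play(random.choice(sim.legal_moves()))
            if sim.winner() == player:
                wins += 1
        rate = wins / simulations_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```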
GPT-3: Generative Pre-trained Transformer 3 (GPT-3) is the largest language model trained to date. It uses deep learning to analyze language and cope with its ambiguity, generating text in response to input text, which lets the model answer questions or write an essay from a prompt.
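GPT-3 itself is accessed through OpenAI's hosted service, but the same prompt-in, text-out pattern can be sketched locally with the smaller, open GPT-2 model via the Hugging Face transformers library. The model choice and prompt below are illustrative stand-ins, not the article's setup.

```python
from transformers import pipeline

# GPT-3 is served through OpenAI's API; this sketch uses the open GPT-2
# model as a stand-in to show the same generate-from-a-prompt pattern.
generator = pipeline("text-generation", model="gpt2")

prompt = "Common sense tells us that if you go out in the rain without cover,"
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```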
BERT: Bidirectional Encoder Representations from Transformers (BERT) is a neural network that tries to understand written language. It is a Natural Language Processing (NLP) algorithm that uses a neural net to create pre-trained, general-purpose models, which can then be fine-tuned for specific NLP tasks. For example, BERT works out what 'bank' means in 'I sat by the bank of the River Thames' from the context before and after the word.
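A small sketch of that idea, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint: the vector BERT assigns to "bank" depends on the surrounding sentence, so the river sense and the money sense come out less similar than two river uses.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence):
    """Return BERT's contextual vector for the token 'bank' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bank")]

river = bank_vector("I sat by the bank of the River Thames.")
money = bank_vector("I deposited money at the bank.")
same  = bank_vector("We walked along the bank of the river.")

cos = torch.nn.functional.cosine_similarity
print(cos(river, money, dim=0))  # lower similarity: different senses of 'bank'
print(cos(river, same, dim=0))   # higher similarity: same river sense
```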
COMET (Commonsense Transformers) aims to build common sense directly into AI models. The project attempts to combine the symbolic-reasoning approach with a neural network language model; the key idea is to introduce common-sense knowledge when fine-tuning the model. Like other deep learning models, it tries to generate plausible responses rather than make deductions from an encyclopedic knowledge base.
Yejin Choi of the Allen Institute began working on COMET in 2019, believing that neural networks could make progress where the symbolic approach had failed. The objective was to give a language model additional training from a common-sense knowledge base, so that it could generate common-sense inferences much as a generative network learns to generate text. To create COMET, the team fine-tuned a neural language model on common-sense knowledge from a base called ATOMIC. Choi believes that neural networks will eventually learn from knowledge bases without human supervision, which would be a breakthrough for common sense in AI.
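A minimal sketch of the general fine-tuning recipe described above: knowledge triples in the style of ATOMIC are flattened into text a language model can train on. The triples and the text format below are simplified stand-ins, not the COMET project's exact data or encoding.

```python
# Illustrative ATOMIC-style triples: (event, relation, inference).
# Simplified examples only; not the actual dataset or COMET's exact format.
triples = [
    ("PersonX goes to the store", "xIntent", "to buy food"),
    ("PersonX goes to the store", "xNeed",   "to have money"),
    ("PersonX drops a glass",     "xEffect", "the glass breaks"),
]

def to_training_text(event, relation, inference):
    """Flatten a knowledge triple into one line a language model can be
    fine-tuned on, so it learns to complete the inference from the event."""
    return f"{event} {relation} [GEN] {inference}"

for t in triples:
    print(to_training_text(*t))
# After fine-tuning on many such lines, the model is prompted with
# "<new event> <relation> [GEN]" and asked to generate a plausible inference.
```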