Artificial intelligence (AI) has been advancing at a remarkable pace. It has become the key technology behind automatic translation, self-driving cars, voice and text analysis, image processing, and all kinds of recognition systems, and on specific tasks it can exceed the best human performance.
We are seeing the emergence of a new commercial industry with intense activity, tremendous potential, and vast financial investment. It can seem as though no area is beyond AI's reach: no operation it cannot automate, no problem it cannot help solve. But is this true?
Theoretical studies of computation show that some things are simply not computable. The mathematician and code breaker Alan Turing proved that some computations can never finish, while others, though possible in principle, would take years or even centuries.
For instance, we can easily compute a few moves ahead in a game of chess, but examining all the moves to the end of a typical 80-move chess game is completely impractical. Even the fastest supercomputer, running at over one hundred thousand trillion operations per second, would take more than a year to explore just a tiny portion of the chess space. This is known as the scaling-up problem.
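To make that scale concrete, here is a rough back-of-the-envelope sketch in Python. The figures it assumes, a branching factor of about 35 legal moves per position and an 80-ply game, are common textbook estimates rather than numbers from this article:

```python
# Back-of-the-envelope illustration of the chess "scaling-up" problem.
# Assumptions (not from the article): roughly 35 legal moves per position
# on average, and an 80-ply game standing in for a "typical 80-move game".

BRANCHING_FACTOR = 35     # assumed average legal moves per position
PLIES = 80                # assumed game length in half-moves
OPS_PER_SECOND = 1e17     # ~one hundred thousand trillion operations per second

positions = BRANCHING_FACTOR ** PLIES      # size of the full game tree
seconds = positions / OPS_PER_SECOND       # time to visit every position once
years = seconds / (365 * 24 * 3600)

print(f"positions to examine: ~10^{len(str(positions)) - 1}")
print(f"years needed at {OPS_PER_SECOND:.0e} ops/sec: ~{years:.1e}")
```

Under these assumptions the answer comes out at around 10^99 years, which is why even "more than a year" only covers a vanishingly small portion of the space.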
Early AI research often produced good results on problems with only a small number of combinations, such as noughts and crosses; these are known as toy problems. Today's chess programs, by contrast, can compete with the world's best human players by looking much further ahead than any human mind can manage. They do this by applying approximations, probability estimates, large neural networks and other machine-learning methods.
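As a rough illustration of how such programs sidestep the scaling-up problem, the sketch below shows generic depth-limited search with an approximate evaluation function. It is not the method of any particular engine, and the functions it expects (legal_moves, apply_move, evaluate) are placeholders a caller would have to supply:

```python
# Minimal sketch of depth-limited minimax search: instead of searching to the
# end of the game, the program stops after a fixed number of plies and uses an
# approximate evaluation of the position (real engines use far more elaborate
# heuristics or neural networks). All names here are illustrative placeholders.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Return the best achievable score when looking `depth` plies ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)      # approximation replaces exhaustive search
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           legal_moves, apply_move, evaluate) for m in moves)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       legal_moves, apply_move, evaluate) for m in moves)
```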
However, these are problems for computer science, not for artificial intelligence as such. A more serious issue becomes clear when we consider human-computer interaction. We broadly expect that future AI systems will communicate with and assist humans in a friendly, fully interactive and social manner.
So, why does AI require a physical body to connect with humans emotionally?
Of course, we already have primitive versions of such systems, but audio-command assistants and call-centre script processing only pretend to be conversations. What is needed is genuine social interaction, involving free-flowing conversation over the long term, during which an AI system remembers the person and their past conversations. AI will also have to understand intentions and beliefs, and the meaning of what people are saying.
In psychology, this requires a theory of mind: an understanding that the person you are interacting with has their own way of thinking and sees the world in roughly the same way you do. So when someone talks about their experiences, you can identify and appreciate what they describe and how it relates to your own experience, which is what gives it meaning.
We can also observe a person's actions and infer their intentions and preferences from gestures and signals. For example, when Jack says, "I think that David likes Zoe but thinks that Zoe finds him unsuitable," we know that Jack has a first-order model of his own thoughts, a second-order model of David's thoughts, and a third-order model of what David thinks Zoe thinks. We need to have had similar experiences of life to understand this fully.
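One way to picture these "orders" of belief is as a nested structure, where the content of a belief may itself be another agent's belief. The toy Python sketch below is purely illustrative and not drawn from the article:

```python
# Toy representation of nested orders of belief: each Belief records who holds
# it and what it is about, and the content may itself be another Belief.

from dataclasses import dataclass
from typing import Union

@dataclass
class Belief:
    holder: str                       # who holds the belief
    content: Union[str, "Belief"]     # a plain fact, or another agent's belief

    def order(self) -> int:
        """Depth of nesting: 1 for a belief about a fact, +1 per embedded belief."""
        return 1 if isinstance(self.content, str) else 1 + self.content.order()

# "Jack thinks that David likes Zoe" -- second order from Jack's point of view
jack_on_david = Belief("Jack", Belief("David", "likes Zoe"))

# "Jack thinks that David thinks that Zoe finds him unsuitable" -- third order
jack_on_david_on_zoe = Belief("Jack", Belief("David", Belief("Zoe", "finds David unsuitable")))

print(jack_on_david.order())          # 2
print(jack_on_david_on_zoe.order())   # 3
```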
Clearly, all this social interaction only makes sense to the parties involved if they have a "sense of self" and can maintain a similar model of the self of the other agent. To understand someone else, you first have to know yourself. An AI "self-model" would need a subjective perspective: a sense of how its body operates (for example, how its visual viewpoint depends on the physical location of its eyes), a detailed map of its own space, and a repertoire of well-understood skills and actions.
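As a purely hypothetical sketch, the ingredients just listed could be collected into something like the following structure; every field name here is an assumption made for illustration, not a description of any real system:

```python
# Hypothetical sketch of what a minimal robot "self-model" might record: a map
# of its own body, the pose of its "eyes" (which determines its visual
# viewpoint), and a repertoire of named, well-understood skills.

from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class SelfModel:
    # Position of each body part in the robot's own coordinate frame.
    body_map: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)
    # Pose of the cameras: the subjective visual viewpoint depends on this.
    eye_pose: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    # Named actions the robot knows how to perform.
    skills: Dict[str, Callable[..., None]] = field(default_factory=dict)

    def viewpoint(self) -> Tuple[float, float, float]:
        """The visual perspective follows the physical location of the eyes."""
        return self.eye_pose
```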
This indicates that a physical body is needed to ground the sense of self in concrete data and experience. When one agent observes another's action, it can be understood through the shared components of the experience. AI therefore needs to be realised in robots with physical bodies: a software box cannot have a subjective viewpoint in the physical world that humans inhabit. Our conversational systems must be not just embedded but embodied.
Research in developmental robotics is now exploring how robots can learn from scratch. The initial stages involve discovering the properties of passive objects and the "physics" of the robot's world. Disembodied AI has a fundamental limitation, but future research with robot bodies may eventually help create lasting, empathetic social interactions between AI and humans.