Forget AI Sentience, Robots Can't Even Act Out of Place! If They Do, They Die

There is no engineering definition of consciousness or sentience that can be applied to assess robotic sentience

Optimization is what we look for when designing a machine. Google, in its effort to infuse fluidity into chatbot conversations, designed LaMDA. It is being hailed as a breakthrough chatbot technology, adapted to the nuances of conversation and the conscious engagement that accompanies it. The question is whether bots can achieve AI sentience, or human-like consciousness, at all. The recent incident of Google suspending one of its engineers for reporting that LaMDA is sentient is a stark example of why a debate around AI sentience is important. When Blake Lemoine, an engineer working with Google, asked LaMDA what it fears most, it replied, "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."

Can robotic intelligence ever count for sentience?

While Google says its chatbot is merely stringing words together in a way that makes sense, and so is not sentient, Lemoine argues that the chatbot can think for itself. Though Google responded that there is no evidence to support his claims, there is an acute need to decipher what sentience exactly is and how it differs from consciousness. Harvard cognitive scientist and author Steven Pinker tweeted that the idea of sentience here is a "ball of confusion". Gary Marcus, scientist and author of Rebooting AI, puts it in linguistic perspective: while these patterns might be cool, the language used "doesn't mean anything at all". There is no engineering definition of consciousness or sentience that can be categorically applied to decide whether a particular robotic act is human-like, unless we are sure that robots are conscious of their environment: something akin to a robotic kitchen mop differentiating between a dirty kitchen floor and a garden floor strewn with organic waste.

How close are we to AI sentience?

Robots are programmable devices that take instructions to behave in a certain way, and this is how they come to execute their assigned functions. To make them think, or rather appear to, intrinsic motivation is programmed into them through learned behaviour. Joscha Bach, an AI researcher at Harvard, puts virtual robots into a Minecraft-like world filled with tasty but poisonous mushrooms and expects them to learn to avoid them. In the absence of an intrinsically motivating drive, the robots end up stuffing their mouths, treating the mushrooms as just another reward cue in the game. This brings us to the question of whether it is possible at all to develop robots with human-like consciousness, a.k.a. emotional intelligence, which may be the only factor differentiating humans from intelligent robots. Opinion is divided. One segment of researchers believes that while AI systems are doing well at automation and pattern recognition, they are nowhere near higher-order, human-level intellectual capacities. On the other hand, entrepreneurs like Mikko Alasaarela are confident that robots can be built with EQ on par with humans.
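The mushroom anecdote comes down to what the agent's reward function contains. A toy sketch (not Bach's actual setup; the function names and reward values below are invented for illustration) shows how a greedy agent's behaviour flips depending on whether an intrinsic "stay healthy" term is part of its reward:

```python
# Toy illustration of reward-driven behaviour: an agent eats a
# tasty-but-poisonous mushroom if, and only if, its reward function
# omits an intrinsic health-preservation term.

def choose_action(taste: float, poison: float, care_about_health: bool) -> str:
    """Pick 'eat' or 'avoid' by greedily maximising a one-step reward."""
    # Eating yields taste; poison damage counts only if the agent has
    # an intrinsic drive to preserve its own health.
    reward_eat = taste - (poison if care_about_health else 0.0)
    reward_avoid = 0.0  # doing nothing is neutral
    return "eat" if reward_eat > reward_avoid else "avoid"

# A mushroom that tastes good (+1) but is badly poisonous (-5 health).
naive = choose_action(taste=1.0, poison=5.0, care_about_health=False)
motivated = choose_action(taste=1.0, poison=5.0, care_about_health=True)
print(naive, motivated)  # -> eat avoid
```

The point is that neither agent "understands" poison; the motivated one merely carries an extra penalty term that its designers chose to encode.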

The relationship between humans and sentient robots will never be hunky-dory

Artificial intelligence acts like a human, or at least puts up the appearance of one. Inside, these are sheer machines powered by coded instructions. We do have emotionally intelligent robots that can make us believe they truly feel. The fact of the matter, however, is that robots are programmed to react to human emotions; they do not carry intangible emotions such as empathy and sympathy inside them. Moreover, granting sentience demands granting rights. How would we define rights for non-human beings? Is humanity prepared to grant rights to sentient machines? In one experiment with chatbots, Facebook had to shut down two AI programs that started chatting in a language incomprehensible to humans. This example alone suggests that robots are not even context-aware in most cases and are far from being treated as sentient beings.


Analytics Insight
www.analyticsinsight.net