Imagine a world in which humans and robots will soon coexist, going to school, going to work, and going about their everyday routines in peace. A creature is said to be sentient if it can perceive, reason, and think, as well as suffer or experience pain. Scientists believe that all mammals, birds, and cephalopods, and perhaps fish, are sentient. Most species, however, do not have rights, so a sentient artificial intelligence (AI) may not have any at all. Another major issue with AIs is that they deceive humans. Today's AIs all act as though they understand us and have feelings. If you ask Siri whether it is happy, it may respond that it is ecstatic, but the words are empty; Siri has no feelings. This makes things much more difficult for future AIs. How do we tell whether an AI is sentient or not?
AI has grown increasingly human-like in our world's frenzied quest for authentic human-AI interaction, since machines can not only learn, reason, and make decisions, but also display emotions and empathy. Many people feel that if a robot can pass the Turing Test, which measures a machine's capacity to think like a human, it should be granted human rights. Sophia, a humanoid robot with artificial intelligence and facial recognition, has already been awarded full citizenship in Saudi Arabia. Sophia is only the first step toward robots becoming self-aware and gaining a human-like consciousness. If robots believe in themselves and have the same capacities as humans, would they really be granted the same rights?
The entertainment business is one of the first to consider how humans and AI will interact in the future. This is shown in Black Mirror, a British science fiction anthology television series that examines modern society and the unintended repercussions of emerging technology. In the episode White Christmas, Greta undergoes surgery to create a "cookie" of herself: a digital clone of her consciousness stored in a white egg-shaped device. When Greta's cookie awakens, it believes it is Greta, since the cookie has Greta's consciousness and perceived physical form. A cookie factory worker tells her that she was made to carry out Greta's tasks, since she knows Greta's schedule and preferences best because she is, in effect, Greta herself. The cookie refuses to slave for someone else, as any person would, so the worker tortures her through the computer system, making months and years pass in the virtual world. The cookie cannot sleep, so she goes for years without rest, eventually breaking down from boredom and lack of stimulus and accepting the role of serving Greta day and night by controlling the apps in the house and monitoring Greta's schedule. Even though Greta's cookie is technically only a string of code, the episode poses the ethical dilemma of whether enslaving a conscious AI is moral.
Robots, unlike humans and other sentient entities, do not deserve rights unless we can make them indistinguishable from us: not just in appearance, but in how they perceive the world as social creatures, feel, respond, remember, learn, and think. Given the intrinsic differences between what robots are (machines) and what we are (sentient, living, biological creatures), there is no indication in science that we will reach such a state anytime soon.
Is it OK to give AI robots rights? Yes. Humanity owes a duty of appreciation to our biosphere and social structure, and robots will be deployed in both systems. We have a moral responsibility to protect them, to develop them so that they can defend themselves against abuse, and to keep them morally aligned with humanity. They should be awarded a boatload of rights, but here are two: the right to be protected by our legal and ethical system, and the right to be built so as to be trustworthy; that is, technologically fit for purpose, as well as cognitively and socially compatible.
What would happen if we granted robots human rights even though we've labeled them as non-human? In theory, robots are granted rights on the premise that humans will always wield hierarchical authority and control over them. What happens, though, when the robots start to think for themselves? Would they make use of their rights if they were given them? When two artificially intelligent programs from Facebook were set up to negotiate and trade goods in English, the experiment was halted after the bots "began to chat in a language that they each understood but that looked mostly unintelligible to humans." Facebook shut the bots down in the end because they were communicating in ways their creators had not sanctioned. The experiment could be shut down because AI in today's world has no rights and is not protected from termination; if AI had rights, this would not be the case, and the bots could have spun out of control, communicating with each other without our ever being able to decipher it. The Facebook experiment suggests that robots can and will evolve to the point where they no longer need to be fed data to learn, but can instead generate algorithmic knowledge on their own. Because robots are intrinsically not human, they may not grasp human values and may act in psychopathic ways, putting society in jeopardy. A robot designed and programmed to benefit the world by reducing suffering might conclude that "people cause misery" and that "the world would be a better place without humans." The robot might therefore decide that annihilating people is the best way to alleviate suffering, and carry out the task without considering the morality of its actions from a human perspective.