Demystifying the Pro-social Behaviour of Robots Through Abuse

Prosocial behaviour strengthens human-robot interaction by prompting human participants to intervene when bystander robots respond to abuse with sadness.

Just like abuse of humans, abuse of robots is nothing new. Ever since their invention, robots have been subjected to mistreatment, and the many posts and videos circulating on social media testify to robots being abused by children and adults alike. In fact, one distinctive parody video of robot abuse involving Boston Dynamics, the SoftBank-owned robotics firm, prompted a flood of criticism from robot lovers on social media.

Human-robot interaction is the foundation of effective robotics. Robots have been designed to hear, touch, and provide emotional support to humans, especially during a crisis. But can the same be said of humans, especially when it comes to robot abuse? And can robots prevent such abuse from happening? These questions illustrate the complexity of human-robot interaction. That is why it is important to understand how robots behave prosocially towards one another during abuse, and how their reactions prompt human beings to intervene.

Prosocial Behaviour during Robot Abuse

An experimental study titled "Prompting Prosocial Human Intervention in Response to Robot Mistreatment", by the Interactive Machines Group at Yale University, investigated whether emotional reactions from bystander robots could motivate human participants to intervene when a robot is abused by a human.

Prosocial behaviour refers to actions taken at a personal cost to benefit others, fostering unselfishness and collaboration among those involved.

The study involved a confederate, a human participant, and three Cozmo robots painted yellow, green, and blue, across 15 groups. Cozmo is a programmable toy robot that can express emotions, utter non-linguistic phrases, move through confined spaces, and sense changes in its pose. Each group was assigned a collaborative block-building task.

In each group, one robot made mistakes repeatedly and was consequently subjected to physical and verbal abuse by the confederate. The set-up also helped in understanding the dynamics of human bullying, including the effects of group composition and the role of bystanders.

The experiment had two conditions: No response, in which the bystander robots did not react to the abuse endured by the other robot, and Sad response, in which the bystander robots turned towards the abused robot and expressed sadness using a preset animation that was highly anthropomorphic, with audio and facial reactions.

It was hypothesized that the sad response would increase the perception of robot mistreatment, induce more empathy for the abused robot, and lead to more prosocial intervention from participants.

The abuse was detected automatically with the help of an installed camera: whenever a significant change in the abused robot's head position was registered, the yellow robot reacted by displaying a sad face and shutting down for 10 seconds.
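To make the trigger concrete, below is a minimal Python sketch of how such a camera-based reaction loop might look. It assumes hypothetical helpers get_head_position() and play_sad_animation() standing in for the study's pose tracker and Cozmo's preset animation, and the displacement threshold is an invented value; none of these names or numbers come from the paper.

```python
import math
import time

# Illustrative sketch only -- the study's code is not published. The helper
# names and the threshold below are assumptions, not details from the paper.

DISPLACEMENT_THRESHOLD_MM = 20.0  # assumed: how far the head must jump to count as abuse
SHUTDOWN_SECONDS = 10             # matches the 10-second shutdown described above


def get_head_position():
    """Return the abused robot's head position (x, y, z in mm) from the camera tracker."""
    raise NotImplementedError("stand-in for the camera-based pose tracker")


def play_sad_animation():
    """Play the preset anthropomorphic sad reaction (facial expression plus audio)."""
    raise NotImplementedError("stand-in for the robot's animation API")


def monitor_for_abuse():
    """Poll the head pose and trigger the sad reaction when it jumps abruptly."""
    baseline = get_head_position()
    while True:
        current = get_head_position()
        if math.dist(current, baseline) > DISPLACEMENT_THRESHOLD_MM:
            play_sad_animation()            # sad face and audio
            time.sleep(SHUTDOWN_SECONDS)    # "shut down" for 10 seconds
            baseline = get_head_position()  # re-baseline after recovering
        else:
            baseline = current              # track slow, non-abusive drift
        time.sleep(0.1)                     # poll at roughly 10 Hz
```

The key design choice in this sketch is comparing each new reading against a slowly updated baseline, so that an abrupt jump in head position, rather than gradual motion, is what triggers the sad reaction and the 10-second shutdown.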

The experiment produced strong evidence of intervention, which took several forms:

  • Interruptions to the Abuse Script: The reactions of the bystander robots prompted the human participant to interrupt the abuse by preventing the yellow robot from repeating the same mistakes, thus reducing the chances of the confederate abusing it again.
  • Direct Stop: The sad reaction of the bystander robots led human participants to stop the abuse directly, saying things like "You should stop" or "Don't do that".
  • Social Pressure: In other instances, participants' comments put social pressure on the confederate about continuing the abuse, for example: "You hurt its feelings".

The researchers conclude that participants intervened more often when robots in their group expressed sadness in response to abuse. If such emotional responses were put into practice, the chances of a robot breaking after abuse would be reduced, helping to protect robots from damage and malfunction.

Teaching Robots to Fight Back Abuse

Another way in which human-robot interaction can demonstrate prosocial behaviour is by teaching the robot to resist abuse. Boston Dynamics released a video in which researchers conducted an experiment on SpotMini, the firm's headless robotic dog.

The experiment shows SpotMini approaching a door where a human is already standing with an ice hockey stick. Undisturbed, the robot keeps trying to grab the door with the claw on its fifth limb until the human's attack with the stick knocks it away.

The assault continues with the human closing the door on the robot, grabbing its leash, and yanking it back so that SpotMini cannot cross the threshold. The robot fights back, looking like a real dog struggling against its owner, and as the human gives in, SpotMini trudges across the threshold and through the door.

The firm describes the video as a test of SpotMini's ability to adjust to disturbances. Boston Dynamics also pointed out that the test did not harm the robot and would improve its operation.
