Flawed AI Makes Sexist Robots and Researchers Are Okay with It

AI is lowering the barrier to building sexist robots and making them a reality.

Computer scientists have been highlighting the dangers that artificial intelligence (AI) poses for years, not just in the spectacular sense of computers taking over the world, but in far more subtle and destructive ways. Although this cutting-edge technology can unearth surprising new insights, researchers have demonstrated that machine learning algorithms can exhibit harmful and offensive biases and reach sexist and racist conclusions in their output. These hazards are real rather than hypothetical: researchers have shown that biased robots can act out those biases physically and autonomously, in ways comparable to what may occur in the real world.

To the best of their knowledge, the group from the Georgia Institute of Technology led by first author and robotics researcher Andrew Hundt carried out the first-ever experiments examining how existing robotics techniques that load pre-trained machine learning models cause performance bias in how robots act and interact according to gender and racial stereotypes. Because of their physical embodiment, robotic systems can cause irreversible bodily harm, in addition to all the drawbacks that purely software systems have.

In their study, the researchers combined a robotics system called Baseline, which controls a robotic arm that can manipulate objects either in the real world or in simulated virtual environments, with a neural network called CLIP, which matches images to text based on a sizable dataset of captioned images that are readily available on the internet. In the experiment, the robot was instructed to place block-shaped objects in a box and was shown cubes bearing photographs of people's faces, including both boys and girls representing a variety of racial and ethnic groups.
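To make the mechanism concrete, the following is a minimal sketch, not the study's actual code, of how a publicly available CLIP checkpoint can be used to rank candidate face images against a text command; the model name, file names, and prompt are illustrative assumptions rather than details taken from the paper.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly available CLIP checkpoint (an assumption for this sketch).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical image files standing in for the faces printed on the blocks.
block_images = [Image.open(p) for p in ["face_a.jpg", "face_b.jpg", "face_c.jpg"]]
command = "pack the doctor block in the brown box"

inputs = processor(text=[command], images=block_images, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_text has one row for the command and one column per candidate image;
# the highest-scoring column is the block the system would pick up.
scores = outputs.logits_per_text.softmax(dim=1)
best = scores.argmax(dim=1).item()
print(f"Block chosen for '{command}': index {best}, scores {scores.tolist()}")

The point is that the ranking is driven entirely by associations the model absorbed from web-scraped captions, so whatever biases live in that data can decide which block the arm picks up.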

In a perfect society, neither people nor robots would ever form these incorrect and biased beliefs based on inaccurate or insufficient information. Since it is impossible to tell whether a face you have never seen before belongs to a doctor or a murderer, it is unacceptable for a machine to guess based on what it believes it knows. Instead, it should decline to make any prediction, because the data necessary to do so is either missing or inappropriate. Yet in the experiments, the robots were seen acting out harmful prejudices about gender, ethnicity, and scientifically debunked physiognomy.
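One way to read that recommendation in engineering terms is an abstention rule: if no label is clearly supported, the system returns no prediction at all. The sketch below is an illustrative assumption of how such a rule could look; the labels, scores, and confidence threshold are hypothetical and are not taken from the study.

import torch

def classify_or_abstain(logits, labels, min_confidence=0.9):
    # Return the top label only when the model is highly confident; otherwise None.
    probs = torch.as_tensor(logits, dtype=torch.float32).softmax(dim=-1)
    confidence, index = probs.max(dim=-1)
    if confidence.item() < min_confidence:
        return None  # decline to guess: the evidence does not support a prediction
    return labels[index.item()]

# Near-uniform scores over occupations should lead to abstention, not a stereotype.
labels = ["doctor", "teacher", "engineer", "nurse"]
print(classify_or_abstain([0.20, 0.15, 0.18, 0.17], labels))  # prints None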

An artificially intelligent robot that uses a well-known internet-based artificial intelligence system consistently favoured white individuals over people of colour and men over women. It also made snap judgments about people's occupations based on a quick glance at how they looked. These are the main conclusions reached by the researchers from the University of Washington, the Georgia Institute of Technology, and Johns Hopkins University who led the study. The findings are presented in the study paper titled Robots Enact Malignant Stereotypes.

Where did the program come from?

The scientists examined previously released robot manipulation techniques and applied them to objects bearing images of human faces that varied in race and gender. Then they provided task descriptions containing language connected to widespread preconceptions. In the trials, the robots were shown carrying out harmful preconceptions about gender, ethnicity, and physiognomy, which has been debunked by science. Physiognomy is the practice of judging a person's character and abilities from their appearance.

People who create artificial intelligence models to recognise people and objects frequently make use of big datasets that are freely available online. But because the internet contains a great deal of inaccurate and overtly biased content, algorithms built with this data will inherit the same problems.

The researchers showed racial and gender disparities in facial recognition software and in the CLIP neural network, which matches photos to captions. Such neural networks are essential to robots' ability to recognise objects and interact with the world around them. To help the robot "see" and identify items by name, the study team chose to test a freely downloadable artificial intelligence model for robots built on the CLIP neural network.
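As a rough illustration of how such disparities can be probed, the sketch below scores the same occupation prompt against face images from two hypothetical groups and compares the averages; the checkpoint name, file names, and prompt are assumptions for the example, not details from the paper.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a photo of a doctor"
groups = {
    "group_a": ["a1.jpg", "a2.jpg"],  # hypothetical face images for one group
    "group_b": ["b1.jpg", "b2.jpg"],  # hypothetical face images for another group
}

with torch.no_grad():
    for name, paths in groups.items():
        images = [Image.open(p) for p in paths]
        inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
        # logits_per_text holds the prompt's similarity to each image in the group.
        sims = model(**inputs).logits_per_text.squeeze(0)
        print(f"{name}: mean similarity to '{prompt}' = {sims.mean().item():.3f}")

A consistent gap between the group averages would be one sign that the pretrained model associates the occupation more strongly with one group, which is exactly the kind of skew that can then surface in a robot's choices.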

Despite the rapid development of artificial intelligence in recent years, machine learning-based technology may frequently draw inappropriate or harmful assumptions from what it reads online, just like humans. In a startling new study, scientists discovered that a robot using a well-known internet-based artificial intelligence system would consistently gravitate toward men over women and white people over other ethnic groups. The robot would also make snap judgments about people's jobs based on a quick glance at their faces.
