
Google and Microsoft call ‘Emotion AI’ risky but only limit its usage

Aishwarya Banik

Google still ships emotion detection in its Cloud Vision API, while Microsoft has restricted general access to the technology but kept it in its Seeing AI accessibility app

As part of a new framework for the ethical use of artificial intelligence, Microsoft will stop selling so-called emotion-detecting AI software and restrict the use of its facial recognition capabilities, making it the latest major tech company to abandon a contentious technology over the risk of prejudice and discrimination in AI.

A few months back, Microsoft said that it would stop making a cloud-based AI technology that infers people's emotions available to everyone. Despite the company's admission that emotion recognition technology has "risks," the technology will still be available in an app for visually impaired users. In fact, despite rising skepticism about the development and use of contentious emotion detection in everyday software, both Google and Microsoft still include such AI-based capabilities in their offerings.

The Seeing AI person channel "enables you to detect individuals and to obtain a description of them, including an estimate of their age and also their sentiment," according to Saqib Shaikh, a software engineering manager at Microsoft and project lead for Seeing AI. When he used the app to take a picture of an acquaintance, its automated voice identified him as a "36-year-old guy wearing spectacles, looking joyful." "That's pretty interesting," said Shaikh, "since you can instantly determine someone's facial expression."

Microsoft said on June 21 that it would "retire" the facial analysis capabilities that attempt to infer a person's emotions, gender, age, and other attributes. The firm cited privacy issues, the "lack of agreement on a definition of 'emotions,'" and the "inability to generalize the association between facial expression and emotional state across use cases, geographies, and people."
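For context, the retired capability corresponds to the optional face attributes that Azure's Face API used to return alongside detection. The Python sketch below shows roughly what such a call looked like; the endpoint, key, and image URL are placeholders, and these attributes are no longer served to new customers, so this is illustrative only rather than Microsoft's current API surface.

```python
# Illustrative sketch only: the style of Azure Face API call that returned
# the now-retired emotion/age attributes. Endpoint, key, and image URL are
# placeholders; new customers can no longer request these attributes.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Ask the service to return the optional face attributes alongside detection.
faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image URL
    return_face_attributes=["age", "emotion"],
)

for face in faces:
    attrs = face.face_attributes
    # The emotion attribute held per-category confidence scores.
    scores = {
        "anger": attrs.emotion.anger,
        "happiness": attrs.emotion.happiness,
        "sadness": attrs.emotion.sadness,
        "surprise": attrs.emotion.surprise,
    }
    strongest = max(scores, key=scores.get)
    print(f"age: {attrs.age}, strongest emotion score: {strongest}")
```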

But for Seeing AI, accessibility goals outweighed those concerns. In a statement sent to Protocol, Microsoft said it worked with people from the blind and low-vision communities who "gave crucial input that the emotion detection feature is essential to them" in narrowing the gap between their experience of the app and that of sighted people. The company declined a request for an interview.

Google's off-the-shelf emotion recognition software

Google has also wrestled with how far to go in using computer vision-based AI to judge whether a person is likely displaying a particular emotion or facial expression. According to the company's own description, its Cloud Vision API offers "pre-trained Vision API models to detect emotion, understand text, and more." For each face in an image, the model rates the likelihood that it is expressing anger, joy, sorrow, or surprise on a scale running from "unknown" or "very unlikely" up to "very likely."
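As a rough illustration, here is what querying those likelihood buckets looks like with the google-cloud-vision Python client. This is a minimal sketch assuming application credentials are already configured; the bucket URI is a placeholder.

```python
# Minimal sketch of the Cloud Vision face-detection call described above.
# Each face comes back with per-emotion likelihood buckets
# (UNKNOWN / VERY_UNLIKELY ... VERY_LIKELY), not raw emotion labels.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "gs://your-bucket/photo.jpg"  # placeholder

response = client.face_detection(image=image)
for face in response.face_annotations:
    print(
        "joy:", vision.Likelihood(face.joy_likelihood).name,
        "| sorrow:", vision.Likelihood(face.sorrow_likelihood).name,
        "| anger:", vision.Likelihood(face.anger_likelihood).name,
        "| surprise:", vision.Likelihood(face.surprise_likelihood).name,
    )
```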

In addition, Google's ML Kit toolkit for mobile apps includes a face detection component that locates facial "landmarks" and classifies facial features, for example to estimate whether someone is smiling or has their eyes open.

A Google spokesperson pushed back on the idea that its software "detects emotion," noting that the Vision API predicts the perceived emotion a face displays rather than a person's actual internal state, although the company's own materials sometimes describe it as emotion detection. The scientific validity of emotion AI has drawn considerable scrutiny and frequently raises ethical concerns. Organizations such as the Brookings Institution and the AI Now Institute have called for restrictions on the technology for particular use cases.

Google declined to give an interview for this story, instead citing a 2021 Reuters report which said that, after an internal ethics review, the company decided against adding new capabilities to its Cloud Vision API to detect the likelihood of emotions beyond anger, joy, sorrow, and surprise. The review "found that inferring emotions might be insensitive because, among other reasons, facial signals are connected with moods differently across cultures." Margaret Mitchell told Protocol that while she was employed at Google, she was part of the team that worked to persuade the firm not to add emotional states beyond the four already supported by the Cloud Vision API.

Mitchell, a former co-lead of Google's Ethical AI team, lost her job in February 2021 following an internal probe into alleged security violations involving the movement of company data. Her firing came shortly after the high-profile ouster of Timnit Gebru, her co-lead on the AI ethics team. A dispute over a research paper questioning the social, economic, and environmental consequences of large language models was one of the factors in Gebru's dismissal.

Accessibility as a justification

Researchers, meanwhile, continue to advance emotion AI. Several papers accepted at the Conference on Computer Vision and Pattern Recognition (CVPR) in New Orleans in June dealt with facial landmark detection and facial emotion recognition, for example.

"We're just breaking the surface, and everywhere I turn there's more and more [emotion AI] developing," said Nirit Pisano, chief psychology officer at Cognovi Labs, which provides emotion AI technology to advertisers and pharmaceutical manufacturers who use it to gauge consumer reactions to marketing messages and to understand how people feel about specific drugs.

Microsoft says that by keeping emotion recognition in Seeing AI, it is furthering its accessibility goals. In a blog post published last month, Sarah Bird, principal group product manager for Microsoft's Azure AI, wrote that the company "remains committed to supporting technology for people with disabilities" and will "continue to use these capabilities in support of this goal by integrating them into applications such as Seeing AI."

Gebru, a computer vision expert, is critical of emotion detection systems that use computer vision to analyze facial data. Although "there are numerous situations where access is utilized as a cause" for emotion detection, such as improving accessibility for people with vision impairments, she told Protocol that this does not mean the technology is helpful "all the time"; it depends on the situation.
