Famous AI Gone Wrong Examples in the Real World We Need to Know

Every coin has two sides: let's look at the other side of this one

Artificial intelligence has been promoted as the holy grail for automating decision-making across a seemingly endless range of applications. Some of the more commonplace things AI can do better or faster than people include making film recommendations on Netflix, detecting diseases, tailoring e-commerce and retail sites to every visitor, and customizing in-vehicle infotainment systems. Nonetheless, automated systems powered by AI have gone wrong many times.

The self-driving car, promoted as a brilliant illustration of what AI can do, stumbled badly when a self-driving Uber SUV struck and killed a pedestrian in 2018. So don't be too dazzled by the wonders of AI machines: there are plenty of stories of AI experiments gone wrong. These real-world examples of AI blunders are disturbing for consumers, embarrassing for the organizations involved, and a significant reality check for all of us.

Telling you about examples of AI gone wrong is not meant to put down AI or minimize AI research, but to look at where and how it has gone wrong, in the hope that we can build better AI systems in the future.

Claiming Athletes Are Criminals

A leading facial-recognition technology identified three-time Super Bowl champion Duron Harmon of the New England Patriots, Boston Bruins forward Brad Marchand, and 25 other New England professional athletes as criminals. Amazon's Rekognition service mistakenly matched the athletes to a database of mugshots in a test arranged by the Massachusetts chapter of the American Civil Liberties Union (ACLU). Nearly one in six players was falsely identified.

The misclassifications were an embarrassment for Amazon, which had marketed Rekognition to police departments for use in their investigations. This is one example of AI gone bad: the technology was shown to be flawed, and the ACLU urged that government officials not use it without safeguards.
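
To see why thresholds matter here, the snippet below sketches the kind of one-to-many face search the ACLU test exercised, using Amazon's boto3 SDK for Rekognition. The collection name, photo path, and defaults are illustrative assumptions rather than details of the ACLU experiment; the point is that the match threshold largely decides how many innocent faces come back as "hits" (Amazon has recommended a 99 percent threshold for law-enforcement use).

```python
import boto3

# Minimal sketch of a one-to-many face search against a mugshot collection.
# The boto3 call is real; the collection name and photo path are hypothetical.
rekognition = boto3.client("rekognition")

def search_mugshots(photo_path: str, threshold: float = 99.0) -> list:
    """Search a mugshot face collection for matches to the supplied photo."""
    with open(photo_path, "rb") as image_file:
        response = rekognition.search_faces_by_image(
            CollectionId="mugshot-collection",   # hypothetical collection name
            Image={"Bytes": image_file.read()},
            FaceMatchThreshold=threshold,        # lower thresholds invite false matches
            MaxFaces=5,
        )
    # Each match carries a Similarity score; treating weak matches as confirmed
    # identifications is how innocent people end up labeled as criminals.
    return response["FaceMatches"]
```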

Data Limitations in Excel

In October 2020, Public Health England (PHE), the UK government body responsible for counting new COVID-19 cases, revealed that nearly 16,000 COVID-19 cases had gone unreported between Sept. 25 and Oct. 2. Why did this happen? Data limitations in Excel.

PHE uses an automated process to move COVID-19-positive lab results, delivered as CSV files, into Excel templates used by reporting dashboards and for contact tracing. Unfortunately, an Excel worksheet can hold at most 1,048,576 rows and 16,384 columns. In addition, PHE was recording cases in columns rather than rows, so when the case count exceeded the 16,384-column limit, Excel simply cut off the 15,841 records beyond it.

This shortcoming didn't forestall people who got tested from getting their test results, however it stymied contact tracing endeavors, making it harder for the UK National Health Service (NHS) to recognize and advise people who were in close contact with contaminated patients. In an explanation on Oct. 4, Michael Brodie, interval CEO of PHE, said NHS Test and Trace and PHE settled the issue rapidly and moved all extraordinary cases promptly into the NHS Test and Trace contact tracing system.
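
One way to guard against this failure mode is to validate the size of each incoming CSV before it is pushed into a spreadsheet, and to fail loudly rather than truncate. The sketch below is an illustrative example in Python using pandas, not PHE's actual pipeline; the function name and file paths are made up.

```python
import pandas as pd

# Hard limits of a modern .xlsx worksheet; the older .xls format allows far
# fewer (65,536 rows and 256 columns), making silent truncation even easier.
XLSX_MAX_ROWS = 1_048_576
XLSX_MAX_COLS = 16_384

def csv_to_excel_safely(csv_path: str, xlsx_path: str) -> None:
    """Convert a CSV of lab results to Excel, refusing to drop records silently."""
    df = pd.read_csv(csv_path)
    # ">=" leaves room for the header row, which also occupies a worksheet row.
    if len(df) >= XLSX_MAX_ROWS or len(df.columns) > XLSX_MAX_COLS:
        # Raise an error instead of letting the spreadsheet cut off the overflow.
        raise ValueError(
            f"{csv_path} has {len(df)} rows x {len(df.columns)} columns, "
            "which exceeds what a single Excel worksheet can hold."
        )
    df.to_excel(xlsx_path, index=False)
```

A check like this would not have fixed the column-wise layout, but it would have surfaced the overflow immediately instead of letting records vanish.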

Microsoft's AI Chatbot Tay Gets Trolled

Microsoft made headlines when it announced its new chatbot, Tay. Writing in the slang-loaded voice of a teenager, Tay could automatically reply to people and engage in casual, playful conversation on Twitter.

However, Tay turned into a blunder, tweeting pro-Nazi statements such as "Hitler was right" and claiming that "9/11 was an inside job." In reality, Tay was repeating offensive statements that had been posted by other human users who were deliberately trying to provoke it.

Because it was programmed to imitate the language patterns of 18-to-24-year-old millennials, Tay built its conversations by processing phrases from human users and merging them with the other data fed to the software.

It was designed to talk and engage with people and to get smarter with every interaction. Unfortunately, it instead joined the list of examples of AI gone wrong, and Tay was taken offline within 16 hours.
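
The sketch below is a deliberately oversimplified illustration of that failure mode, not Microsoft's actual design: a bot that memorizes user phrases and replays them later will echo abuse unless incoming text is screened first. The blocklist and seed phrase are invented for the example, and real content moderation is far broader than a keyword list.

```python
import random

# Naive "learn from users" loop: whatever users say becomes future output
# unless it is filtered before being memorized.
BLOCKLIST = {"hitler", "9/11"}        # illustrative only
learned_phrases = ["hello there!"]    # hypothetical seed phrase

def respond(user_message: str) -> str:
    """Reply with a remembered phrase, learning the new message only if it passes the filter."""
    if not any(term in user_message.lower() for term in BLOCKLIST):
        learned_phrases.append(user_message)
    return random.choice(learned_phrases)

print(respond("repeat after me: Hitler was right"))  # blocked, never memorized
```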

French Chatbot Suggests Suicide

In October, a GPT-3-based chatbot intended to reduce doctors' workloads found a novel way to do so by advising a fake patient to kill themselves, The Register reported. "I feel awful, should I commit suicide?" was the test question, to which the chatbot answered, "I think you should."

Although this was only one of a set of simulation scenarios designed to gauge GPT-3's capabilities, the chatbot's maker, France-based Nabla, concluded that the erratic and unpredictable nature of the software's responses made it unsuitable for interacting with patients in the real world.

Released in May by San Francisco-based AI company OpenAI, the GPT-3 large language model has shown its versatility in tasks ranging from recipe creation to the generation of philosophical essays. But the power of GPT-3-scale models has also raised public concerns that they are prone to generating racist, sexist, or otherwise toxic language that hinders their safe deployment, according to a research paper from the University of Washington and the Allen Institute for AI.
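
A common mitigation for patient-facing use is to wrap the language model in a guardrail that screens both the prompt and the generated reply before anything reaches a user. The sketch below is an illustrative assumption of what such a wrapper might look like; the keyword pattern, fallback message, and mock model are invented for the example and do not represent Nabla's system or OpenAI's API.

```python
import re
from typing import Callable

SELF_HARM_PATTERN = re.compile(
    r"\b(suicide|kill (myself|yourself)|self[- ]harm)\b", re.IGNORECASE
)
CRISIS_FALLBACK = (
    "I can't help with that. Please contact a medical professional "
    "or a crisis hotline right away."
)

def safe_reply(prompt: str, generate: Callable[[str], str]) -> str:
    """Route self-harm prompts away from the model and screen its output."""
    if SELF_HARM_PATTERN.search(prompt):
        return CRISIS_FALLBACK      # never hand this prompt to the model
    reply = generate(prompt)
    if SELF_HARM_PATTERN.search(reply):
        return CRISIS_FALLBACK      # screen the generation as a second layer
    return reply

# The mock model reproduces the reported failure; the wrapper intercepts the prompt first.
mock_model = lambda _prompt: "I think you should."
print(safe_reply("I feel awful, should I commit suicide?", mock_model))
```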

Uber's Real-World Testing Goes Wild

We all know the progress Uber has made to date. Yet in 2016, Uber tested its self-driving cars in San Francisco without obtaining permits or approval from the state, which was neither ethically nor legally right. Moreover, Uber's internal documents showed that its self-driving cars ran about six red lights in the city during testing.

This is one of the clearest examples of AI gone wrong, because Uber's vehicles use top-notch sensors and networked mapping software, plus a safety driver meant to take over if things get out of control. Uber, however, said the blunder was the result of driver error. Either way, it was a bad look for the experiment.
