The Necessity of Putting ‘Humans in the Loop’ While Designing AI Systems

Do you remember the 2018 accident involving a self-driving Uber car? The car collided with a pedestrian and killed her. Since then, scrutiny of the safety of such autonomous vehicles has risen to another level. Many have claimed that rolling out self-driving cars on the road at this stage is extremely dangerous and have criticized the pace of autonomous tech development. Looking at the incident from a different angle, however, the National Transportation Safety Board (NTSB) said, "Had the vehicle operator been attentive, she would likely have had sufficient time to detect and react to the crossing pedestrian to avoid the crash or mitigate the impact."

According to a BBC report, in the car was safety driver Rafaela Vasquez who, according to investigators, had been streaming a TV show on her mobile phone while behind the wheel. Dashcam footage showed Ms. Vasquez spent 36% of the journey that evening looking at the device. In its experiments with driverless cars, Uber mandated that a human operator pay attention at all times so they could take over in difficult situations or when the vehicle encountered a situation it did not know how to handle.

Is it entirely the operator's fault, or are the system engineers also responsible for not properly including humans in the loop?

In a Fortune interview, Abhishek Gupta, a machine learning engineer at Microsoft and founder of the Montreal AI Ethics Institute, pointed out that AI systems should always be designed with a "human in the loop" who is able to intervene when necessary.

Gupta says that in principle this sounds good, but in practice there is too often a tendency toward what he calls "the token human." At worst, this is especially dangerous because it provides the illusion of safety: a human is given nominal oversight over the algorithm in a check-the-box exercise, but has no real understanding of how the AI works, whether the data being analyzed looks anything like the data used to train the system, or whether its output is valid.
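
In code, meaningful oversight can be built in as an explicit escalation path rather than a rubber stamp. The sketch below is a minimal illustration of that idea, not a production pattern: the `model` interface, the `ask_human` callback, the crude z-score drift check, and both thresholds are assumptions made for the example.

```python
# A minimal sketch of a human-in-the-loop decision gate, assuming a
# hypothetical classifier with a scikit-learn-style predict_proba()
# and precomputed summary statistics of the training data.
import numpy as np

CONFIDENCE_THRESHOLD = 0.9   # assumed cutoff; tune per application
MAX_Z_SCORE = 3.0            # assumed out-of-distribution bound

def decide(model, x, train_mean, train_std, ask_human):
    """Return the model's decision, or escalate to a human reviewer."""
    # Escalate if the input looks unlike the training data
    # (a crude z-score check standing in for real drift detection).
    z = np.abs((x - train_mean) / train_std)
    if np.any(z > MAX_Z_SCORE):
        return ask_human(x, reason="input outside training distribution")

    # Escalate if the model itself is not confident in its output.
    probs = model.predict_proba(x.reshape(1, -1))[0]
    if probs.max() < CONFIDENCE_THRESHOLD:
        return ask_human(x, reason="low model confidence")

    return int(probs.argmax())
```

The point of the design is that the human is only asked to intervene when the system can articulate why it is unsure, which gives the reviewer real context instead of nominal sign-off on every output.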

If an AI system performs well in 99% of cases, humans tend to become complacent, even in systems where the human is more empowered. They stop scrutinizing the AI systems they are supposed to be supervising. And when things go wrong, the human in the loop can become especially confused and struggle to regain control: a phenomenon known as "automation surprise."

To exemplify, he recalled, "This is arguably part of what went wrong when an Uber self-driving car struck and killed pedestrian Elaine Herzberg in 2018; the car's safety driver was looking at her phone at the moment of the collision."

It is high time that automation and AI stopped being seen as a replacement for humans; instead, such systems should be designed with human participation from the start. This will also enhance the efficiency of intelligent automation, which will remain open to amendment based on human feedback.
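
One concrete way a system can stay "open to amendment" is to log every case where a human overrides the model, so that the disagreements become training data for the next cycle. The sketch below illustrates that idea under stated assumptions: the `FeedbackStore` class, its file format, and the override-logging workflow are hypothetical, not any specific product's API.

```python
# A minimal sketch of capturing human feedback for later retraining.
import json
import time

class FeedbackStore:
    """Append-only log of cases where a human corrected the system."""
    def __init__(self, path="feedback.jsonl"):
        self.path = path

    def record(self, features, model_output, human_output):
        entry = {
            "timestamp": time.time(),
            "features": features,
            "model_output": model_output,
            "human_output": human_output,  # becomes a new training label
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Usage: whenever the human reviewer overrides the model, log the
# correction so the disagreement can feed the next training cycle.
store = FeedbackStore()
store.record(features=[0.2, 0.7], model_output=1, human_output=0)
```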
