Artificial Intelligence (AI), machine learning, robotics and cognitive automation are spreading across the globe. The pace of their adoption depends on the quality and quantity of the training data these systems are exposed to. AI is predicted to shape the future of the world as everything around us embraces smart technologies.
Watching movies and reading articles on the pros and cons of AI can make us wonder whether AI could one day take over the world. But experts warn that there is a far more pressing problem to deal with – biased AI. Google's AI chief John Giannandrea once said that he wasn't fretting about super-intelligent killer robots; instead, he was worried about intelligent systems learning human prejudice. He asserted that he was concerned about the threats that may be lurking within the machine learning algorithms that make millions of decisions in the blink of an eye.
You must be wondering what AI bias is, and why it can make even Google's AI chief sweat.
AI is considered biased when programs that are theoretically neutral and free of prejudice rely on faulty algorithms or deficient data and, as a result, produce unfair outcomes for certain groups of people. Recent studies have highlighted it as a problem of the present.
For example, facial recognition topped the news headlines for not being racially inclusive. According to a study from MIT (Massachusetts Institute of Technology), facial recognition software misidentified approximately 35 percent of images of darker-skinned women, while lighter-skinned males faced an error rate of around 1 percent.
Google's decision to block gendered pronouns from one of its AI-enabled innovations, the Smart Compose feature, was also driven by the bias problem.
These cases show how real-world biases find their way into technology.
Tracing the definition, AI programs are built from algorithms – sets of rules that enable them to recognise patterns so they can make judgements with little human intervention. These algorithms must be fed data in order to learn those rules, and human prejudices can seep into the system through that data.
Antony Cook, Microsoft's associate general counsel for Corporate, External and Legal Affairs for Asia, said – "Having access to large and diverse datasets helps to train algorithms to maintain the principle of fairness…. the issue of bias is not solely addressed by the generation of large amounts of data but also how that data is used by AI systems."
Olly Buston, CEO of the consulting think tank Future Advocacy, explained how human biases are reflected in machines. Quoting an example, he said – "if an algorithm used to shortlist people for senior jobs is trained on data that reflects the fact that historically, more senior jobs have been held by men, then the algorithm's future behaviour may reflect this, locking in the glass ceiling."
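To see how the feedback loop Buston describes can arise, here is a minimal, hypothetical Python sketch. Everything in it – the synthetic records, the promotion rates, the shortlist_score helper – is invented for illustration and does not represent any real hiring system:

# A hypothetical sketch of Buston's example: a scoring rule built on
# historical hiring data in which senior roles skew male ends up
# treating gender as a proxy for seniority. All data is synthetic.
import random

random.seed(0)

# Synthetic "historical" records: (years_experience, is_male, held_senior_job).
# The senior label correlates with gender purely because of past hiring
# practices, not because of ability.
history = []
for _ in range(1000):
    is_male = random.random() < 0.5
    years = random.randint(1, 20)
    # Historically, men were promoted at a much higher rate.
    promoted = random.random() < (0.6 if is_male else 0.15) and years > 5
    history.append((years, int(is_male), int(promoted)))

# A naive "shortlisting score": the empirical promotion rate among past
# candidates of the same gender with similar experience.
def shortlist_score(years, is_male):
    similar = [h for h in history
               if h[1] == is_male and abs(h[0] - years) <= 2]
    if not similar:
        return 0.0
    return sum(h[2] for h in similar) / len(similar)

# Two equally experienced candidates receive very different scores,
# reproducing the historical glass ceiling.
print("male candidate, 10 yrs  :", round(shortlist_score(10, 1), 2))
print("female candidate, 10 yrs:", round(shortlist_score(10, 0), 2))

Nothing in the score looks at ability; it simply replays the past, which is exactly how a biased history becomes a biased prediction.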
Professionals believe that increasing diversity within the AI field would help overcome such biases.
Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum, told CNBC this year – "When we're talking about bias, we're worrying first of all about the focus of the people who are creating the algorithms… We need to make the industry much more diverse in the West."
Microsoft's Cook said – "Stakeholders from various fields need to constantly engage in discussions of what constitutes inclusive AI — a human concern that should not be handled only by experts in technology."
He also added that a "multi-disciplinary approach" is needed "to make sure that you've got the humanists working with the technologists. That way we'll get the most inclusive AI… Human decisions are not based on ones and zeros … (but on) social context and social background. The debate around the right ethical rules to apply to AI should involve technology companies, governments and civil society."
Biased AI poses a serious threat and can lead to life-altering consequences for individuals.
In 2016, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) program, used by the US judiciary in some states to help decide parole and other sentencing conditions, was reported to exhibit racial bias.
In a paper, Toby Walsh, an artificial intelligence professor at the University of New South Wales, wrote – "COMPAS uses machine learning and historical data to predict the probability that a violent criminal will re-offend. Unfortunately, it incorrectly predicts black people are more likely to re-offend than they do."
In an interview with CNBC, Walsh asserted – "While biases in AI exist, it is important that certain decisions are not left to software…. That's especially when such decisions can directly harm a person's life or liberty. If we work hard at finding mathematically precise definitions of ethics, we may be able to deal with bias in AI and so be able to hand over some of these decisions to fairer machines… But we should never let a machine decide who lives and who dies."
Per Walsh's remarks, examples of such decisions include the potential use of AI in hiring or in defence conflicts as part of autonomous weapons.
AI software is only as good as the data it is trained on. An AI trained on data from one population may perform less accurately when applied to data from a different population.
Olly Buston, giving an example, said – "there is a chance some AI apps that are developed in Europe or America will perform less well in Asia."
Eugene Tan Kheng Boon, associate professor of law at Singapore Management University, stated – "So you could imagine, for example, data that comes from China and India — with a combined population of 2.6 billion people — when that data becomes widely available and used, there will be biases that we might not see in the West but may be very salient or very sensitive in our part of the world."
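As a rough illustration of the train/test population mismatch the experts describe, the following hypothetical Python sketch fits the simplest possible classifier to synthetic data from one population and then evaluates it on a shifted one. All the numbers and population names are invented:

# A synthetic sketch of population shift: a model fitted to one
# population loses accuracy on another whose feature distribution
# differs, even though the underlying task is the same.
import random

random.seed(1)

def sample(mean_pos, mean_neg, n=2000):
    """Draw (feature, label) pairs; positives and negatives are
    Gaussian clusters centred at mean_pos and mean_neg."""
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        centre = mean_pos if label else mean_neg
        data.append((random.gauss(centre, 1.0), int(label)))
    return data

# Training population vs. a second population with shifted features.
train = sample(mean_pos=2.0, mean_neg=0.0)
shifted = sample(mean_pos=3.0, mean_neg=1.0)

# Fit the simplest classifier possible: a single decision threshold at
# the midpoint between the class means of the *training* data only.
pos_mean = sum(x for x, y in train if y) / sum(y for _, y in train)
neg_mean = sum(x for x, y in train if not y) / sum(1 - y for _, y in train)
threshold = (pos_mean + neg_mean) / 2

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

print("accuracy on training population:", round(accuracy(train), 3))
print("accuracy on shifted population: ", round(accuracy(shifted), 3))

The threshold sits in the right place for the training population but cuts straight through the negatives of the shifted one, so accuracy drops noticeably – the same effect Buston and Tan anticipate when Western-trained systems meet Asian data, or vice versa.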
Per the experts' observations, it can be concluded that the rising progress of AI in the Eastern hemisphere means more instances of bias problems are likely to emerge from Asia.