Trustworthy AI Is a Distorted Picture. Thanks to the Bias Filters


Trustworthy AI remains an unachievable goal unless we understand what standards have to be set

In an interview with the Times of India, Arjan Durresi, a professor of computer science and a Purdue researcher based in Indianapolis, said AI is in a transition phase, stressing the importance of instilling transparency and training AI to be trustworthy. Many companies eager to adopt AI will relate to his words. On one hand, AI has become a must-have technology; on the other, trusting its outcomes has become a difficult proposition. The black-box algorithms that run AI systems are opaque and plagued by a variety of biases. Prejudice bias, for instance, shows up when AI chatbots go rogue and discriminate against people based on their race, age, or gender, or when social media systems inadvertently spread rumors and disinformation.

Like any technology, AI has its shortcomings, but having no insight into how a machine makes decisions is a genuine concern, particularly when it is deployed in strategic and critical areas like healthcare, warfare, and human resource management. Unfortunately, even though there is broad consensus on the importance of trustworthy AI, there is still no clear-cut definition of what constitutes trustworthiness. The definitions proposed so far are fuzzy and inadequate.

This is easy to understand when you consider that the standards applicable to one AI system are not relevant to another, which makes it hard to arrive at a single set of standards for trustworthiness. In her book 'Trustworthy AI', Beena Ammanath frames the issue in a nuanced manner, treating trustworthiness not as a property of an AI system but as a property of an organization. If a self-driving car kills a human on the road, who should be held responsible? The car, the engineer, the manager, or the company's CEO? She opines that unless there is transparency about this kind of ownership, transparent AI is impossible to achieve.

It doesn't mean we are stuck at the dark end of the tunnel. There are ways AI can be made transparent and trustworthy, but initiatives need to be taken at both the organizational and government level. The steps should include thoroughly inspecting systems for bias before deployment, keeping humans in the loop, and designing foolproof protocols and roadmaps that set benchmarks for AI developers. Regulating AI is a tricky domain; nevertheless, governments and international organizations are putting in their best efforts. The U.S. National Institute of Standards and Technology (NIST) tests facial recognition algorithms, but only when a company submits them for testing. Linköping University (LiU), in collaboration with the EU, has taken up the TAILOR project, which aims to lay out a research-based roadmap to guide researchers and governments in the development of trustworthy AI. It is touted as a first step toward the standardization of AI algorithms. However, experts opine that several research problems must be solved for the project to succeed, chiefly because legal proposals tend to scrutinize AI systems through a purely legal lens, excluding expert knowledge from within AI, which Heintz considers a serious problem.

Fredrik Heintz, Professor of Artificial Intelligence at LiU and coordinator of the TAILOR project, says, "Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It's important that experts have the opportunity to influence questions of this type."
