What is Inside the Black Box of Artificial Intelligence?

Artificial Intelligence (AI) keeps surprising mankind with newer and more awe-inspiring outcomes. But with the surprises comes a concern about trust. AI has a gripping 'black box' problem: if people don't know how AI arrives at its outcomes and insights, they seldom trust the technology. This lack of trust was on full display in one of IBM's less successful AI efforts, Watson for Oncology.

Experts pointed out that IBM's attempt to promote its supercomputer programme, Watson for Oncology, to cancer doctors turned into a public-relations setback precisely because of the trust factor. The AI-driven platform assisted physicians and confirmed diagnoses they had already reached, but it did not help them arrive at a diagnosis. And when Watson disagreed with them, physicians lost trust in its analytics.

Had the doctors known how Watson for Oncology came to its conclusions, the adoption rate might have been very different. Interacting with something we do not understand breeds anxiety and a sense of lost control, and a mind that feels out of control does not extend trust.

Understanding the Technology

If oncologists had understood how Watson reached its conclusions, the level of trust would have been much higher, and so would adoption and usage. The need to unlock the black box is not restricted to healthcare alone; there is an urgent need to explain how algorithms work wherever they are deployed. Explainability, that is, making algorithms able to explain how they make their decisions, is touted as the utmost need of the hour. In many instances AI is deployed to make life-changing decisions, and there explainability becomes absolutely critical. One concrete technique for it is sketched below.
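
The article leaves open what "explaining a decision" can mean in practice, so here is a minimal, hedged sketch of one common model-agnostic technique, permutation importance. The library (scikit-learn), dataset, and model choice are all illustrative assumptions, not anything Watson actually used:

```python
# Sketch: permutation importance as one concrete form of explainability.
# Assumes scikit-learn; dataset and model are illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

An explanation like this does not reveal the model's inner mechanics, but it gives a clinician or analyst a ranked, auditable account of what the model leaned on, which is often enough to build (or withhold) trust.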

It comes as no surprise that the US Department of Defense (DoD) is investing in Explainable AI (XAI), which will be essential if future war-fighters are to understand and trust the emerging generation of artificially intelligent machine partners. Efforts in XAI may soon produce machine learning systems that explain their rationale, convey an understanding of how they will behave in the future, and characterize their own strengths and weaknesses.

Dependable, explainable AI will be extremely important to the healthcare industry, bridging the gap between assessing a patient's symptoms and actually giving clinicians the tools best suited to the ailment.

The Opportunity Trade-off

While organizations like DARPA (the Defense Advanced Research Projects Agency) have actively invested in XAI, there is a lively debate over whether such efforts actually make AI algorithms better.

Another concern comes from the question of whether AI algorithms need to be 'dumbed down' to make them explainable. Experts are of the opinion that the more accurate an algorithm is, the harder it is to interpret. Computers have become an increasingly important part of our lives, and automation is only adding to the complexity. Thus, it is increasingly important to understand how and why these complicated AI and ML systems reach their decisions and insights; the sketch below illustrates the trade-off in miniature.
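
As a minimal sketch of the accuracy-versus-interpretability trade-off the experts describe, the following compares a shallow decision tree, whose entire decision logic can be printed and audited, with a large ensemble that is typically more accurate but has no single readable rule set. The library (scikit-learn), dataset, and model sizes are assumptions for illustration:

```python
# Sketch: interpretable-but-simple model vs. accurate-but-opaque model.
# scikit-learn and the dataset are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-2 tree: every prediction traces to a handful of readable rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# A 200-tree forest: usually more accurate, but its "reasoning" is spread
# across thousands of branches no human can follow end to end.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("Shallow tree accuracy: ", tree.score(X_test, y_test))
print("Random forest accuracy:", forest.score(X_test, y_test))
print(export_text(tree))  # the tree's complete decision logic, on one screen
```

On most runs the forest edges out the tree on accuracy while giving up the one-screen explanation, which is exactly the trade-off XAI research is trying to soften.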

DARPA is investing $2 billion in 'third-wave' AI systems, an investment that may well help resolve this trade-off.

AI Explained

Resolving the trust issues that AI raises matters now, and it will matter even more as they keep arising in the future. Measured against the dystopian futures laid out by some AI skeptics, the growing field of XAI may offer a path to preventing such nightmares.

Explainability becomes indispensable for curbing a HAL 9000, a Skynet, or any other Hollywood AI villain, stopping it from breaking free of its human creators' control and running amok.

Thanks to advances in Explainable AI (XAI), we have some breathing space, and some much-needed relief.
