
Explainable Artificial Intelligence - The Magic Inside the Black Box

Kanti S

As artificial intelligence becomes more widely adopted, the importance of explainable models grows manifold. Humans now rely on these systems to make major decisions, and increasingly need them to communicate what decision was made, how it was made, and why the AI did what it did.

Artificial intelligence technologies already rank your search results, suggest videos to watch on YouTube, and draft smart replies to your email. AI algorithms are used in security for facial recognition and behaviour analysis, in medicine to identify tumours, and in financial lending to predict whether you will be able to repay a loan if it is sanctioned. Yet the creators of these algorithms may themselves be prejudiced, or may struggle to explain how their systems work. AI often relies on skewed programme logic, poor data sources, and developer biases, which means the systems can easily inherit human prejudices. There have been allegations that mortgage-reviewing bots have become racist, and when an algorithm learns that the only people clicking on gadget adverts are men, it will show those adverts only to men, reinforcing the bias.

Why is Explainable AI So Hard to Explain?

AI models are inherently so complex that it becomes very hard to describe what is being done, and why, when, and where. As deployed AI makes decisions that are more complicated and more accurate, the rationale behind them becomes harder to explain in real terms.

Of the two broad types of AI, supervised learning is often mathematically driven and explicitly programmed to be predictable, while unsupervised learning leans on deep learning in an attempt to mimic the human brain. An unsupervised system is fed data and expected to learn on its own, which makes it nonlinear and chaotic and makes its outputs impossible to predict ahead of time. The good news is that experts are working on tools that assist with generalised explanations and make AI, in both cases, more understandable.
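To make the contrast concrete, here is a minimal sketch in Python (assuming scikit-learn; the data and feature names are invented for illustration). The supervised model exposes one readable weight per input feature, while the unsupervised model only hands back cluster labels with no stated reason:

```python
# Contrast: an interpretable supervised model vs. an opaque unsupervised one.
# Assumes scikit-learn and NumPy; data and feature names are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # three hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # label driven by features 0 and 1

# Supervised: one readable weight per feature explains the decision rule.
clf = LogisticRegression().fit(X, y)
for name, w in zip(["income", "debt", "age"], clf.coef_[0]):
    print(f"{name}: weight {w:+.2f}")

# Unsupervised: the model assigns cluster labels but gives no
# human-readable reason why a point landed in a given cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster of first point:", labels[0])
```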

The Magic Inside the Black Box

How can machine learning and data mining be used to explain how algorithms make decisions? Professors at Carnegie Mellon University have built a successful anomaly detection model that combs through transactions in reports and flags items that seem out of place for further investigation. Anomaly detection can be applied to many domains and in some cases directly impacts people's lives: it can alert a social worker when a report of child abuse differs significantly from the others, or signal when data from emergency rooms may point to a potential disease outbreak.
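As a generic illustration of the idea (a sketch only, not the CMU model), the snippet below uses scikit-learn's IsolationForest to flag transaction amounts that look out of place; the data and contamination rate are invented:

```python
# A minimal anomaly-detection sketch, in the spirit of the model described
# above (not the CMU implementation). Assumes scikit-learn; data is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
amounts = rng.normal(loc=100, scale=15, size=(500, 1))  # typical transactions
amounts[::100] = 900                                    # a few out-of-place items

detector = IsolationForest(contamination=0.01, random_state=42).fit(amounts)
flags = detector.predict(amounts)  # -1 marks an anomaly, +1 marks normal

for i in np.where(flags == -1)[0]:
    print(f"transaction {i}: amount {amounts[i, 0]:.2f} flagged for review")
```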

The same researchers have also identified fraudulent users and fabricated reviews on sites like Yelp and TripAdvisor. But it is not always enough to know that an anomaly exists: the humans who act on a detection model's results must understand what the anomaly means.

Anomaly Detection and Investigation

If the algorithm can alert the user to a potential error or suspicious activity, explain why it flagged it, and show how the anomaly stands out, the human analyst can take note and investigate.
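One simple way to produce that kind of explanation (a sketch of the general idea, not a specific published method) is to report, for each flagged record, the features that deviate most from the rest of the data:

```python
# Sketch: explain a flagged record by showing which features deviate most.
# A per-feature z-score is a crude but readable "why" for the analyst.
import numpy as np

def explain_anomaly(X, idx, feature_names, top_k=2):
    """Return the top_k features where record `idx` deviates most."""
    mean, std = X.mean(axis=0), X.std(axis=0)
    z = (X[idx] - mean) / std               # standardized deviation per feature
    order = np.argsort(-np.abs(z))[:top_k]  # largest deviations first
    return [(feature_names[i], float(z[i])) for i in order]

# Hypothetical report data: [word_count, num_edits, hours_to_file]
X = np.array([[500, 3, 24], [480, 2, 30], [510, 4, 20], [90, 1, 300]])
names = ["word_count", "num_edits", "hours_to_file"]
for name, zscore in explain_anomaly(X, idx=3, feature_names=names):
    print(f"{name}: {zscore:+.1f} standard deviations from the mean")
```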

Explanations are essential in anomaly detection scenarios, which cannot be fully automated and require a human for verification. Explanations may also surface problems with the detection algorithm itself, by exposing reliance on undesirable or unexpected cues, such as flagging terrorist activity based on someone's nationality or biometric data. Many algorithms live in black boxes, meaning that a human knows the outcome of a problem but not necessarily how the algorithm reached that determination.

Explainable anomaly detection could also help reduce false positives and false negatives. Further, it can offer insight into why an algorithm took a specific action rather than an alternative, in contrast to what a human might have chosen given the same input.

Understanding the black box can also help customers. If, for instance, a mortgage application is marked for rejection by an algorithm, the consumer can be given useful information about what went wrong and what additional requirements are needed for a successful outcome.
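With a linear scoring model, that feedback can be read straight off the per-feature contributions. The sketch below uses hypothetical weights, features, and a threshold, not any real lender's model:

```python
# Sketch: explaining a mortgage rejection with a hypothetical linear score.
# Weights, features, and threshold are invented for illustration only.
features = {"income": 0.4, "credit_score": 0.72, "debt_ratio": 0.85}  # normalized 0..1
weights  = {"income": 2.0, "credit_score": 3.0, "debt_ratio": -4.0}   # sign = direction
threshold = 2.0  # score needed for approval

score = sum(weights[f] * features[f] for f in features)
print(f"score {score:.2f} vs threshold {threshold} ->",
      "approved" if score >= threshold else "rejected")

# Per-feature contributions tell the applicant what hurt the application
# and, by implication, what to improve (here: the high debt ratio).
for f in features:
    print(f"{f}: contribution {weights[f] * features[f]:+.2f}")
```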

Explainable AI is both possible and desirable. Clearer explanation frameworks for algorithms will provide users and customers with better information, and could improve trust in these disruptive technologies over time.
