The Neural Network Method Can Decipher the Mathematics of AI

The new research acts as a stepping stone toward characterizing the behavior of robust neural networks

AI systems are, in essence, sets of algorithms developed to perform specific tasks. They are opaque and largely remain a puzzle to their end users. If you design an AI system to predict the outcome of an election, it cannot explain the criteria by which it reached its conclusion the way a human political analyst could. In AI circles, this is called the black-box problem. Artificial intelligence depends on neural networks, loosely modeled on the human mind: there is an input and there is an output, but what happens in between, in the processing, largely remains unknown.

What causes the AI black box problem?

Artificial neural networks consist of hidden layers of nodes, and as information passes from one layer to the next, the data is transformed so drastically that the final output becomes nearly impossible to predict. Each time a layer receives information, its nodes adjust themselves to a learned pattern; when new data arrives, they fit it into that pattern with occasional tweaks. These accumulated transformations snowball, and by the final layer the computation has become far too complicated to trace. There is no easy way to make such systems answerable. Yet as AI gains sweeping influence in industry and in critical sectors like healthcare, defense, and law enforcement, researchers are trying every avenue to decipher how it works. Some test AI systems the way scientists test lab rats, tinkering with inputs in the hope of understanding the decision-making process. Others take the opposite approach, building in better controls and watching how these systems learn to ensure they do not grow beyond our understanding.
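To make the layer-by-layer transformation concrete, here is a minimal sketch in PyTorch. The architecture and layer sizes are purely illustrative, not any particular production system:

```python
# Minimal illustrative sketch: watch data change as it passes through
# hidden layers. Layer sizes are arbitrary, chosen only for the demo.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small multilayer perceptron with two hidden layers.
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),   # hidden layer 1
    nn.Linear(16, 16), nn.ReLU(),  # hidden layer 2
    nn.Linear(16, 2),              # output layer
)

h = torch.randn(1, 8)  # one random input sample
for layer in model:
    h = layer(h)
    # Each step prints a representation that is progressively harder
    # to map back to the original input: the black box in miniature.
    print(type(layer).__name__, tuple(h.shape), h.flatten()[:4])
```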

The technique of controlling confounds

Researchers at Los Alamos National Laboratory have reportedly found a new way to compare neural networks that offers a peek into how AI systems function. The study found that to compare two neural networks, it is necessary to control for confounds: similar yet distinct features within the individual data points. A technique known as representation inversion isolates the features in an image that a network actually uses while randomizing or discarding all others. The method produces a new, inverted dataset in which each input contains only the relevant features and everything else is randomized. The similarity between the inverting model and an arbitrary network can then be estimated by comparing their behavior on this inverted dataset, giving a near-accurate measure of model similarity. The paper, "If You've Trained One You've Trained Them All: Inter-Architecture Similarity Increases With Robustness," authored by Haydn Jones, was presented at the Conference on Uncertainty in Artificial Intelligence in the Netherlands. Beyond studying network similarities, the work acts as a stepping stone toward characterizing the behavior of robust neural networks. "Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI," says Jones.
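The article does not reproduce the paper's code, but the general shape of representation inversion can be sketched as follows. This is a hypothetical, minimal version: the feature extractor `f_ref`, the optimizer settings, and the toy model at the end are all assumptions for illustration, not the authors' implementation:

```python
# A minimal sketch of representation inversion, assuming a differentiable
# PyTorch feature extractor f_ref (not the authors' exact code).
import torch
import torch.nn as nn

def invert_representation(f_ref, x, steps=500, lr=0.1):
    """Optimize an image from noise so that f_ref extracts the same
    features from it as from x; everything f_ref ignores stays random."""
    target = f_ref(x).detach()                       # features of the original
    x_inv = torch.randn_like(x, requires_grad=True)  # start from pure noise
    opt = torch.optim.Adam([x_inv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (f_ref(x_inv) - target).pow(2).mean() # match representations
        loss.backward()
        opt.step()
    return x_inv.detach()

# Toy usage with a hypothetical feature extractor:
f_ref = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
x = torch.randn(1, 3, 32, 32)  # a stand-in "image"
x_inv = invert_representation(f_ref, x)
```

Model similarity can then be estimated by how closely a second network's responses on the inverted dataset track its responses on the originals.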

The stronger the attack, the greater the similarity

Jones, along with Los Alamos collaborators Jacob Springer, Garrett Kenyon, and Juston Moore, applied the new similarity metric to adversarially trained neural networks and found that as the magnitude of the attack increases, adversarially trained networks tend to converge on similar data representations, irrespective of network architecture. Researchers have long hunted for the right neural network architecture; while the new adversarial-training research cannot point to an exact model, it narrows the search to a few candidates, meaning less time spent hunting for new architectures. Jones says the research may also uncover hints about how perception happens in humans and other animals.
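The article does not spell out the similarity metric itself. One widely used stand-in for quantifying representation similarity between two networks is linear centered kernel alignment (CKA, Kornblith et al., 2019); the sketch below uses it purely as an illustration, not as the team's inverted-dataset metric:

```python
# Linear CKA: a common representation-similarity score in [0, 1].
# Illustrative only; not the Los Alamos team's exact metric.
import numpy as np

def linear_cka(A, B):
    """A, B: (n_samples, n_features) activation matrices collected from
    two networks on the same inputs."""
    A = A - A.mean(axis=0)  # center each feature
    B = B - B.mean(axis=0)
    hsic = np.linalg.norm(A.T @ B, "fro") ** 2
    norm = np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro")
    return hsic / norm

# Toy check: CKA is invariant to orthogonal transformations, so a
# "rotated" copy of the same representation scores ~1.0.
rng = np.random.default_rng(0)
acts_net1 = rng.normal(size=(100, 64))          # activations, network 1
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random orthogonal matrix
acts_net2 = acts_net1 @ Q                       # network 2: same info, rotated
print(linear_cka(acts_net1, acts_net2))         # ~1.0
```

Higher scores on matched inputs suggest two networks have converged to similar internal representations, which is the pattern the team reports for robustly trained models.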
