Machine Learning Is on the Verge of Triggering a ‘Reproducibility Crisis’

A "Reproducibility Crisis" is about to be brought on by machine learning.

Researchers in fields ranging from political science to biology are increasingly using machine learning to generate predictions from patterns in their data. But a pair of researchers at Princeton University in New Jersey argue that many of these studies' claims are likely exaggerated, and they hope to raise awareness of what they call a "brewing reproducibility issue" in machine-learning-based science.

Machine learning is sold as a tool that researchers can pick up and use on their own in a matter of hours, says Sayash Kapoor, a machine-learning researcher at Princeton, but you would not expect a scientist to learn how to run a lab from an online course. Kapoor, who co-authored a preprint on the "crisis", says few scientists realize that the problems they encounter when applying artificial intelligence (AI) algorithms are common to many other fields. Because peer reviewers do not have the time to scrutinize these models carefully, he argues, academia currently lacks mechanisms to weed out papers that cannot be reproduced. Kapoor and his co-author Arvind Narayanan have developed guidelines to help scientists avoid such pitfalls, including an explicit checklist to submit with each paper.

What is reproducibility?

Kapoor and Narayanan's definition of reproducibility is broad. It covers computational reproducibility, already a concern for machine-learning specialists: other teams should be able to duplicate a model's results given full details of the data, code, and conditions. The pair also deem a model irreproducible when researchers make errors in data analysis, so that the model is not as accurate as claimed.
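As a rough illustration of the kind of record that supports computational reproducibility, the sketch below (in Python, with hypothetical file names) fixes random seeds and saves the software versions and dataset identifier alongside a run. It is a minimal example of the general practice, not a template endorsed by the authors.

```python
# Minimal sketch of recording the "full details" needed for computational
# reproducibility: fixed random seeds, package versions, and data provenance.
# The field names and file names here are illustrative, not a standard format.
import json
import random
import sys

import numpy as np
import sklearn

SEED = 42
random.seed(SEED)
np.random.seed(SEED)

run_record = {
    "seed": SEED,
    "python": sys.version,
    "numpy": np.__version__,
    "scikit_learn": sklearn.__version__,
    "data_file": "training_data_v3.csv",          # hypothetical dataset identifier
    "data_sha256": "<checksum of the exact file>",  # placeholder, compute for the real file
}

# Ship this record (and the code itself) with the paper so others can rerun the analysis.
with open("run_record.json", "w") as fh:
    json.dump(run_record, fh, indent=2)
```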

Such errors are difficult to assess objectively and often require deep knowledge of the field to which machine learning is being applied. Some researchers whose work the team has critiqued deny that their papers contain errors, or contend that Kapoor's claims are overstated. In social science, for example, researchers have built machine-learning models intended to predict when a country is most likely to slide into civil war. Kapoor and Narayanan claim that, once errors are corrected, these models perform no better than conventional statistical methods. But David Muchlinski, a political scientist at the Georgia Institute of Technology whose paper was examined by the pair, says that the field of conflict prediction has been unfairly maligned and that follow-up studies support his findings.

Nevertheless, the team's call to action has resonated. More than 1,200 people signed up for what was originally planned as a small online workshop on reproducibility, hosted by Kapoor and colleagues on July 28, to develop and share solutions. Each field will keep running into similar issues, he asserts, "until we do something like this."

Overconfidence in the capabilities of machine-learning models could be harmful when algorithms are used in fields such as health and justice, warns Momin Malik, a data scientist at the Mayo Clinic in Rochester, Minnesota, who is due to speak at the workshop. Unless the problem is addressed, he warns, machine learning's reputation could take a hit. "I'm a little shocked that machine learning's credibility hasn't already crashed. However, I believe it may happen very soon."

Problems with machine learning

Similar difficulties, according to Kapoor and Narayanan, arise when machine learning is applied across many fields. After analyzing 20 reviews spanning 17 research fields, the pair counted 329 research papers whose results could not be fully replicated because of problems in how machine learning was applied. Narayanan himself is not exempt: a 2015 study on computer security that he co-authored is among the 329. The community as a whole needs to confront the issue together, according to Kapoor.

No individual researcher is to blame for these failures, he adds. Rather, a combination of hype around AI and inadequate checks and balances is at fault. The most prominent problem Kapoor and Narayanan highlight is data leakage, which occurs when the data a model is trained on overlaps with the data it is later evaluated on. If the two are not kept entirely separate, the model has in effect already seen the answers, and its predictions appear far more accurate than they really are. The team has identified eight major types of data leakage that researchers should watch out for.
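The sketch below, a minimal scikit-learn example on synthetic data, shows one common way leakage can creep in: fitting a preprocessing step on the full dataset before splitting it, so information about the test rows influences the transformation applied to the training rows. It illustrates the general idea only and is not drawn from the papers the team analyzed.

```python
# Minimal sketch of preprocessing leakage, using synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Leaky version: the scaler is fitted on every row, including the future test rows.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
leaky = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Safer version: split first, then fit all preprocessing on the training fold only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clean = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

print("leaky test accuracy:", leaky.score(X_te, y_te))
print("clean test accuracy:", clean.score(X_te, y_te))
```

Keeping every step that learns from data inside a pipeline fitted only on the training split is one simple way to rule out this class of leakage.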

Some forms of data leakage are subtle. Temporal leakage, for example, arises when the training data contain points from later in time than the test data, which is a problem because the future depends on the past. Malik cites a 2011 paper claiming that a model analyzing the moods of Twitter users could forecast the stock market's closing value with 87.6% accuracy. But because the researchers had tested the model's predictive power on data from a period earlier than parts of its training set, the algorithm had in effect been given access to the future, he says.
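As a toy illustration of temporal leakage (not the Twitter study itself), the following sketch contrasts a randomly shuffled split, which mixes future observations into the training set, with a chronological split that keeps every test point strictly later than its training data.

```python
# Minimal sketch of temporal leakage on a toy autocorrelated series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, train_test_split

rng = np.random.default_rng(0)
t = np.arange(1000)
y = np.sin(t / 30) + 0.1 * rng.normal(size=t.size)  # toy autocorrelated signal
X = np.column_stack([y[:-1]])                        # predict tomorrow's value from today's
target = y[1:]

# Leaky: random shuffling puts observations from the "future" into the training set.
X_tr, X_te, y_tr, y_te = train_test_split(X, target, shuffle=True, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("random split R^2:", model.score(X_te, y_te))

# Safer: with TimeSeriesSplit, every test fold lies strictly after its training fold.
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
    fold_model = RandomForestRegressor(random_state=0).fit(X[train_idx], target[train_idx])
    print("chronological fold R^2:", fold_model.score(X[test_idx], target[test_idx]))
```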

According to Malik, there are broader problems too, such as training models on datasets that are narrower than the populations they are ultimately meant to represent. For instance, an AI that detects pneumonia in chest X-rays but was trained only on older people may perform worse on younger people. Another issue is that algorithms often end up relying on shortcuts that do not always hold, says Jessica Hullman, a computer scientist at Northwestern University in Evanston, Illinois, who will also present at the workshop. A computer-vision algorithm may learn to detect a cow from the grassy background present in most cow photographs, for example, and then fail when shown an image of the animal on a mountain or a beach.
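One simple way to surface this kind of mismatch is to report performance per subgroup rather than a single overall score. The sketch below uses entirely synthetic data and an artificial feature shift to stand in for an under-represented group; it is illustrative only, not a method from the preprint.

```python
# Minimal sketch of a per-subgroup evaluation on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
group = np.where(np.arange(2000) < 1500, "group_a", "group_b")  # hypothetical subgroups

# Shift group_b's features to mimic a population the training data barely covers.
X[group == "group_b"] += 2.0

# Train on group_a only, then report accuracy per subgroup instead of one overall number.
train_mask = group == "group_a"
model = LogisticRegression(max_iter=1000).fit(X[train_mask], y[train_mask])
for g in ("group_a", "group_b"):
    mask = group == g
    print(g, "accuracy:", round(model.score(X[mask], y[mask]), 3))
```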

Repairing data leaks

Kapoor and Narayanan's proposed solution to data leakage is for researchers to submit, alongside their papers, evidence that their models are free of each of the eight types of leakage. As a template for this material, the authors offer what they call "model info" sheets.
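The exact contents of Kapoor and Narayanan's model info sheets are not reproduced here; the sketch below is only a hypothetical illustration of how a paper might ship a machine-readable leakage checklist, with check names invented for the example rather than taken from the authors' eight leakage types.

```python
# Hypothetical illustration only: not the authors' actual "model info" sheet template.
# A simple machine-readable checklist that a paper could publish alongside its code.
leakage_checklist = {
    "train_test_split_documented": True,
    "no_overlap_between_train_and_test_rows": True,
    "preprocessing_fit_on_training_data_only": True,
    "no_features_derived_from_the_target": True,
    "test_data_strictly_later_than_training_data": True,        # for temporal tasks
    "test_set_representative_of_deployment_population": False,  # flag open issues honestly
    "model_selection_done_without_the_test_set": True,
}

for check, passed in leakage_checklist.items():
    print(("PASS" if passed else "REVIEW"), check)
```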

Biomedicine has come a long way with a similar approach over the past three years, according to Xiao Liu, a clinical ophthalmologist at the University of Birmingham, UK, who co-created reporting guidelines for studies involving AI, such as those used in screening or diagnosis. In 2019, Liu and her colleagues found that only 5% of more than 20,000 papers using AI for medical imaging were described in enough detail to judge whether they would work in a clinical setting. Although guidelines don't directly improve anyone's models, she says, they give regulators a resource by "making it pretty evident who the individuals who've done it well, and maybe others who haven't done it well, are".

Malik believes collaboration can also help. He suggests that studies include both specialists in the relevant discipline and researchers with expertise in machine learning, statistics, and survey sampling. Kapoor expects the approach to have a substantial impact in fields such as drug development, where machine learning identifies leads for further study; other areas, he adds, will need more work to demonstrate its benefits. Although machine learning is still in its infancy in many fields, he counsels researchers to act now to avoid the kind of confidence crisis that followed the replication crisis in psychology a decade ago. "The problem will only become worse the longer we put it off," he says.
