Our hardworking ancestors taught us that shortcuts rarely pay off. The lesson still holds in the age of reason, as Jean-Paul Sartre pointed out: "The best work is not what is most difficult for you; it is what you do best." The best work never entails shortcuts, because it takes time and patience.
However, we also live in an age of convenience, where nearly every service fits on a small portable device. This hyper-availability of services and privileges has eroded the habit of attentive, painstaking work, even among AI experts and scientists.
New research from the University of Washington reveals that AI models, much like humans, are inclined to take shortcuts. Perhaps this should not surprise us: artificial intelligence, after all, mimics human intelligence.
Doctors and medical experts have voiced alarm about the serious harm that AI shortcuts can cause. If AI tools that rely on them are deployed in medical practice, they can yield erroneous results that compromise patients' diagnoses.
Alex DeGrave, a medical student at the University of Washington, and his fellow students discovered that algorithms built to detect COVID-19 in chest X-rays relied on text markers and patient positioning specific to each data set, rather than on medically relevant features, to flag a patient as COVID-19 positive.
In DeGrave's words, a physician relies on specific patterns in an image that reflect disease processes. A system that uses shortcut learning instead of those patterns, however, can produce risky errors in diagnosis and treatment.
This reliance on shortcuts is tied to the models' lack of transparency, the so-called "black box" problem, and the researchers found it in almost all of the COVID-19 detection models they examined. The team also highlighted "worst-case confounding": a situation in which an AI tool lacks enough training data to learn the underlying pathology of a disease and instead latches onto spurious cues, a weakness that is exposed when the model is moved out of its original setting.
The team trained deep convolutional neural networks on X-ray images from a specific data set and found that each model's test performance faltered outside its original setting: accuracy roughly halved when the model was evaluated on an external data set.
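As a rough illustration of this kind of cross-data-set check, the sketch below trains a chest X-ray classifier on one image folder and measures its accuracy on an external one. It is not the study's code: the folder names, the choice of ResNet-18, and the omitted training loop are all assumptions made for the example.

```python
# Hypothetical sketch: train a chest X-ray classifier on an "internal" data
# set and evaluate it on an "external" one from a different source. A large
# accuracy gap suggests the model learned data-set-specific shortcuts rather
# than the underlying pathology.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Two hypothetical folders of labelled images.
internal = datasets.ImageFolder("xray_internal", transform=transform)
external = datasets.ImageFolder("xray_external", transform=transform)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # COVID-positive vs. negative

def evaluate(model, dataset):
    """Return plain classification accuracy on a dataset."""
    loader = DataLoader(dataset, batch_size=32)
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# ... training loop on `internal` omitted for brevity ...

# The gap between these two numbers is the red flag described above.
print("internal accuracy:", evaluate(model, internal))
print("external accuracy:", evaluate(model, external))
```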
Su-In Lee, an associate professor in the Allen School, asserts that the study of AI models for COVID-19 detection underscores the fundamental role explainable AI will play in ensuring that such models are safe and effective for medical decision-making.
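Explainable AI of the kind Lee describes is often put into practice with saliency methods, which highlight the pixels that drive a prediction so a reviewer can see whether a model is looking at lung tissue or at artifacts such as text markers near the image border. The following is a minimal gradient-saliency sketch, not the researchers' method; the stand-in model and random input are placeholders for illustration.

```python
# Minimal sketch of a gradient saliency map: the magnitude of the gradient of
# the class score with respect to each input pixel marks how much that pixel
# influences the prediction.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # stand-in COVID classifier
model.eval()

def saliency_map(model, image, target_class):
    """Return |d score / d pixel| for `target_class`, collapsed to a 2-D map."""
    x = image.unsqueeze(0).requires_grad_(True)   # add a batch dimension
    score = model(x)[0, target_class]
    score.backward()
    # Keep the strongest gradient across colour channels for each pixel;
    # large values mark pixels that most influence the prediction.
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Illustrative input: in practice this would be a preprocessed chest X-ray.
image = torch.rand(3, 224, 224)
heatmap = saliency_map(model, image, target_class=1)
print(heatmap.shape)  # torch.Size([224, 224])
```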