DCNN’s Vision Incapability Could Put Real-World AI Apps in Danger


AI currently faces several complex challenges that could jeopardize existing models

Artificial intelligence has become one of the largest innovations the tech industry has ever witnessed. AI-driven research is helping global industries such as healthcare, retail, and finance advance. As evolving technology becomes an important element of business development, AI is bringing new perspectives to the global entrepreneurial paradigm with its avant-garde approaches. AI applications and models are deployed by some of the world's major tech companies, including Amazon, Apple, Facebook, and IBM, which constantly evolve their AI integration pipelines as the tech landscape changes. Deep learning is AI's most important and steadily advancing branch, and the importance of convolutional neural networks has become evident to business leaders.

The growing demand for deep neural networks has pushed researchers and scientists to advance the field. But despite several rapid advancements, deep convolutional neural networks (DCNNs) still do not match human accuracy, and since AI is supposed to automate and perform tasks for humans, neural networks need accuracy and efficiency to imitate human intelligence. According to recent reports, DCNNs do not perceive objects the way humans do, which could prove dangerous for AI applications already deployed in real-world projects.

Deep Convolutional Neural Networks Have Failed to Attain Human-Like Sensitivity

A collaborative study by Elder, who holds the York Research Chair in Human and Computer Vision and co-directs York's Centre for AI and Society, and Nicholas Baker, an assistant professor of psychology at Loyola College in Chicago, explores how human brains and DCNNs respond to 'Frankensteins', a novel type of visual stimulus. Frankensteins are simple objects that have been taken apart and put back together the wrong way. When presented with a Frankenstein object, humans are typically confused, whereas DCNNs are not, revealing their insensitivity to these configural object properties.
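The "Frankenstein" idea can be illustrated with a short sketch. The function below is a hypothetical helper, not the study's actual stimulus-generation code: it cuts a grayscale image into square patches and reassembles them in a shuffled order, preserving the local parts while destroying the global configuration that humans rely on.

```python
import numpy as np

def frankenstein(image: np.ndarray, patch: int, seed: int = 0) -> np.ndarray:
    """Cut a grayscale image into square patches and reassemble them in a
    shuffled order -- a crude analogue of a 'Frankenstein' stimulus, which
    keeps an object's local parts but breaks its overall configuration."""
    assert image.ndim == 2, "expects a 2-D (grayscale) image"
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0, "dimensions must divide evenly"
    # Split the image into a flat list of patch-sized tiles.
    tiles = [image[r:r + patch, c:c + patch]
             for r in range(0, h, patch)
             for c in range(0, w, patch)]
    rng = np.random.default_rng(seed)
    rng.shuffle(tiles)  # destroy the global arrangement of parts
    # Stitch the shuffled tiles back into an image of the same size.
    rows, cols = h // patch, w // patch
    return np.block([[tiles[r * cols + c] for c in range(cols)]
                     for r in range(rows)])
```

Feeding both the original and the scrambled version to an image classifier and comparing its predictions would give a rough, informal analogue of the comparison the researchers describe: a human observer is thrown off by the scrambled configuration, while a DCNN that relies on local features often is not.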

The researchers claim that these AI models fail under certain critical conditions, and that this clearly demonstrates AI development should focus on tasks beyond object recognition. Apparently, the models use "shortcuts" to solve complex recognition tasks. These shortcuts work in many cases, but they can become dangerous in the real-world AI applications that industry and government currently use.

Bottom Line

Based on the findings reported by the researchers, modifications to the training and architecture of AI models could make these networks behave more like the human brain. They speculate that, to accurately match human configural sensitivity, networks should be trained to solve a broader range of tasks that go beyond object recognition. In a nutshell, these are the basic shortcomings that AI researchers should work to overcome. AI is not just a technology for automating complex human tasks; it was designed to imitate human intelligence, and scientists should now focus on meeting these complex challenges to attain that goal.
