Where traditional machine learning falls short, deep learning often succeeds, accurately solving some of machine learning's most pressing challenges. But deep learning's capabilities are limited by its dependence on high-performance computing: supercomputer clusters and GPU arrays provide the large, complex computational infrastructure needed to handle the heavy workloads of deep learning algorithms. Furthermore, deep neural networks require machine learning experts to design complex architectures and fine-tune them for good performance.
Many researchers and field experts have flagged a growing problem: boosting accuracy demands ever deeper and larger networks. Over time this has become a pressing concern, because scarce energy and computational resources prevent many practitioners from taking advantage of this powerful technology or designing architectures of their own.
Researchers from the Vision and Image Processing Lab at the University of Waterloo have developed strategies to address this issue. Taking a different approach to enabling powerful yet practical deep intelligence, the team is exploring the idea of neural networks that evolve naturally over time to become both powerful and efficient.
Evolutionary deep intelligence refers to evolving deep neural networks over successive generations so they become smarter and more efficient. The core of each deep neural network is encoded computationally, and simulated environmental factors are applied that encourage computational and energy efficiency. Through natural selection, each deep neural network can periodically produce new "offspring" networks with more advanced capabilities than the previous generation.
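The generational idea above can be illustrated with a minimal sketch. This is not the Waterloo team's exact formulation (their method encodes synaptic probabilities in a probabilistic "DNA" model and retrains each offspring); here we simply assume, for illustration, that each synapse survives into the next generation with a probability that grows with its weight magnitude and shrinks under a simulated environmental pressure toward efficiency. The function name `synthesize_offspring` and the `env_factor` parameter are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_offspring(weights, env_factor=0.5, rng=rng):
    """Sample an offspring weight matrix: each synapse survives with a
    probability proportional to its relative magnitude, scaled down by
    env_factor, the simulated environmental pressure toward efficiency."""
    magnitude = np.abs(weights)
    p_survive = env_factor * magnitude / magnitude.max()
    mask = rng.random(weights.shape) < p_survive
    return weights * mask

# One generation of "natural selection" on a random parent layer.
parent = rng.normal(size=(64, 64))
child = synthesize_offspring(parent)

parent_synapses = np.count_nonzero(parent)
child_synapses = np.count_nonzero(child)
print(parent_synapses, child_synapses)  # offspring keeps far fewer synapses
```

In the actual approach, each offspring would be retrained after synthesis, so accuracy is recovered while the synapse count keeps shrinking generation after generation.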
Researchers at the University of Waterloo tested the approach in an experiment using the MSRA-B and HKU-IS datasets. The results showed that the synthesized "offspring" deep neural networks can achieve state-of-the-art F-beta scores with much more efficient architectures: by the fourth generation, the newer networks had approximately 48 times fewer synapses than the initial ones.
This performance was further put to the test on the MNIST dataset, adding to the state-of-the-art results: the evolved networks reached 99% accuracy with approximately 40 times fewer synapses by the seventh generation. By the thirteenth generation, the offspring networks still achieved approximately 98% accuracy while having up to 125 times fewer synapses than the first-generation networks.
The University of Waterloo's work on evolutionary deep intelligence has won several awards and accolades, including a Best Paper Award at the NIPS Workshop on Efficient Methods for Deep Neural Networks and a Best Paper Award at the Conference on Computational Vision and Intelligence Systems, and it was named by the MIT Technology Review as one of the most interesting and thought-provoking papers on arXiv.