Everything You Need to Know About Geometric Deep Learning


What is Geometric Deep Learning? Let's learn about various networks in this article.

Deep learning algorithms such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have made significant progress over the last few years on problems in speech recognition, computer vision, and many other fields. Although the results are highly accurate, these models mostly work on Euclidean data. In network science, physics, biology, computer graphics, and recommender systems, however, we have to deal with non-Euclidean data, i.e. manifolds and graphs. Geometric deep learning addresses this by applying deep learning techniques directly to manifold- or graph-structured data.

Working with 2D data is becoming passé as more and more researchers tap 3D data to develop AI models. Geometric deep learning, as the field is popularly called, deals with complex data such as graphs to create competitive models. The term, which Michael M. Bronstein first introduced in the paper titled Geometric Deep Learning: Going Beyond Euclidean Data, is now finding applications in areas such as 3D object classification, graph analytics, and 3D object correspondence.
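To make the idea of learning on graph-structured data concrete, here is a minimal sketch of one graph-convolution layer in the style popularized by graph neural networks: each node's features are averaged with its neighbors' (using a symmetric normalization of the adjacency matrix) and then passed through a learnable linear map. The graph, features, and weights below are toy values chosen purely for illustration.

```python
import numpy as np

# Toy undirected graph: 4 nodes, edges 0-1, 1-2, 2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
X = rng.random((4, 3))            # node features (4 nodes, 3 features each)
W = rng.random((3, 2))            # learnable weight matrix (3 -> 2 features)

A_hat = A + np.eye(4)             # add self-loops so a node keeps its own signal
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric degree normalization

H = np.maximum(0, A_norm @ X @ W)  # one graph-convolution layer with ReLU
print(H.shape)                     # (4, 2): a new 2-dim embedding per node
```

Stacking several such layers lets information propagate along edges, which is how node neighborhoods, rather than a fixed Euclidean grid, shape the learned representation.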

Reinforcement Learning

Reinforcement learning has been used successfully to drive the search process toward better architectures. The ability to navigate the search space efficiently, saving precious computational and memory resources, is typically the major bottleneck in a NAS algorithm. Often, models built with the sole objective of high validation accuracy end up being highly complex, meaning a greater number of parameters, more memory required, and higher inference times.

Neuroevolution

Floreano et al. (2008) claim that gradient-based methods outperform evolutionary methods for optimizing neural network weights, and that evolutionary approaches should only be used to optimize the architecture itself. Besides choosing the right genetic-evolution parameters, such as the mutation rate and death rate, there is also the need to decide exactly how the topologies of neural networks are represented in the genotypes used for digital evolution.
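The ingredients mentioned above (a genotype encoding the topology, a mutation rate, and a death rate) can be sketched as a minimal evolutionary loop. This is an illustrative toy, not any published NAS method: the `fitness` function here is a hypothetical stand-in for what would, in a real run, be training each candidate network and measuring its validation accuracy.

```python
import random

def mutate(genotype, mutation_rate=0.3):
    """Randomly perturb the architecture genes with some probability."""
    child = dict(genotype)
    if random.random() < mutation_rate:
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    if random.random() < mutation_rate:
        child["units"] = max(4, child["units"] + random.choice([-8, 8]))
    return child

def fitness(genotype):
    # Hypothetical stand-in: rewards moderate depth and width. A real NAS
    # run would train the candidate and use validation accuracy instead.
    return -abs(genotype["layers"] - 4) - abs(genotype["units"] - 64) / 8

random.seed(0)
population = [{"layers": 2, "units": 32} for _ in range(8)]
for _ in range(30):                          # generational loop
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]               # "death rate" of 50%
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(4)]
best = max(population, key=fitness)
print(best)
```

Because the top half of each generation survives unchanged (elitism), the best fitness found never decreases from one generation to the next.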

Designing the Search Strategy

Most of the work on neural architecture search has focused on this part of the problem: finding out which optimization methods work best, and how they can be changed or tweaked to make the search process produce better results faster and with consistent stability. Several approaches have been attempted, including Bayesian optimization, reinforcement learning, neuroevolution, network morphing, and game theory.

Artificial Neural Network

An artificial neural network is designed as a feed-forward network: information passes from one layer to the next without revisiting previous layers. It is designed to identify patterns in raw data and to improve with every new input it receives. The architecture stacks three layers (input, hidden, and output), each of which weights the information passing through it. These networks are popularly known as universal function approximators, as they are capable of learning non-linear functions. Mostly used in predictive tasks such as business intelligence, text prediction, and spam email detection, they come with a few drawbacks and advantages over other algorithms.

Convolutional Neural Network

Widely used for its computer vision applications, a CNN comes with three kinds of layers: convolutional, pooling, and fully connected. Computer vision tasks such as image identification are anchored on CNNs. The complexity of the features extracted increases with each layer. CNNs analyze the input through a series of filters known as kernels: small matrices that slide over the input data and extract features from the images. As input images are processed, the kernels in each layer adjust through training. For example, to process an image, kernels pass through successive layers, identifying colors, then shapes, and eventually the overall image.
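The kernel-sliding operation at the heart of a convolutional layer can be written out directly. This is a deliberately naive sketch (real frameworks use heavily optimized implementations); the 3x3 kernel below is a classic hand-crafted vertical-edge detector, whereas a CNN would learn its kernel values from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and take
    an elementwise product-and-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])           # vertical-edge detector
print(conv2d(image, edge_kernel).shape)           # (3, 3) feature map
```

Because the same small kernel is reused at every position, a convolutional layer needs far fewer parameters than a fully connected one while still responding to patterns anywhere in the image.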

Recurrent Neural Networks

Voice recognition and natural language processing are the two linchpins of RNNs. Be it voice search with Apple's Siri, Google Translate, or Picasa's face detection technology, it is all possible because of RNN algorithms. Contrary to feed-forward networks, RNNs leverage memory. While traditional neural networks assume inputs and outputs are independent of each other, an RNN's output depends on the previous elements within the sequence. RNNs use a backpropagation technique, known as backpropagation through time (BPTT), that differs slightly from that of other networks in that it is applied to the complete sequence of data.
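The "memory" described above is simply a hidden state that is carried from one time step to the next. Here is a minimal sketch of a vanilla RNN forward pass with arbitrary random weights (a trained model would learn them, and BPTT would compute their gradients over the whole unrolled sequence):

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, W_hy):
    """Unroll a vanilla RNN: each step mixes the new input with the
    hidden state left over from the previous step."""
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for x in inputs:                       # one step per sequence element
        h = np.tanh(x @ W_xh + h @ W_hh)   # hidden state = the "memory"
        outputs.append(h @ W_hy)
    return np.array(outputs), h

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(3, 5)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(5, 5)) * 0.1   # hidden -> hidden (recurrent path)
W_hy = rng.normal(size=(5, 2)) * 0.1   # hidden -> output

seq = rng.normal(size=(7, 3))          # sequence of 7 inputs, 3 features each
outs, final_h = rnn_forward(seq, W_xh, W_hh, W_hy)
print(outs.shape)                      # (7, 2): one output per time step
```

The recurrent `W_hh` term is the key difference from a feed-forward network: the output at step t depends on every input seen so far, not just the current one.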



Analytics Insight
www.analyticsinsight.net