Neural Architecture Search: The Process of Automating Architecture


Neural Architecture Search (NAS) has become a popular topic in machine learning research.

Handcrafting neural networks to find the best-performing structure has always been a tedious and time-consuming task. Moreover, as humans we naturally gravitate towards structures that make sense from our point of view, even though the most intuitive structures are not always the most performant ones. Neural Architecture Search is a subfield of AutoML that aims to replace such manual design with something more automatic. Having neural networks design themselves would save significant time, and would let us discover novel, well-performing architectures that are better adapted to their use case than the ones we design by hand.

NAS is the process of automating architecture engineering, i.e. finding the design of a machine learning model. Given a dataset and a task (classification, regression, etc.), a NAS system comes up with an architecture, and that architecture should perform best among all candidate architectures for the given task when trained on the provided dataset. NAS can be seen as a subfield of AutoML and has significant overlap with hyperparameter optimization.

Neural architecture search is one aspect of AutoML, along with feature engineering, transfer learning, and hyperparameter optimization. It is probably the hardest machine learning problem currently under active research; even the evaluation of neural architecture search methods is hard. NAS research can also be expensive and time-consuming: the combined search and training time is often reported in GPU-days, sometimes thousands of GPU-days.

Modern deep neural networks often contain many layers of various types. Skip connections and sub-modules are also used to promote model convergence. There is effectively no limit to the space of possible model architectures. Most deep neural network structures are currently designed from human experience, requiring a long and tedious process of trial and error. NAS tries to discover effective architectures for a specific deep learning problem without human intervention.

Generally, NAS can be broken down along three dimensions: a search space, a search strategy, and a performance estimation strategy.

Search Space:

The search space determines which neural architectures can be assessed. A better search space can reduce the difficulty of finding suitable neural architectures. In general, a search space that is both constrained and flexible is needed: constraints eliminate implausible architectures and create a finite space to search, while flexibility keeps novel designs reachable. The search space contains every architecture design (often an infinite number) that the NAS approach can generate.
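As a concrete illustration, a simple chain-structured search space can be written down as a set of candidate operations per layer. Below is a minimal sketch; the operation names and size limits are illustrative assumptions, not taken from any particular NAS paper or library.

```python
import random

# Illustrative chain-structured search space: each architecture is a
# sequence of layers, and each layer picks one operation from a fixed set.
OPERATIONS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]
MAX_LAYERS = 8  # constraint that keeps the space finite

def sample_architecture(rng=random):
    """Sample a random architecture: a depth and one operation per layer."""
    depth = rng.randint(1, MAX_LAYERS)
    return [rng.choice(OPERATIONS) for _ in range(depth)]

if __name__ == "__main__":
    print(sample_architecture())  # e.g. ['conv3x3', 'identity', 'maxpool3x3']
```

Even this toy space contains 4 + 4² + … + 4⁸, i.e. tens of thousands of distinct architectures, which is why the search strategy matters.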

Performance Estimation Strategy:

It provides a number that reflects the quality of any architecture in the search space. This is usually the accuracy of a candidate architecture after it has been trained on a reference dataset for a predefined number of epochs and then tested. The performance estimation technique can also take into account factors such as the computational cost of training or inference. In any case, assessing the performance of an architecture is computationally expensive.
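A minimal sketch of such an estimator is shown below: train a candidate briefly, then report validation accuracy as its score. The MLP encoding (a list of hidden-layer widths) and the synthetic data are illustrative assumptions made only to keep the example self-contained; a real estimator would train a full network on a reference dataset.

```python
import torch
from torch import nn

def build_mlp(widths, in_dim=20, n_classes=2):
    """Build a small MLP from an architecture encoded as hidden-layer widths."""
    layers, prev = [], in_dim
    for w in widths:
        layers += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, n_classes))
    return nn.Sequential(*layers)

def estimate_performance(widths, X, y, X_val, y_val, epochs=5):
    """Cheap proxy evaluation: short training run, then validation accuracy."""
    model = build_mlp(widths)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):              # deliberately truncated training
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    with torch.no_grad():                # the score handed to the search strategy
        return (model(X_val).argmax(1) == y_val).float().mean().item()

# Toy synthetic data, only to make the sketch runnable end to end.
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
X_val, y_val = torch.randn(64, 20), torch.randint(0, 2, (64,))
print(estimate_performance([32, 16], X, y, X_val, y_val))
```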

Search Strategy:

NAS relies on a search strategy, which should identify promising architectures for performance estimation and avoid testing bad ones. In the following, we discuss numerous search strategies, including random and grid search, gradient-based strategies, evolutionary algorithms, and reinforcement learning.
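The simplest baseline is random search: repeatedly sample from the search space, estimate performance, and keep the best candidate seen so far. A self-contained sketch follows; the search space and the stand-in scoring function are toy assumptions, with the score standing in for the expensive train-and-validate step.

```python
import random

OPERATIONS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]

def sample_architecture():
    return [random.choice(OPERATIONS) for _ in range(random.randint(1, 8))]

def estimate_performance(arch):
    # Toy proxy score; in real NAS this would train and validate a model.
    return sum(op != "identity" for op in arch) / len(arch) + random.gauss(0, 0.05)

def random_search(n_trials=50):
    """Keep the best of n_trials independently sampled architectures."""
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture()
        score = estimate_performance(arch)   # the expensive step in practice
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

print(random_search())
```

Random search is a surprisingly strong baseline, but every trial is independent: nothing learned from earlier evaluations guides the next sample.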

This is why there is a need for controllers that can navigate the search space more intelligently.

Designing the Search Strategy

Most of the work that has gone into neural architecture search has been innovation on this part of the problem: finding out which optimization methods work best, and how they can be changed or tweaked to make the search process produce better results faster and with consistent stability. Several approaches have been attempted, including Bayesian optimization, reinforcement learning, neuroevolution, network morphing, and game theory. We will look at these approaches one by one.
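To make one of these concrete, neuroevolution treats architectures as individuals that are selected and mutated across generations. The sketch below uses truncation selection with a single-point mutation; the encoding, mutation rule, and toy fitness function are illustrative assumptions, not a published algorithm.

```python
import random

OPERATIONS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]

def fitness(arch):
    # Stand-in for expensive performance estimation (train + validate).
    return sum(op.startswith("conv") for op in arch) / len(arch) + random.gauss(0, 0.05)

def mutate(arch):
    """Copy a parent and resample the operation at one random position."""
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPERATIONS)
    return child

def evolve(pop_size=20, generations=30):
    population = [[random.choice(OPERATIONS) for _ in range(6)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

print(evolve())
```

Unlike random search, each generation reuses information from the last: good candidates survive and their mutated offspring concentrate the search around promising regions.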

Reinforcement Learning

Reinforcement learning has been used successfully to drive the search for better architectures. The ability to navigate the search space efficiently, saving precious computational and memory resources, is typically the major bottleneck in a NAS algorithm. Models built with the sole objective of high validation accuracy often end up highly complex, meaning a greater number of parameters, more memory required, and higher inference times.
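A minimal way to see the idea: a controller defines a categorical policy over operations at each layer position, samples an architecture, receives its validation score as the reward, and updates its parameters with a REINFORCE-style policy gradient. The sketch below is an illustrative assumption throughout (toy reward, fixed depth, moving-average baseline), not the controller of any specific paper.

```python
import numpy as np

OPERATIONS = ["conv3x3", "conv5x5", "maxpool3x3", "identity"]
NUM_LAYERS, LR = 6, 0.1
rng = np.random.default_rng(0)
logits = np.zeros((NUM_LAYERS, len(OPERATIONS)))  # controller parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(arch):
    # Stand-in for training the sampled architecture and measuring accuracy.
    return np.mean([op == "conv3x3" for op in arch]) + rng.normal(0, 0.05)

baseline = 0.0
for step in range(200):
    probs = np.array([softmax(row) for row in logits])
    choices = [rng.choice(len(OPERATIONS), p=p) for p in probs]
    arch = [OPERATIONS[c] for c in choices]
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r       # moving-average reward baseline
    for i, c in enumerate(choices):           # REINFORCE update per position
        grad = -probs[i]
        grad[c] += 1.0                        # grad of log pi w.r.t. logits
        logits[i] += LR * (r - baseline) * grad

print([OPERATIONS[int(row.argmax())] for row in logits])
```

The reward can also be shaped to penalize parameter count or latency, which is how RL-based NAS pushes back against the complexity problem described above.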
