Top 10 Neural Network Architectures Every ML Engineer Should Know

Veda

Here is a list of 10 popular neural network architectures that every ML engineer should learn.

Neural networks are a subset of machine learning and sit at the heart of deep learning algorithms. A neural network is a series of algorithms that endeavours to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates, and its architecture is made of individual units called neurons that mimic the behaviour of biological neurons. The following 10 architectures are among the most widely known.

LeNet-5: One of the earliest convolutional neural network models, proposed by Yann LeCun, with a very fundamental architecture. Machine learning engineers used it for recognizing handwritten and machine-printed characters, and banks used it to read handwritten cheques; it was trained and evaluated on the MNIST dataset of handwritten digits. The main advantage of this architecture is the saving of computation and parameters relative to an extensive fully connected multi-layer network.
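
As an illustration, here is a minimal LeNet-5-style network sketched in PyTorch. The layer sizes follow the classic paper, but PyTorch itself and the ReLU/max-pooling choices are modern substitutions for the original tanh activations and average pooling:

```python
import torch
import torch.nn as nn

# A minimal LeNet-5-style network for 32x32 grayscale inputs (e.g. padded MNIST).
class LeNet5(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Sanity check on a dummy batch.
logits = LeNet5()(torch.randn(1, 1, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```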

SqueezeNet: SqueezeNet stacks a series of fire modules and a few pooling layers; this squeeze-and-expand behaviour is common in neural architectures. It is a convolutional neural network that employs design strategies to reduce the number of parameters, notably fire modules that "squeeze" parameters using 1×1 convolutions before "expanding" the channels again.
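
A minimal sketch of a fire module in PyTorch; the channel sizes below are illustrative, borrowed from the proportions in the original paper:

```python
import torch
import torch.nn as nn

# A fire module: a 1x1 "squeeze" layer shrinks the channel count, then parallel
# 1x1 and 3x3 "expand" layers are applied and their outputs concatenated.
class FireModule(nn.Module):
    def __init__(self, in_ch: int, squeeze_ch: int, expand_ch: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.relu(self.squeeze(x))
        return torch.cat(
            [self.relu(self.expand1x1(s)), self.relu(self.expand3x3(s))], dim=1
        )

# 96 input channels squeezed down to 16, then expanded back to 64 + 64 = 128.
out = FireModule(96, 16, 64)(torch.randn(1, 96, 55, 55))
print(out.shape)  # torch.Size([1, 128, 55, 55])
```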

ENet: Designed by Adam Paszke, ENet is a semantic segmentation architecture built around a compact encoder-decoder design, making it a very lightweight and efficient network.
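
ENet's real blocks are more elaborate, but a toy encoder-decoder in PyTorch shows the overall shape: downsample to a compact representation, then upsample back to per-pixel class scores. All layer sizes here are illustrative, not ENet's actual configuration:

```python
import torch
import torch.nn as nn

# A toy encoder-decoder segmentation network in the spirit of ENet: a small
# encoder that downsamples, and a decoder that restores the input resolution.
class TinySegNet(nn.Module):
    def __init__(self, num_classes: int = 12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 1/2 resolution
            nn.Conv2d(16, 64, 3, stride=2, padding=1), nn.ReLU(),  # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 16, 2, stride=2), nn.ReLU(),    # back to 1/2
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),      # full resolution
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))  # per-pixel class scores

print(TinySegNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 12, 64, 64])
```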

Network-in-Network: It is a neural network architecture that provides higher combinational power from a simple but powerful insight. A conventional convolutional layer scans the input with linear filters followed by a nonlinear activation function; Network-in-Network replaces the linear filter with a small multilayer perceptron (equivalently, 1×1 convolutions), which enhances model discriminability for local patches within the receptive field.
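
A minimal "mlpconv" sketch in PyTorch; the 1×1 convolutions act as a tiny MLP applied at every spatial position, and the channel counts are illustrative:

```python
import torch
import torch.nn as nn

# An mlpconv block: an ordinary convolution followed by 1x1 convolutions,
# which behave like per-pixel fully connected layers.
mlpconv = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(96, 96, kernel_size=1), nn.ReLU(),  # per-position MLP layer
    nn.Conv2d(96, 96, kernel_size=1), nn.ReLU(),
)

print(mlpconv(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 96, 32, 32])
```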

Dan Ciresan Net: In 2010, Dan Claudiu Ciresan and Jürgen Schmidhuber published one of the very first implementations of GPU neural nets. The network had up to 9 layers and was implemented on an NVIDIA GTX 280 graphics processor, with both the forward and backward passes running on the GPU.

VGG: VGG stands for Visual Geometry Group, the Oxford group behind this standard deep CNN architecture with multiple layers. VGG was the first to use much smaller 3×3 filters in each convolutional layer and to combine them in sequences of convolutions; the large filters of AlexNet, such as 11×11, were not used.
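
The saving is easy to verify: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer weights and an extra nonlinearity. A quick PyTorch comparison, with 64 channels chosen for illustration:

```python
import torch.nn as nn

c = 64  # illustrative channel count

# VGG-style: two stacked 3x3 convolutions (5x5 effective receptive field).
stacked_3x3 = nn.Sequential(
    nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(),
)

# The single large filter it replaces.
single_5x5 = nn.Conv2d(c, c, kernel_size=5, padding=2)

def count(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print(count(stacked_3x3), count(single_5x5))  # 73856 vs 102464 parameters
```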

AlexNet: It won the ImageNet Large Scale Visual Recognition Challenge in 2012. Proposed by Alex Krizhevsky and his colleagues, it scaled the insights of LeNet up into a much larger neural network that could learn much more complex objects and object hierarchies.

OverFeat: It is a classic convolutional neural network architecture, employing convolution, pooling, and fully connected layers. In 2013, Yann LeCun's NYU lab proposed OverFeat as a derivative of AlexNet. The paper also proposed learning bounding boxes, and many later papers on bounding-box prediction followed from it.

Bottleneck: The bottleneck layer of Inception kept inference time low at each layer by reducing the number of operations and features. A bottleneck block has 3 convolutional layers instead of 2: 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then restoring the dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions.
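
A sketch of such a block in PyTorch; the 4× channel reduction mirrors the common ResNet convention, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

# A 1x1 -> 3x3 -> 1x1 bottleneck: reduce channels, run the expensive 3x3
# convolution in the reduced space, then restore the channel count.
def bottleneck(channels: int, reduction: int = 4) -> nn.Sequential:
    mid = channels // reduction
    return nn.Sequential(
        nn.Conv2d(channels, mid, kernel_size=1), nn.ReLU(),        # reduce
        nn.Conv2d(mid, mid, kernel_size=3, padding=1), nn.ReLU(),  # cheap 3x3
        nn.Conv2d(mid, channels, kernel_size=1),                   # restore
    )

block = bottleneck(256)
print(block(torch.randn(1, 256, 14, 14)).shape)  # torch.Size([1, 256, 14, 14])
```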

ResNet: It stands for Residual Network and was introduced by Microsoft researchers. It is a powerful backbone model used very frequently in many computer vision tasks. ResNet uses skip connections to add the output from an earlier layer to a later layer, which helps mitigate the vanishing gradient problem.
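
A minimal residual block in PyTorch; the batch-norm placement follows the common ResNet recipe, and the channel count is illustrative:

```python
import torch
import torch.nn as nn

# A residual block: the skip connection adds the block's input to its output,
# so gradients can flow past the convolutions unchanged.
class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)  # the skip connection

print(ResidualBlock(64)(torch.randn(1, 64, 28, 28)).shape)  # torch.Size([1, 64, 28, 28])
```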
