A new technique uses optics to accelerate machine-learning computations on smart speakers and other low-power connected devices.
At the heart of the approach is a novel piece of hardware called a smart transceiver, which uses silicon photonics to drastically accelerate one of the most memory-intensive stages of running a machine-learning model. This could allow an edge device, such as a smart home speaker, to perform computations more than 100 times more energy-efficiently.
Ask a smart home device for the weather forecast and it takes a few seconds to reply. One factor behind this latency is that connected devices lack the memory and processing power to store and run the massive machine-learning models needed to understand what a user is asking. Instead, the model is stored in a data center that may be hundreds of miles away, where the answer is computed and then sent back to the device.
Researchers at MIT have developed a new technique for performing computations directly on these devices, which drastically reduces this latency. Their method keeps the memory-intensive components of a machine-learning model on a central server, where the model's weights are encoded onto light waves.
The waves are transmitted to a connected device over fiber optics, which can carry enormous amounts of data across a network at very high speeds. The receiver then uses a simple optical apparatus to compute instantly with the model components carried by those light waves.
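The computation the receiver performs is essentially a multiply-accumulate: each arriving weight value is scaled by local input data, and the results are summed. The sketch below is a toy numerical illustration of that idea, not the authors' optical implementation; the variable names and the element-wise model of the modulator and photodetector are assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

incoming_weights = rng.normal(size=8)  # weight values carried by light pulses
local_inputs = rng.normal(size=8)      # data already on the edge device

# An optical modulator can scale each incoming pulse by the local input;
# a photodetector then integrates (sums) the result over time.
modulated = incoming_weights * local_inputs  # per-pulse multiplication
dot_product = modulated.sum()                # integration acts as accumulation

# Numerically identical to a conventional digital dot product
assert np.isclose(dot_product, np.dot(incoming_weights, local_inputs))
```

In the optical version, the multiplication and summation happen in analog as the light passes through the device, which is where the energy savings come from.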
This technique is more than a hundred times more energy-efficient than earlier methods. It could also improve security, since user data would not need to be sent to a central location for processing.
Using this technique, a self-driving car could make decisions in real time while consuming only a tiny fraction of the energy that power-hungry computers currently require. It could also enable latency-free conversation between a user and their smart home device, live video processing over cellular networks, or even high-speed image classification on a spacecraft millions of kilometers from Earth.
In machine learning, neural networks use layers of interconnected nodes, or neurons, to recognize patterns in data and perform tasks such as speech recognition and image classification. But these models can contain a huge number of weight parameters: numerical values that transform the input data as it passes through the network. The weights must be stored in memory, and pushing data through them requires billions of algebraic operations to be performed simultaneously, which consumes a great deal of power.
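The role of the weights can be seen in a minimal sketch of a single network layer. The sizes and values below are arbitrary; the point is that the weight matrix must be held in memory and that each layer is one large batch of multiply-add operations.

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(4, 3))  # 4 neurons, each with 3 weight parameters
b = rng.normal(size=4)       # bias terms
x = rng.normal(size=3)       # input features

# The "algebraic computations": 12 multiplications plus additions,
# repeated at every layer for every input.
z = W @ x + b

# A nonlinearity (here ReLU) produces the layer's output activations.
activations = np.maximum(z, 0)
```

A model with billions of weights repeats this step at every layer, which is why both memory capacity and arithmetic throughput become bottlenecks on small devices.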
The researchers created a neural-network architecture called Netcast, which stores weights on a central server connected to a novel piece of hardware known as a smart transceiver. This thumb-sized device, a combined data receiver and transmitter, uses silicon photonics to fetch trillions of weights from memory every second.
The weights arrive at the transceiver as electrical signals and are imprinted onto light waves. Because the weight data are encoded as bits (1s and 0s), the transceiver converts them by switching lasers on and off: a laser is turned on for a 1 and off for a 0. It combines these light waves and periodically streams them over a fiber-optic network to a client device, so the device never has to contact the server to request them.