Artificial Intelligence

A Pen Stand or a Pressure Cooker Can Be Your Ultimate AI Device Soon

Satavisa Pati

Researchers are aiming to use physical neural networks to make everyday objects compute on their own.

Imagine if the objects around you started working on their own, with zero help from you! As magical as that sounds, it is actually possible, and AI is what makes it happen. An everyday object like a pen stand or a pressure cooker could serve as the central processor in a neural network, a type of artificial intelligence that loosely mimics the brain to perform complex tasks. That's the promise of new research that, in theory, could be used to recognize images or speech faster and more efficiently than computer programs that rely on silicon microchips. "Everything can be a computer," says Logan Wright, a physicist at Cornell University who co-led the study. "We're just finding a way to make the hardware physics do what we want."

Current neural networks usually run on graphics processing chips. The largest ones perform millions or billions of calculations just to, say, make a chess move or compose a word of prose. Even on specialized chips, that can take lots of time and electricity. But Wright and his colleagues realized that physical objects also compute in a passive way, merely by responding to stimuli. Canyons, for example, add echoes to voices without the use of soundboards. To demonstrate the concept, the researchers built neural networks in three types of physical systems, each of which contained up to five processing layers. In each layer of a mechanical system, they used a speaker to vibrate a small metal plate and recorded its output with a microphone. In an optical system, they passed light through crystals. And in an analog-electronic system, they ran current through tiny circuits.
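
To make the layered setup concrete, here is a minimal Python sketch of how such a stack of physical layers could be organized. The names `physical_layer` and `pnn_forward` are hypothetical, and a toy nonlinear function stands in for the real plate, crystal, or circuit response that the experiments drive and measure in hardware.

```python
import numpy as np

# Hypothetical stand-in for one physical transformation (plate, crystal, or
# circuit): the encoded signal and the layer's control parameters are mixed
# and passed through a saturating nonlinearity, mimicking a measured response.
def physical_layer(signal, params):
    n = signal.size
    scale, offset = params[:n], params[n:2 * n]
    return np.tanh(scale * signal + offset)

# Chain up to five such layers: each layer's measured output is re-encoded
# as the input to the next physical transformation.
def pnn_forward(inputs, layer_params):
    signal = inputs
    for params in layer_params:
        signal = physical_layer(signal, params)
    return signal

# Example: a 5-layer stack acting on a 16-sample encoded input.
rng = np.random.default_rng(0)
x = rng.normal(size=16)
layer_params = [rng.normal(size=32) for _ in range(5)]
print(pnn_forward(x, layer_params))
```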

In each case, the researchers encoded input data, such as unlabeled images, in sound, light, or voltage. For each processing layer, they also encoded numerical parameters telling the physical system how to manipulate the data. To train the system, they adjusted the parameters to reduce errors between the system's predicted image labels and the actual labels. In one task, they trained the systems, which they call physical neural networks (PNNs), to recognize handwritten digits. In another, the PNNs recognized seven vowel sounds. Accuracy on these tasks ranged from 87% to 97%, they report in this week's issue of Nature. In the future, Wright says, researchers might tune a system not by digitally tweaking its input parameters, but by adjusting the physical objects themselves, say, by warping the metal plate.
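
The training loop described here can be sketched in the same toy setting. The sketch below builds on the `pnn_forward` stand-in above: it nudges every layer's parameters with a random perturbation and steps in whichever direction reduces the label error. This is only one illustrative way to "adjust parameters to reduce error", not the training procedure reported in the Nature paper.

```python
import numpy as np

def loss(layer_params, x_batch, y_batch):
    """Mean squared error between the readout (first K outputs) and one-hot labels."""
    total = 0.0
    for x, y in zip(x_batch, y_batch):
        readout = pnn_forward(x, layer_params)[: y.size]
        total += np.mean((readout - y) ** 2)
    return total / len(x_batch)

def train_step(layer_params, x_batch, y_batch, lr=0.05, eps=1e-2):
    """One perturbation-based update: probe the loss with a random sign
    perturbation of every parameter vector and step against the estimated slope."""
    deltas = [np.sign(np.random.standard_normal(p.shape)) for p in layer_params]
    plus = [p + eps * d for p, d in zip(layer_params, deltas)]
    minus = [p - eps * d for p, d in zip(layer_params, deltas)]
    slope = (loss(plus, x_batch, y_batch) - loss(minus, x_batch, y_batch)) / (2 * eps)
    return [p - lr * slope * d for p, d in zip(layer_params, deltas)]

# Example: ten training steps on a toy two-class batch.
rng = np.random.default_rng(1)
x_batch = [rng.normal(size=16) for _ in range(8)]
y_batch = [np.eye(2)[i % 2] for i in range(8)]
params = [rng.normal(size=32) for _ in range(5)]
for _ in range(10):
    params = train_step(params, x_batch, y_batch)
print(loss(params, x_batch, y_batch))
```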

The Potential of PNNs

By breaking the traditional software-hardware division, PNNs make it possible to opportunistically construct neural network hardware from virtually any controllable physical system. As anyone who has simulated the evolution of complex physical systems appreciates, physical transformations are often faster and consume less energy than their digital emulations. This suggests that PNNs, which harness these physical transformations directly, may be able to perform certain computations far more efficiently than conventional paradigms, and thus provide a route to more scalable, energy-efficient, and faster machine learning. PNNs are particularly well motivated for deep neural network (DNN)-like calculations, much more so than for digital logic or even other forms of analog computation. As expected from their robust processing of natural data, DNNs and physical processes share numerous structural features, such as hierarchy, approximate symmetries, noise, redundancy, and nonlinearity. As physical systems evolve, they perform transformations that are effectively equivalent to approximations, variants, or combinations of the mathematical operations commonly used in DNNs, such as convolutions, nonlinearities, and matrix-vector multiplications. Thus, using sequences of controlled physical transformations, researchers can realize trainable, hierarchical physical computations: deep PNNs.
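
The correspondence described here can be illustrated with a toy comparison: a conventional dense DNN layer next to a simulated "physical" system whose response approximates the same kind of matrix-vector multiplication followed by a nonlinearity. The functions and the fixed mixing matrix below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def digital_layer(x, weights, bias):
    """Conventional DNN layer: matrix-vector multiply, bias, nonlinearity."""
    return np.tanh(weights @ x + bias)

def physical_transform(x, controls):
    """Toy physical system: a fixed, device-specific mixing matrix acts on the
    control-modulated input, and the response then saturates. The controls play
    the role that trainable weights play in the digital layer."""
    n = x.size
    mixing = np.linspace(-1.0, 1.0, n * n).reshape(n, n)  # stands in for the device physics
    return np.tanh(mixing @ (controls * x))

x = np.ones(4)
print(digital_layer(x, np.eye(4), np.zeros(4)))
print(physical_transform(x, 0.5 * np.ones(4)))
```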

Lenka Zdeborová, a physicist and computer scientist at the Swiss Federal Institute of Technology Lausanne who was not involved in the work, says the study is "exciting," although she would like to see demonstrations on more difficult tasks. Wright is most excited about PNNs' potential as smart sensors that can perform computation on the fly. A microscope's optics might help detect cancerous cells before the light even hits a digital sensor, or a smartphone's microphone membrane might listen for wake words. These "are applications in which you really don't think about them as performing a machine-learning computation," he says, but instead as being "functional machines."
