The gadgets that surround our daily routines are about to get smarter. Devices such as smartphones, security cameras, and speakers will soon run artificial intelligence software, and the combination is expected to sharpen image and speech processing on the device itself. What is making this possible?
It's a compression technique called quantization, which cuts computation and energy costs by shrinking deep learning models into smaller numerical representations. But there is a trade-off: the smaller models become easier targets for cyber-attack, making it more convenient for malicious actors to tamper with an AI system and manipulate its behavior.
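The core idea of quantization is to store weights at a lower precision, typically 8-bit integers instead of 32-bit floats. A minimal sketch in NumPy, assuming a symmetric uniform int8 scheme (real frameworks offer several variants):

```python
import numpy as np

def quantize_int8(weights):
    """Uniformly quantize float32 weights to int8 (symmetric scheme)."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)       # storage shrinks to 0.25 of the original
print(np.abs(w - w_hat).max())   # rounding error is at most half a step
```

The 4x storage saving is exact; the cost is the per-weight rounding error, which is what an attacker can exploit in a quantized model.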
A new study by IBM and MIT researchers shows just how vulnerable compressed models are, and offers a solution: adding a mathematical constraint during quantization to reduce the odds of the AI falling prey to attack. Among the findings:
• Models quantized to shorter bit lengths are more likely to misclassify altered images.
• This is due to an error amplification effect.
• The small distortion in an altered image grows with every layer of processing.
• By the end of processing, the model may mistake, say, a frog for a deer.
• Models compressed to 8 bits or fewer are especially prone to adversarial attacks.
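The error amplification effect described above can be demonstrated on a toy network: a tiny input perturbation passes through a stack of layers, and the gap between the clean and perturbed activations tends to grow at each step. This is an illustrative simulation with random weights, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
# Six random linear layers acting as a stand-in for a trained network.
layers = [rng.normal(scale=0.3, size=(64, 64)) for _ in range(6)]

x = rng.normal(size=64)
x_adv = x + 0.01 * rng.normal(size=64)   # slightly perturbed input

gaps = []
a, a_adv = x, x_adv
for W in layers:
    a = np.maximum(W @ a, 0.0)           # linear layer + ReLU
    a_adv = np.maximum(W @ a_adv, 0.0)
    gaps.append(np.linalg.norm(a_adv - a))

print([round(g, 3) for g in gaps])       # the gap tends to grow layer by layer
```

Because each layer's effective gain exceeds 1, the distortion compounds; on a coarsely quantized model the rounding grid adds to this drift, which is why an altered image can end up on the wrong side of a decision boundary.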
However, controlling the Lipschitz constant during quantization can restore some of that robustness. If the attack is blunted, the quantized models can even outperform the 32-bit model.
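A Lipschitz constraint caps how much any layer can magnify its input, so a perturbation can no longer snowball. A simple way to illustrate the idea (one of several possible constructions, and a simplification of the paper's training-time constraint) is to rescale each weight matrix so its spectral norm is at most 1, making every layer non-expansive:

```python
import numpy as np

rng = np.random.default_rng(2)
layers = [rng.normal(scale=0.3, size=(64, 64)) for _ in range(6)]

# Rescale each matrix so its spectral norm (largest singular value) is <= 1.
# ReLU is 1-Lipschitz, so the whole network becomes 1-Lipschitz.
layers = [W / max(1.0, np.linalg.norm(W, 2)) for W in layers]

x = rng.normal(size=64)
delta = 0.01 * rng.normal(size=64)

a, a_adv = x, x + delta
for W in layers:
    a = np.maximum(W @ a, 0.0)
    a_adv = np.maximum(W @ a_adv, 0.0)

# The output gap can never exceed the input gap.
print(np.linalg.norm(a_adv - a) <= np.linalg.norm(delta))  # True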
Song Han, an assistant professor in MIT's Department of Electrical Engineering and Computer Science and a member of MIT's Microsystems Technology Laboratories, said, "Our technique limits error amplification and can even make compressed deep learning models more robust than full-precision models. With proper quantization, we can limit the error."
The team plans to improve the technique by training it on larger datasets and applying it to a wider range of models. "Deep learning models need to be fast and secure as they move into a world of Internet-connected devices. Our Defensive Quantization technique helps on both fronts," said study co-author Chuang Gan, a researcher at the MIT-IBM Watson AI Lab.
The researchers, including MIT graduate student Ji Lin, will present their findings at the International Conference on Learning Representations in May.
Han is also using artificial intelligence to push the limits of model quantization itself. In recent work, he and his colleagues showed that reinforcement learning can be used to automatically discover the smallest bit length for each layer of a quantized model.
Han said, "This flexible bit width approach reduces latency and energy use by as much as 200 percent compared to a fixed, 8-bit model."
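The intuition behind per-layer bit widths is that some layers tolerate coarse rounding while others do not, so a uniform 8-bit setting wastes precision. As a much-simplified stand-in for the reinforcement learning search described above, a greedy allocator can distribute a bit budget to whichever layer benefits most (the layer shapes and budget here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
weights = [rng.normal(size=(32, 32)) for _ in range(4)]  # toy per-layer weights

def quant_error(W, bits):
    """Mean squared error from uniform symmetric quantization at a bit width."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    W_hat = np.round(W / scale) * scale
    return float(np.mean((W - W_hat) ** 2))

# Greedy search: start every layer at 2 bits, then repeatedly grant one
# extra bit to the layer whose error drops the most, until the budget runs out.
bits = [2] * len(weights)
budget = 12  # extra bits to distribute across layers
for _ in range(budget):
    gains = [quant_error(W, b) - quant_error(W, b + 1)
             for W, b in zip(weights, bits)]
    bits[gains.index(max(gains))] += 1

print(bits)  # uneven, per-layer bit widths
```

A real system would score candidates by accuracy, latency, and energy on hardware rather than weight reconstruction error, which is what makes the reinforcement-learning formulation attractive.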