Improving Neural Networks to Address Face Recognition Systems' Vulnerabilities


With data being the foundation of most tasks that organizations take on, it is important for them to protect that data from attacks. This is why researchers design techniques to make machine learning models robust enough to withstand such attacks. One of the toughest challenges is that attackers can exploit a neural network in ways that go well beyond what its designers anticipated.

Because neural networks are easy to construct, train, and deploy, they appear in a wide variety of models. However, what many do not realize is that Face Recognition Systems (FRS), which rely heavily on neural networks, inherit those networks' vulnerabilities. This is why an FRS is prone to a number of attacks. The most common attacks organizations face in the FRS domain are:

  • Presentation attack – This is by far the most obvious attack: the attacker simply holds a picture or video of the target person in front of the camera. It is no surprise that attackers try every possible way to exploit the models, and one of the easiest ways to fool a Face Recognition System is to wear a face mask. Another variation is the physical perturbation attack, in which the attacker wears something specially crafted to fool the FRS, such as a pair of adversarially patterned glasses. A human operator could easily tell that a stranger is on the other side, but the neural network behind the FRS can be fooled very easily.
  • Digital attack – A long-standing concern is that face recognition systems are even more vulnerable to digital attacks. What makes digital attacks more insidious than physical ones is that an attacker with sound knowledge of neural networks and FRS can fool the network and impersonate anyone. One of the most troublesome digital attacks is the noise attack, in which the attacker's image is modified by a custom noise pattern, with each pixel value changed by at most 1%. A human cannot tell the perturbed image apart from the original, yet the neural network registers it as a completely different face, so the attacker slips past both the human operator and the FRS; a sketch of such an attack follows this list. Transformation and generative attacks are also seen regularly. In a transformation attack, the attacker rotates the face or moves the eyes in a way intended to fool the FRS. In a generative attack, the attacker uses sophisticated generative models to create a facial structure similar to the target's.
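To make the noise attack concrete, here is a minimal sketch in the style of the fast gradient sign method (FGSM), written in PyTorch. The names `model`, `image`, and `true_label` are hypothetical placeholders for an FRS classifier and its input, not part of any specific system.

```python
import torch
import torch.nn.functional as F

def fgsm_noise_attack(model, image, true_label, epsilon=0.01):
    """Shift every pixel by at most `epsilon` (1% of the [0, 1] value range)
    in the direction that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                # add a batch dimension
    loss = F.cross_entropy(logits, true_label.view(1))
    loss.backward()
    # Each pixel moves by exactly +/- epsilon: imperceptible to a human
    # operator, yet often enough to push the network toward a different
    # identity.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Stronger iterative variants (such as PGD) apply this step repeatedly, which is one reason defenses need to be evaluated against more than a single-step attack.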

What can be done to improve the robustness of neural networks and address the vulnerabilities of face recognition systems?

One of the best ways to handle this is to invest in machine learning robustness, which is where the answers to mitigating adversarial attacks lie. Incorporating adversarial examples into training is another worthwhile idea: the model becomes somewhat less accurate on the training data, but it is better equipped to detect and reject adversarial attacks once deployed. It also performs more consistently on real-world data, which is often noisy and inconsistent. A minimal sketch of such a training step follows.
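As a rough illustration of that idea, here is one possible adversarial training step in PyTorch, reusing the FGSM-style perturbation from the earlier sketch; `model`, `images`, `labels`, and `optimizer` are again assumed placeholders rather than part of any particular codebase.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, images, labels, optimizer, epsilon=0.01):
    """One optimizer step on a batch containing both clean images and their
    FGSM-perturbed counterparts."""
    # Craft perturbed copies of the current batch (same idea as the
    # fgsm_noise_attack sketch above). eval() freezes batch-norm statistics
    # while the attack is being computed.
    model.eval()
    perturbed = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(perturbed), labels).backward()
    perturbed = (perturbed + epsilon * perturbed.grad.sign()).clamp(0, 1).detach()

    # Train on clean and adversarial examples together, so the model keeps
    # its accuracy on normal inputs while learning to resist the attack.
    model.train()
    optimizer.zero_grad()
    batch = torch.cat([images, perturbed])
    targets = torch.cat([labels, labels])
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Mixing clean and perturbed examples in the same batch is one common recipe; training on perturbed examples alone tends to cost more accuracy on clean data.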

In a nutshell, combating these vulnerabilities requires applying machine learning robustness to the point where adversarial attacks can be reliably detected and prevented.
