
Data Poisoning Can Install Backdoors in Machine Learning Models

Sirisha

Data changes go unnoticed too easily for data poisoning to be considered innocuous

Machine learning is treading into new horizons with each passing day. Now that cloud computing capabilities like high performance and easy storage are within reach, companies want to accelerate their businesses, and ML-based processes are their new 'mantra'. Around 47% of organizations worldwide have implemented AI in their operations, and another 30% are experimenting with the idea. As vendors rely increasingly on ML processes, unsuspecting users come to trust the algorithms to make decisions, including critical ones. What users are not aware of is that these algorithms can be injected with malicious data, a practice called data poisoning. It is not a simple hit-and-run case of data manipulation: companies across the world are losing billions because they fall victim to data poisoning.

As online consumers, we constantly encounter recommendation systems, which all but rule our lives. Whether on online shopping platforms, social media, or entertainment services, they follow us faithfully, collecting data that is fed back into the algorithms so the cycle can repeat. That feedback loop is part of the machine learning cycle: the process by which machines learn from data to make better recommendations than before. Security experts warn that this technology can be misused by adversaries to produce undesirable results and even take a measure of control over our lives. In a typical case of social media manipulation, the manipulators skew the recommendation system using fake accounts from 'troll farms' to spread fake information. "In theory, if an adversary has knowledge about how a specific user has interacted with a system, an attack can be crafted to target that user with a recommendation such as a YouTube video, malicious app, or imposter account to follow," says Andrew Patel, a researcher with the Artificial Intelligence Centre of Excellence at a security firm.
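To make the mechanism concrete, here is a minimal, hypothetical sketch of how injected fake-account activity can tilt a naive item-to-item recommender. This is not any platform's actual algorithm; the item names, session counts, and co-occurrence scoring are invented purely for illustration.

```python
# Toy item-to-item recommender based on co-occurrence counts, and how
# injected fake-account interactions can skew its suggestions.
# Illustrative only: real recommenders are far more sophisticated.
from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(sessions):
    """Count how often two items appear in the same user's history."""
    cooc = defaultdict(Counter)
    for items in sessions:
        for a, b in combinations(set(items), 2):
            cooc[a][b] += 1
            cooc[b][a] += 1
    return cooc

def recommend(cooc, item, k=1):
    """Recommend the items most often seen together with `item`."""
    return [i for i, _ in cooc[item].most_common(k)]

# Genuine sessions: the medical book is usually bought with a textbook.
real_sessions = [["medical_book", "textbook"]] * 50

# A troll farm adds fake sessions pairing the same book with a
# misleading pamphlet, often enough to outvote real behaviour.
fake_sessions = [["medical_book", "anti_vax_pamphlet"]] * 60

print(recommend(build_cooccurrence(real_sessions), "medical_book"))
# -> ['textbook']
print(recommend(build_cooccurrence(real_sessions + fake_sessions), "medical_book"))
# -> ['anti_vax_pamphlet']
```

The point of the sketch is only that a recommender trained on raw interaction counts has no way to distinguish genuine demand from coordinated fake activity.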

What is data poisoning?

In simple terms, it is tampering with the data a machine learning model is trained on. It is considered an integrity issue because a tampered model falls behind the benchmark against which its output is measured. Unauthorized access can also leave the model vulnerable to further malicious cyber activity. For example, merely by changing minor details in the data feeding a recommendation engine, attackers can steer someone toward downloading malware or clicking an infected link. Poisoning can be achieved by compromising data integrity in the following ways (a minimal sketch after the list illustrates one of them):

  • Confidentiality – Attackers manipulate supposedly confidential data by injecting extraneous details
  • Availability – Attackers disguise data to prevent it from being classified correctly
  • Replication – Attackers reverse engineer the model in order to duplicate it, either to inject a vulnerability or to exploit it for financial gain
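The availability case is the simplest to demonstrate. Below is a hedged, self-contained sketch using scikit-learn and synthetic data; the dataset, the flip fraction, and the attack itself are assumptions chosen for illustration, and real poisoning attacks are usually far subtler.

```python
# Minimal sketch of an availability-style poisoning attack: relabeling
# part of the training data so the model learns a skewed decision
# boundary. Synthetic data only; not a real-world attack recipe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# Attacker relabels half of the class-1 training examples as class 0.
# The test set stays honest, so the damage shows up at evaluation time.
rng = np.random.default_rng(0)
class1 = np.where(y_train == 1)[0]
flipped = rng.choice(class1, size=len(class1) // 2, replace=False)
poisoned_y = y_train.copy()
poisoned_y[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
# Accuracy on clean test data typically drops, because the poisoned
# model is now biased toward predicting class 0.
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```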

Data changes go unnoticed too easily for data poisoning to be dismissed as innocuous or as having only a short-term effect. For the end user, it may not seem to matter much if product B is displayed beside product A in alignment with their choices. But there are serious cases: Amazon's recommendation algorithm has been manipulated to recommend anti-vaccination literature alongside medical publications, and in another instance it ended up pushing a notorious 4chan troll campaign through its poisoned product recommendations.

Fixing a poisoned model – an option worth forgetting:

ML models are trained over long periods, in some cases years. When a vendor learns that product B is being sold alongside his product A, he needs to go through the entire history of the algorithm; finding the data points related to the other product, and the mechanisms the fake users adopted to induce the behavior, is quite tedious. In effect, the model has to be retrained on new data, or the old data has to be cleaned, and there is no guarantee that the algorithm will not be poisoned again, particularly when it is difficult to tell fake manipulation from genuine activity. Social media platforms are flooded with heaps of fake accounts every day, and cleaning data or retraining algorithms is viable only in instances such as incitement to hate speech or online harassment. In the particular case of GPT-3, retraining the model reportedly cost OpenAI around $16 million. There seems to be no viable solution in the near future, except for developing a golden data set capable of detecting regressions, as suggested by Google researcher Bursztein.
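The golden-data-set idea can be read as a regression test for models: a small, hand-verified set of inputs whose expected behavior must hold after every retraining. The sketch below is one hypothetical way such a check might look; the item names, rules, and `passes_golden_set` helper are assumptions for illustration, not anything published by Bursztein or Google.

```python
# Hypothetical "golden data set" regression check for a recommender:
# a handful of manually verified cases that every retrained model must
# satisfy before it is allowed to ship.
GOLDEN_SET = [
    {"item": "medical_book",
     "must_include": "textbook",
     "must_exclude": "anti_vax_pamphlet"},
]

def passes_golden_set(recommend_fn, golden_set):
    """Return True only if the candidate model's recommendations
    satisfy every hand-checked rule in the golden set."""
    for case in golden_set:
        recs = recommend_fn(case["item"])
        if case["must_include"] not in recs:
            return False
        if case["must_exclude"] in recs:
            return False
    return True

# Usage sketch: gate deployment of a retrained model on these checks.
# if not passes_golden_set(new_model.recommend, GOLDEN_SET):
#     raise RuntimeError("Retrained model fails golden-set checks; do not ship.")
```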
