From the Cesspool of Cambridge Analytica, How Has Facebook Fared So Far?


While fighting misinformation is a gargantuan task, Facebook is trying its best to step up its game using artificial intelligence

By the time you come across this feature, you might have read the now-viral article titled "How Facebook got addicted to spreading misinformation" by MIT Technology Review. While it is a no-brainer that social media has become synonymous with misinformation on all manner of topics, how did Facebook get caught in its shackles? Is the modern technology of artificial intelligence to blame for this mayhem?

Remember the infamous Cambridge Analytica scandal from a few years ago? Cambridge Analytica, a British data firm, played a key role in mapping out voter behavior in the run-up to the 2016 US election. The scandal involved the Facebook data of 87 million people being harvested and used for election advertising. In October 2018, the UK's data protection watchdog fined Facebook £500,000 for its role in the scandal, a punishment that hardly fit the crime. That's not all: Cambridge Analytica has also been linked to Britain's exit from the European Union. Even though these incidents led to users deleting the app en masse and employees resigning in protest, the problem was never stopped, nor was it properly addressed.

According to the MIT Technology Review article, in late 2018 Facebook admitted that hate speech and misinformation on its platform had fueled a genocidal anti-Muslim campaign in Myanmar for several years, and in 2020 it belatedly began taking action against Holocaust deniers, anti-vaxxers, and the conspiracy movement QAnon.

With the formation of its Responsible AI team, Facebook also tried its best to counter the problem of AI bias. However, the condemnable US Capitol riots this year show that Facebook has failed yet again to tackle misinformation. But how did artificial intelligence algorithms built to analyze user behavior turn into dark and malicious agents of chaos and hatred?

The answer is clearly explained in the MIT Technology Review article, and we do not want to spoil it for those who haven't read it yet. In short, it highlights how the drive to boost Facebook's market growth and popularity, by analyzing user behavior and generating finely tuned, user-specific content, has left the social media giant walking on eggshells. The battle is far from over, and things do not look bright yet.

Since the outbreak of COVID-19, Facebook has doubled down on using artificial intelligence to detect coronavirus misinformation and hate speech. Yes, the stakes are high and misinformation and hate speech keep resurfacing, but Facebook is not quitting. As of last May, the company was working with 60 fact-checking organizations, including the Associated Press and Reuters, to review coronavirus content on the social network. In the previous month, it had put warning labels on approximately 50 million COVID-19-related posts and removed more than 2.5 million posts about the sale of masks, sanitizers, surface-disinfecting wipes, and COVID-19 test kits. Using its artificial intelligence tool SimSearchNet, Facebook identifies copies of images containing false information by matching them against a database of known misinformation images.
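SimSearchNet's internals are not public, but the core idea, matching each upload against a database of images that fact-checkers have already flagged, can be sketched with something as simple as perceptual hashing. The file names, labels, and threshold below are illustrative assumptions; Facebook's production system uses learned embeddings at billion-image scale.

```python
# Illustrative sketch only: SimSearchNet's architecture is not public.
# This shows the general idea -- matching an uploaded image against a
# database of known-misinformation images -- using perceptual hashing.
from PIL import Image
import imagehash

# Hypothetical database: perceptual hashes of images flagged by fact-checkers.
FLAGGED = [
    (imagehash.phash(Image.open("debunked_claim.png")), "covid-cure-hoax"),
]

HAMMING_THRESHOLD = 8  # small distances tolerate crops and re-encodes

def check_upload(path: str):
    """Return the matched misinformation label, or None if no match."""
    upload_hash = imagehash.phash(Image.open(path))
    for known_hash, label in FLAGGED:
        if upload_hash - known_hash <= HAMMING_THRESHOLD:  # Hamming distance
            return label
    return None

if __name__ == "__main__":
    match = check_upload("new_upload.jpg")
    print(f"flag for review: {match}" if match else "no match")
```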

Last November, Facebook launched SimSearchNet++, an improved image-matching model built on SimSearchNet. It is trained with self-supervised learning to match variations of an image with a very high degree of precision and improved recall, and it is deployed as part of Facebook's end-to-end image indexing and matching system, which runs on images uploaded to Facebook and Instagram. For images containing text, it uses optical character recognition (OCR) verification to group misinformation matches at high precision.
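The OCR-verification step isn't documented in detail, but the idea can be sketched as follows: once two images match visually, compare their extracted text so that visually similar memes carrying different claims are not grouped together. The pytesseract library and the Jaccard-overlap threshold below are stand-in choices, not Facebook's actual stack.

```python
# Sketch of OCR verification (assumed workflow; Facebook's pipeline is not
# public): a visual match is only grouped as the same piece of misinformation
# if the text overlaid on both images also agrees.
from PIL import Image
import pytesseract

def ocr_tokens(path: str) -> set:
    """Extract overlaid text from an image and normalize it into tokens."""
    text = pytesseract.image_to_string(Image.open(path))
    return set(text.lower().split())

def texts_agree(candidate: str, known: str, threshold: float = 0.8) -> bool:
    """Confirm a visual match only when token overlap (Jaccard) is high."""
    a, b = ocr_tokens(candidate), ocr_tokens(known)
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold
```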

Facebook's computer vision-based ObjectDNA helps automatically detect new variations of content that independent fact-checkers have already debunked. After detecting new variants of misinformation images, Facebook flags them for review by its fact-checking partners.
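Detecting "new variations" implies robustness to crops, borders, and added captions. ObjectDNA's internals aren't public either; keypoint matching is one classical way to get that crop-robustness, and the ORB sketch below is a rough, assumed stand-in for it, not ObjectDNA itself.

```python
# Hedged illustration of object-level matching: local keypoint features
# survive crops and overlays better than whole-image hashes, so a cropped
# variant of a debunked image can still match the original.
import cv2

def keypoint_match_score(path_a: str, path_b: str) -> float:
    """Fraction of ORB descriptors in image A with a close match in image B."""
    orb = cv2.ORB_create()
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    _, desc_a = orb.detectAndCompute(img_a, None)
    _, desc_b = orb.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    good = [m for m in matches if m.distance < 40]  # descriptor-distance cutoff
    return len(good) / len(desc_a)

# A score above some tuned threshold (0.3 here, chosen purely for
# illustration) might queue the upload for human fact-checker review.
```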

But whether it is fighting COVID-19 misinformation or preventing another Cambridge Analytica, consistent measures must be taken to keep the current artificial intelligence models and prototypes up to date. As the MIT Technology Review article revealed, the plans for propelling Facebook's growth became a hurdle in fighting misinformation; today, we can no longer afford to repeat historical mistakes that cost privacy and freedom.

Yes, deploying and maintaining artificial intelligence models is a complex responsibility, yet it is a bargain for the greater good. A recent article in The Guardian reveals that, in a study by the non-profit human rights group Avaaz, only 30% of comparable misinformation in Spanish was flagged, significantly less than the 70% of English-language misinformation on Facebook that ends up flagged with warning labels. The research points to Facebook itself as the main reason for this gap: the company doesn't dedicate enough resources to Spanish-language moderation, including a failure to hire enough Spanish-speaking workers. This instance shows that not everything can be blamed on artificial intelligence; sometimes, we ourselves are responsible for the spread of misinformation on social media channels too. Though there may be little use in placing blame now, the least we can do is ensure Facebook takes stringent action, using the same artificial intelligence technology, to prevent the wildfire of misinformation in the long run.
