Artificial Intelligence

How Facebook Addresses Terrorism and Violence Through AI and Content Moderators

Smriti Srivastava

Facebook has become a cultural, economic, and social phenomenon that touches many aspects of life, including business, communication, social connections and, most importantly, journalism. The social media site not only keeps you informed about friends and family but also gives you a window on world events. In a fast-paced world where digital journalism has become prominent, it is essential to monitor and analyze the kind and source of information reaching people.

Facebook is an especially crucial platform because its target audience spans all age groups, and the site has an unquestionable impact on that audience. Over the past decade, the company has faced repeated criticism over harmful content and posts. From cybersecurity issues to material that incites war crimes, Facebook content has made headlines time and again. Alongside ongoing privacy and security problems and users' exposure to sensitive data, the platform has become notorious for the spread of misinformation and offensive content that can deliver the wrong message to its audience. The company, however, says it is trying to identify and remove such content to the best of its ability.

It is no secret that Facebook has hired content moderators across the world and deployed AI services that can potentially detect offensive content on the site.

With the advent of new-age technologies, machine learning systems now handle most content moderation work upfront, leaving human moderators with far less content to review.
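
Facebook has not disclosed how its systems decide which posts reach a human reviewer, but the triage idea can be sketched roughly as below. The thresholds, score bands, and function name are illustrative assumptions, not the company's actual policy.

```python
# Illustrative triage sketch; not Facebook's actual thresholds or policy.
# A model scores each post for likely violations; clear-cut cases are handled
# automatically and only the uncertain middle band reaches human moderators.

AUTO_ACTION_THRESHOLD = 0.95   # assumed: model is near-certain the post violates policy
HUMAN_REVIEW_THRESHOLD = 0.50  # assumed: uncertain scores are queued for a reviewer


def route_post(violation_score: float) -> str:
    """Decide what happens to a post given a model's violation score in [0, 1]."""
    if violation_score >= AUTO_ACTION_THRESHOLD:
        return "remove_automatically"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "send_to_human_moderator"
    return "leave_up"


if __name__ == "__main__":
    for score in (0.99, 0.72, 0.10):
        print(f"score={score:.2f} -> {route_post(score)}")
```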

To understand how far the company has come in fighting the spread of such material, consider its own figures: not long ago, Facebook claimed that it identifies 98 percent of terrorist photos and videos before they reach the public.

As of now, the company is training its ML systems to detect and label dangerous objects in videos posted on the site. Facebook is leveraging neural networks that detect objects based on their behavior and features and then label each detection with a confidence percentage.

Currently, the tech giant is training these networks on a mix of new and pre-labeled videos. Reportedly, the networks can interpret the whole scene in a picture and highlight any flags that imply danger.
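
Facebook has not published the architecture of these networks, but the flagging step described above, checking each detected object's label and confidence against a list of dangerous categories, can be sketched roughly as follows. The Detection structure, label list, and threshold are illustrative assumptions; in practice the detections would come from a neural-network object detector run over sampled video frames.

```python
# Illustrative sketch of confidence-based flagging; not Facebook's model.
# In practice the detections would be produced by a neural-network object
# detector (for example, a Faster R-CNN-style model) run over video frames.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str         # object class predicted by the detector
    confidence: float  # detector confidence in [0, 1]


# Assumed label list and threshold, purely for illustration.
DANGEROUS_LABELS = {"firearm", "knife", "explosion"}
CONFIDENCE_THRESHOLD = 0.80


def flag_dangerous(detections: List[Detection]) -> List[Detection]:
    """Return the detections that should be flagged for review."""
    return [
        d for d in detections
        if d.label in DANGEROUS_LABELS and d.confidence >= CONFIDENCE_THRESHOLD
    ]


if __name__ == "__main__":
    frame_detections = [
        Detection("backpack", 0.95),
        Detection("firearm", 0.91),   # flagged
        Detection("knife", 0.42),     # below threshold, not flagged
    ]
    for d in flag_dangerous(frame_detections):
        print(f"FLAG: {d.label} ({d.confidence:.0%} confidence)")
```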

Facebook also forwards content to human moderators for review whenever suspicious behavior is predicted in images, videos, or other media. If the violation is confirmed, the team creates a hash that enables Facebook to automatically remove that content, or similar content, if it is re-uploaded later. The social media site can share these hashes with other social media platforms to flag content for removal there as well.
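
The hash-matching step can be illustrated with a short sketch. For simplicity it uses an exact SHA-256 digest from Python's standard library; production systems rely on perceptual hashes (Facebook has open-sourced algorithms such as PDQ for photos and TMK+PVQF for videos) so that edited or re-encoded copies still match. The function names and in-memory hash set are illustrative assumptions.

```python
# Illustrative sketch of hash-based re-upload blocking, heavily simplified.
# SHA-256 only matches byte-identical files; production systems use
# perceptual hashes so that altered or re-encoded copies still match.
import hashlib
from pathlib import Path

# Hashes of content already confirmed as violating; in a real deployment this
# database could also be shared with other platforms.
banned_hashes = set()


def media_hash(path: Path) -> str:
    """Compute a SHA-256 digest of a media file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def register_confirmed_violation(path: Path) -> str:
    """Called after human moderators confirm the content violates policy."""
    digest = media_hash(path)
    banned_hashes.add(digest)
    return digest


def is_known_violation(path: Path) -> bool:
    """Check a new upload against the banned-hash database before it goes live."""
    return media_hash(path) in banned_hashes
```

In this simplified form, any byte-level change to the file defeats the match, which is exactly why perceptual hashing is preferred for images and video in practice.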

However, the company is still struggling to automate the understanding of language, meaning, and nuance, which keeps Facebook dependent on its moderators to review adverse content and situations on the platform. For now, these machine-enabled systems cannot identify much of this content on their own, but the company is hopeful that they will be able to in the future.

To recall, Facebook has been criticized for its inability to curb violent content linked to war crimes in Libya. A 2019 BBC investigation reportedly uncovered evidence of alleged war crimes in Libya being shared on Facebook and YouTube, including images and videos of the bodies of fighters and civilians being desecrated and posted to the sites.

Content moderation through human moderators and AI/ML technology is a sensible step that should help Facebook handle such issues more efficiently going forward.
