Using AI to Fight Against Bad Actors in Social Media

Over the past few years, we've heard more and more about how artificial intelligence (AI) will help solve common business problems by identifying fraud, automating processes, or handling many of the repeatable tasks that we rely on humans for today. But AI is also valuable for something we may not think of: social media. How will recent advances in AI change the social media user experience, and will they change it for the better?

It's easy to understand how AI changed the social media experience for the worse; we're all familiar with the bots that played a key role in the 2016 US presidential election, posting automated material intended to polarize voters. That kind of activity has continued, with Facebook deleting 583 million fake accounts in the first three months of 2018. Facebook isn't alone in this, either: a Pew Research Center report estimates that "two-thirds of tweeted links to popular websites are posted by automated accounts–not human beings."

These networks are essentially uncontrollable and evolve very quickly; bots can be removed and replaced rapidly, so no manual effort can effectively monitor and react to everything that is posted. Instead, social media networks will have to rely on AI as a weapon against these "bad actors," which are often AI themselves.

The first and most obvious use case for AI in social media, and one already in production, is combating other AIs deployed for malicious purposes. In particular, there has been an influx of bots and other bad actors who take advantage of the power of connection that social networks rely on and turn it against the best interests of the public. In the case of the US election, bots targeted certain users based on the interests listed on their profiles, and those targeted posts spread like wildfire: from an initial 3,000 posts, the final reach grew to an estimated 126 million people. The issue was compounded by the difficulty of determining whether a profile was created by AI or by an actual person, and whether that person was using the site for legitimate purposes.
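How might a platform separate bot accounts from human ones? One common approach is supervised classification over behavioral features of the account. The sketch below is a minimal, hypothetical illustration: the features, training rows, and thresholds are invented placeholders, not any platform's actual signals.

```python
# Minimal sketch of a bot-vs-human account classifier.
# All features and training rows are hypothetical placeholders;
# real platforms combine far richer behavioral and network signals.
from sklearn.ensemble import RandomForestClassifier

# Each row: [posts_per_day, follower_following_ratio, account_age_days, fraction_reposts]
X_train = [
    [0.5,   1.20, 1400, 0.10],  # long-lived, low-volume: human-like
    [2.0,   0.80,  900, 0.25],  # human-like
    [85.0,  0.01,   12, 0.95],  # brand-new, high-volume, mostly reposts: bot-like
    [120.0, 0.02,    5, 0.99],  # bot-like
]
y_train = [0, 0, 1, 1]  # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a suspicious new account.
suspect = [[95.0, 0.03, 3, 0.97]]
print(clf.predict_proba(suspect))  # [P(human), P(bot)]
```

In practice, a score like this would more plausibly feed a review queue than trigger automatic removal, since false positives against real users are costly.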

In addition, AI can dynamically identify other kinds of inappropriate activity, such as hate speech or other content that a site's existing policies define as impermissible. By using AI to flag posts that violate those policies, social media networks can remove the content before users ever see it. Rather than simply referencing a static list of blacklisted words, this new class of AI can look at individual users, analyze how their language evolves over time, and compare that to the evolution of language on the platform as a whole. Over the long term, AI may recognize patterns of behavior that indicate malicious intent, which can ultimately lead to changes in site policy or help pinpoint bots created for illicit purposes.

However, human reviewers will likely still be relied upon to determine meaning and intent, and every reviewer brings their own biases to that judgment, especially when distinguishing between straightforwardly inappropriate content, like nudity, and content that is not inappropriate in itself but is used with what the platform deems ill intent. One such gray area is already being tested by Facebook: suicide prevention. Using AI, Facebook will scan posts for patterns linked to suicidal tendencies, then send mental health resources to the user or the user's friends. The AI will also prioritize which users are most at risk, so that their profile information can be routed to moderators or trained mental health counselors.
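To make the blacklist-versus-learned-model contrast above concrete, here is a minimal sketch in Python. The blacklist entries, example posts, and labels are invented placeholders, not real policy data, and a production system would train on vastly more text.

```python
# Contrast: static blacklist lookup vs. a learned text classifier.
# All example posts, labels, and blacklist entries are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BLACKLIST = {"badword1", "badword2"}  # hypothetical banned terms

def blacklist_flag(post: str) -> bool:
    # Static approach: misses novel phrasing, coded language, and misspellings.
    return any(word in BLACKLIST for word in post.lower().split())

# Learned approach: can be retrained as language on the platform evolves.
posts = [
    "have a great day everyone",
    "loved this concert last night",
    "you people should all disappear",  # hypothetical policy violation
    "go back where you came from",      # hypothetical policy violation
]
labels = [0, 0, 1, 1]  # 1 = violates policy

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "you people should all just disappear"
print(blacklist_flag(new_post))               # False: no blacklisted word present
print(model.predict_proba([new_post])[0][1])  # learned model's violation score
```

The learned score is only a signal; as discussed above, borderline cases would still be routed to human reviewers to judge meaning and intent.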

Beyond text and behavior recognition, AI use cases are expanding to encompass emoji and video review. Google has recently made public some of the AI technology it uses for semantic and visual tagging across its catalog of videos, and as that technology becomes more democratized, its use will become more common. Further advances in AI will allow more automated filtering of a broader class of media, reducing its exposure within a platform and greatly shortening the time it takes to resolve issues.
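As one concrete example, Google exposes this kind of video tagging as a service through its Cloud Video Intelligence API. The sketch below, assuming the google-cloud-videointelligence Python client, scans an uploaded video for explicit content; the bucket URI is a placeholder, and Google Cloud credentials must already be configured.

```python
# Sketch: flagging explicit frames in a video with Google Cloud
# Video Intelligence (pip install google-cloud-videointelligence).
# The input URI is a placeholder; credentials are configured externally.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.EXPLICIT_CONTENT_DETECTION],
        "input_uri": "gs://your-bucket/uploaded-video.mp4",  # placeholder
    }
)
result = operation.result(timeout=300)  # blocks until the analysis finishes

# Each analyzed frame carries a likelihood that it contains explicit content.
for frame in result.annotation_results[0].explicit_annotation.frames:
    print(frame.time_offset, frame.pornography_likelihood.name)
```

A moderation pipeline could, for instance, hold any video whose frames score LIKELY or VERY_LIKELY for human review before allowing wider distribution.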

For most users, an optimally deployed AI of this kind would, ideally, be invisible. In short, we would know the AI is working because of an absence of negative events, not because of any change in how the network functions or how users interact with one another. The interface of this first wave of AI will presumably look very similar to what many users have already seen, but generating user adoption, which is essential for a learning system, will require users to realize that it's also up to them to help police and define the guidelines of the communities they belong to.

No individual user ever feels they have the power to tell a platform like Facebook to change its terms and policies, but these recent, highly publicized events make it clear that something must be done. In the past, before the adoption of chatbots across commercial and social platforms, it might have been difficult to convey the message that you can make a difference for good. Now, people realize that it's up to them to make sure that the power of the social media networks they enjoy is not used against them.
