Is Artificial Intelligence Capable of Detecting and Removing Fake News?


Is it possible to detect misinformation using AI-enabled techniques based on writing style and how articles are spread on social media?

Big technology and social media companies are working hard on the automatic identification of fake news, using AI, network analysis, and natural language processing to prevent its dissemination. The main idea is to use algorithms to identify false information and assign it a lower ranking, making it easier for people to recognise. While AI can be used to spread false information, it can also be used to combat it; it is a powerful tool for detecting and removing fake news. Indeed, thanks to the deployment of many algorithms, AI has in recent years been able to successfully discern between human- and machine-generated content.
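As a rough illustration of that downranking idea, the sketch below (entirely hypothetical, not any platform's actual system) scores each article with a stand-in credibility classifier and sorts a feed so that dubious items sink rather than spread.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Article:
    title: str
    text: str
    engagement: float  # normalised 0..1 share/like signal

def rank_feed(articles: List[Article],
              credibility_model: Callable[[str], float]) -> List[Article]:
    """Order a feed so that likely-false content sinks to the bottom.

    `credibility_model` is assumed to return the probability (0..1) that an
    article is genuine; it stands in for whatever classifier a platform
    might actually train.
    """
    def score(article: Article) -> float:
        credibility = credibility_model(article.text)
        # Blend engagement with credibility so that viral but dubious items
        # are demoted rather than amplified.
        return 0.3 * article.engagement + 0.7 * credibility

    return sorted(articles, key=score, reverse=True)

# Example with a keyword-based stand-in for the real classifier.
feed = rank_feed(
    [Article("Miracle cure!", "miracle cure exposed", 0.9),
     Article("Budget report", "parliament passes annual budget", 0.2)],
    credibility_model=lambda text: 0.1 if "miracle" in text else 0.9,
)
print([a.title for a in feed])  # ['Budget report', 'Miracle cure!']
```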

Repetition and Exposure

From a psychological viewpoint, repeated coverage of a news event increases the likelihood that someone will believe it. AI can be used here to identify fabricated stories, minimise their spread, and break the cycle of reinforced information consumption habits. But how successful is AI at detecting such news? Current detection relies on assessing the dependability of the text (content) as well as its social network. Even when the sources' origins and the pattern of false news transmission are known, the primary challenge is determining how AI assesses the content's genuine character. Given sufficient training data, an AI-backed classification model should in principle be able to determine whether or not an article contains fake news. Making such distinctions, on the other hand, requires prior political, cultural, and social knowledge, as well as common sense, both of which natural language processing algorithms currently lack.
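A minimal sketch of such a classifier, assuming a labelled corpus and scikit-learn (the toy headlines and labels below are invented for illustration), shows why this approach leans on writing style rather than real-world knowledge:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy corpus standing in for a large labelled training set.
texts = [
    "Scientists publish peer-reviewed study on vaccine efficacy",
    "SHOCKING: miracle cure the government doesn't want you to see",
    "Central bank announces quarterly interest rate decision",
    "You won't believe this one weird trick doctors hate",
]
labels = [0, 1, 0, 1]  # 0 = genuine, 1 = fake/clickbait-style

# Style-based features only: word n-grams weighted by TF-IDF.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["Unbelievable secret remedy exposed by insiders"]))
```

Because the features are only word statistics, a model like this can flag clickbait-style phrasing, but it has no way of checking whether a soberly written article is actually true.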

Human-AI Partnerships

The topic of an article has a significant impact on classification: AI tends to sort articles by theme rather than examine the substance of a claim to verify its legitimacy. Articles on COVID-19, for example, are more likely than articles on other topics to be labelled as fake news. One option is to pair human reviewers with AI to verify the accuracy of data, as sketched below. The Lithuanian defence ministry developed an AI algorithm in 2018 that "flags disinformation two minutes after it is uploaded and sends those reports to human personnel for further study." A similar option in Canada could be to create a specialised government unit or agency to combat disinformation, or to fund think tanks, universities, and other third parties to study AI solutions for fake news.
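A hedged sketch of that flag-then-escalate workflow follows; the suspicion scores, threshold, and review queue are assumptions for illustration, not a description of the Lithuanian system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Report:
    post_id: str
    suspicion: float        # model's probability that the post is disinformation
    status: str = "pending"  # later set by a human reviewer

@dataclass
class ReviewQueue:
    threshold: float = 0.8
    reports: List[Report] = field(default_factory=list)

    def triage(self, post_id: str, suspicion: float) -> None:
        # Only highly suspicious posts are escalated; everything else stays
        # untouched until a human decides otherwise.
        if suspicion >= self.threshold:
            self.reports.append(Report(post_id, suspicion))

    def resolve(self, post_id: str, verdict: str) -> None:
        # A human analyst records the final decision; the model never
        # removes content on its own.
        for report in self.reports:
            if report.post_id == post_id:
                report.status = verdict

queue = ReviewQueue()
queue.triage("post-123", suspicion=0.93)  # escalated to human analysts
queue.triage("post-456", suspicion=0.40)  # ignored by the filter
queue.resolve("post-123", verdict="confirmed-disinformation")
```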

Avoiding Censorship

Controlling the dissemination of fake news could, in some cases, be seen as censorship and a threat to freedom of speech and expression. Even a human may have difficulty determining whether a piece of information is genuine. So perhaps the more important question is: who or what determines what constitutes fake news? And how can we avoid the false-positive trap, where AI filters wrongly flag legitimate material as fraudulent based on its accompanying data?

An AI system for detecting fakes could also be used for nefarious purposes. Authoritarian governments, for example, may use AI to justify removing articles or prosecuting anybody who disagrees with the authorities. As a result, any AI deployment, along with any related rules or metrics that emerge from its use, will require a transparent system overseen by a third party.

Deep fakes, for example, are anticipated to play a larger part in future information warfare; they are "extremely realistic and difficult-to-detect digital alterations of audio or video." And because of end-to-end encryption, disinformation spread via messaging apps such as WhatsApp and Signal is becoming increasingly difficult to identify and intercept.

Facebook is currently dealing with several issues, but one that isn't going away anytime soon is fake news. As the company's user base grew to cover more than a quarter of the world's population, it struggled to keep track of everything those users posted and shared. Unwanted content on Facebook can range from mild nudity to serious violence, but hoaxes and misinformation have proven the most sensitive and damaging for the company, especially when they have a political bent. So, what will Facebook do about it? The company does not appear to have a defined strategy at this time; rather, it is throwing everything at the wall and seeing what sticks. It has employed more human moderators, it is providing users with more information about news sources on the site, and Mark Zuckerberg hinted in a recent interview at further steps the company might take.

The difficulties in developing an artificial-intelligence-based automatic fake news filter are considerable. From a technological standpoint, AI falls short on several levels because it simply cannot comprehend human writing the way humans do. It can extract basic data and perform a primitive sentiment analysis, but it cannot interpret tone, take cultural context into account, or call someone to verify the information. Even if it could do all of this, which would eliminate the most obvious falsehoods and hoaxes, it would eventually run into edge cases that perplex even humans. If individuals on the left and the right cannot agree on what is and isn't "fake news," there is no way we can teach a machine to decide.
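To make concrete how shallow a "primitive sentiment analysis" is, the toy lexicon-based scorer below (word lists invented for illustration) simply counts positive and negative words, and is easily fooled by sarcasm, exactly the kind of tone a human reads effortlessly.

```python
# A deliberately primitive, lexicon-based sentiment score.
POSITIVE = {"great", "trusted", "verified", "accurate"}
NEGATIVE = {"hoax", "fake", "lie", "scam"}

def naive_sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# The sarcastic headline scores as strongly "positive" to the word counter,
# even though a human immediately hears the ironic tone.
print(naive_sentiment("Oh great, another totally trusted source"))  # 2
print(naive_sentiment("Officials debunk viral hoax"))               # -1
```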
