Researchers have found that AI is markedly better at debunking conspiracy theories than its human counterparts.
This discovery comes from a new study by Cornell University psychologist Gordon Pennycook, published on September 16, 2024, which concluded that AI-powered chatbots can lead users to change deeply held but false beliefs.
The AI can debunk conspiracy theories such as the claims that COVID-19 was an effort to control the population or that 9/11 was an inside job.
Researchers asked over 2,000 participants to engage with a large language model, GPT-4 Turbo, to test their belief in conspiracy theories. Participants described their beliefs and the evidence supporting them, and the AI responded with counter-evidence. To the researchers' surprise, participants' belief in conspiracy theories declined by about 20%.
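For readers curious about the mechanics, that kind of exchange is easy to approximate in code. The sketch below assumes the OpenAI Python SDK and the publicly available gpt-4-turbo model; the system prompt wording and single-turn flow are illustrative assumptions, not the researchers' actual study protocol.

```python
# Minimal sketch of a conspiracy-debunking dialogue with an LLM,
# loosely modeled on the study's setup. The prompt wording is an
# illustrative assumption, not the researchers' actual instructions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def debunking_reply(belief: str, supporting_evidence: str) -> str:
    """Ask the model to answer a stated belief with factual counter-evidence."""
    messages = [
        {
            "role": "system",
            "content": (
                "You are a respectful fact-checker. The user will state a belief "
                "and the evidence they find convincing. Address that evidence "
                "directly with accurate, verifiable counter-evidence, without "
                "mocking or dismissing the user."
            ),
        },
        {
            "role": "user",
            "content": f"Belief: {belief}\nWhy I believe it: {supporting_evidence}",
        },
    ]
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
        temperature=0.2,  # keep the reply focused on facts rather than creativity
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(debunking_reply(
        "COVID-19 was engineered to control the population.",
        "The virus appeared suddenly and governments imposed lockdowns everywhere.",
    ))
```

In the actual study the conversation continued over several turns, with the model tailoring each response to the participant's specific reasons for believing the theory.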
“It’s the most uplifting research I’ve ever done,” said Pennycook, the psychologist behind the Cornell University study. He added, “I thought people wouldn’t budge once they fell into the rabbit hole, but AI showed us that evidence still matters.”
A human tires after wading through large volumes of data in one sitting, whereas AI can sift through it quickly and point out logical flaws with consistent replies.
Many participants who had rated their confidence in their beliefs at 100 percent were persuaded by the AI's fact-based dialogue to lower their ratings to under 50 percent. About a quarter of the participants changed their minds entirely after a conversation with the chatbot.
"It turns out people do respond to evidence," Pennycook said. "AI just delivers it more effectively than humans could."
Elizabeth Loftus, a professor of psychology at the University of California, Irvine, said the reason for AI's success is its ability to deliver the truth without bruising a person's ego.
"It's not just about data," Loftus said. "It's about showing people how much they don't know, without challenging their ego."
Journalists also put the AI through practical tests to see whether it could debunk popular conspiracy theories.
Claims that the government is covering up alien life and rumors surrounding a supposed assassination attempt on Donald Trump were quickly dismantled with facts and reasonable counterpoints. Even the most outrageous claims, such as immigrants eating pets in Ohio, were handled with reason and clarity.
This research highlights how AI can be used to help combat misinformation spread on social media platforms.
While the experiment centered on conspiracy theories, the bigger implication is that AI may have real potential for countering misinformation across platforms. As the technology advances, AI could become a helping hand for educators, journalists, and policymakers in ensuring that facts win the day in public discourse.
Its success also reveals a major psychological insight: people will change their minds if approached the right way. We may have just found a new ally in the battle against misinformation. This news may come as a relief to companies working toward ethical AI, such as Safe Superintelligence (SSI), the new venture from OpenAI's former chief scientist Ilya Sutskever.