Facebook Making Its Own Deepfakes To Tackle the Spread of Misinformation

With the next US presidential election approaching, Facebook fears that AI-generated "deepfake" videos could become a major source of misinformation and cause severe damage.

To address the issue, the company is generating its own deepfakes to train and test detection tools. Facebook has tasked its AI research team with producing realistic fake videos featuring actors doing routine things. These videos will serve as a dataset for testing and benchmarking deepfake detection tools, and the material is expected to be released at the end of this year at a major AI conference.

Facebook's CTO Mike Schroepfer said that deepfakes are advancing rapidly, so devising a better way to flag or block potential fakes is vital. He added, "We have not seen this as a huge problem on our platforms yet, but my assumption is if you increase access—make it cheaper, easier, faster to build these things—it increases the risk that people will use this in some malicious fashion… I don't want to be in a situation where this is a massive problem and we haven't been investing massive amounts in R&D."

Facebook will spend US$10 million to fund detection technology through grants and challenge prizes. Collaborating with Microsoft, the Partnership on AI, and academics from institutions including MIT, UC Berkeley, and Oxford, the company is launching the "Deepfake Detection Challenge," which offers unspecified cash rewards for the best detection methods.

Interestingly, creating a deepfake typically requires two video clips. AI algorithms learn the appearance of each face so that one can be pasted onto the other while matching smiles, nods, and blinks. Other AI techniques can be used to re-create a specific person's voice.
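For readers curious about the mechanics, the sketch below outlines the classic face-swap setup: a shared encoder compresses faces from both clips into a common representation, and a separate decoder for each person reconstructs them, so decoding person A's frames with person B's decoder produces the swap. This is a minimal illustration assuming PyTorch and pre-aligned 64x64 face crops; the class names, sizes, and placeholder data are illustrative, not Facebook's or any particular tool's implementation.

```python
# Minimal sketch of the classic face-swap setup: one shared encoder learns a
# common latent representation of faces, and two decoders each learn to
# reconstruct one specific person. Swapping is done by encoding person A's
# frame and decoding it with person B's decoder. Sizes and data are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training objective: reconstruct each person's own (aligned, 64x64) face crops.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch of person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # placeholder batch of person B's face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The swap at inference time: encode A's frames, decode with B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Because both decoders share the same encoder, facial expressions and head poses learned from one person transfer to the other, which is why the swapped face can smile, nod, and blink in step with the source footage.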

The biggest worry in the tech world is that deepfakes could be used to spread catastrophic misinformation during the upcoming US elections, perhaps even meddling with the outcome. Several senators have raised the alarm about the threat, and Ben Sasse of Nebraska has introduced a bill to make the creation and distribution of deepfakes illegal. A recent NYU report on election misinformation identified deepfakes as one of several major challenges for the 2020 US election.

Manipulated videos spreading on social platforms have already come to light: earlier this year, a clip that appeared to show Nancy Pelosi slurring her speech spread rapidly across Facebook. Facebook refused to remove that video, or a later deepfake of Mark Zuckerberg, choosing instead to have fact-checking organizations flag the clips as fake.

After the fallout from the last presidential election, it makes sense for the company to try to get ahead of the issue; Facebook faced heavy criticism for the political misinformation campaigns that emerged on its platform.

Although it serves a worthwhile purpose, the deepfake challenge might have unintended consequences. Henry Ajder, an analyst at Deeptrace, a Dutch company that builds tools for spotting forged clips, notes that the narrative around deepfakes can give politicians a chance to dodge accountability by claiming that real footage has been manipulated. He said, "The mere idea of deepfakes is already creating a lot of problems. It's a virus in the political sphere that's infected the minds of politicians and citizens."

Moreover, Ajder doubts that deepfakes will be weaponized for political ends for some time; he believes they will more immediately become a potent tool for cyber-stalking and bullying.

A few methods for tackling deepfakes already exist. They include analyzing the data in a video file and looking for tell-tale mouth movements and blinking, which are more difficult for an algorithm to capture and re-create.
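As a rough illustration of the blinking heuristic, the snippet below computes the eye aspect ratio from six eye landmarks per frame and estimates a blink rate for a clip. It assumes the landmarks have already been extracted by a face-landmark detector; the threshold and function names are illustrative, not any production detector's code.

```python
# Sketch of a blink-based heuristic: compute the eye aspect ratio (EAR) from
# six eye landmarks per frame, count dips below a threshold as blinks, and
# flag clips whose blink rate is implausibly low. Assumes landmarks were
# already extracted by a face-landmark detector (two eye corners plus two
# points each on the upper and lower eyelids).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2) ordered corner, top, top, corner, bottom, bottom."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blink_rate(per_frame_eyes, fps: float, ear_threshold: float = 0.2) -> float:
    """Return blinks per minute given a sequence of (6, 2) eye-landmark arrays."""
    closed = [eye_aspect_ratio(eye) < ear_threshold for eye in per_frame_eyes]
    # Count closed -> open transitions as completed blinks.
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if prev and not cur)
    minutes = len(per_frame_eyes) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# People typically blink roughly 15-20 times per minute, so an unnaturally low
# rate is one (weak) signal that a clip may be synthetic.
```

Heuristics like this are fragile on their own, since newer generation methods can reproduce plausible blinking, which is why they are usually combined with other signals.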

A more recent method developed by leading researchers trains a machine-learning model to recognize the characteristic ways a specific person moves their head, something a face-swapping algorithm does not typically reproduce.
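In rough outline, that approach amounts to learning a model of one person's characteristic motion and flagging clips that deviate from it. The sketch below uses a one-class SVM over placeholder head-pose statistics to convey the idea; real systems use far richer features, and this is not the researchers' actual pipeline.

```python
# Sketch of the general idea behind the head-movement approach: fit a one-class
# model on motion features (e.g. statistics of head pitch/yaw/roll over time)
# extracted from authentic clips of a specific person, then score new clips as
# consistent or inconsistent with that person's mannerisms. The feature
# extraction step is assumed to exist; the data here is random placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Placeholder features: one row per clip, columns such as mean/variance of
# pitch, yaw, roll and their correlations, computed elsewhere.
rng = np.random.default_rng(0)
authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(40, 12))
suspect_clips = rng.normal(loc=2.0, scale=1.0, size=(5, 12))

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.1, kernel="rbf", gamma="scale"))
model.fit(authentic_clips)

# +1 means a clip's head-movement statistics look like the real person,
# -1 means they fall outside the learned behaviour and warrant scrutiny.
print(model.predict(suspect_clips))
```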
