AI-synthesized Faces: Indistinguishable and More Trustworthy

AI-synthesized faces are indistinguishable from real faces and more trustworthy.

Artificial intelligence (AI)-powered audio, image, and video synthesis, the so-called deep fakes, has democratized access to previously exclusive Hollywood-grade special-effects technology. From cloning speech in anyone's voice to generating images of entirely fictional people, swapping one person's identity for another's in a video, or altering what someone appears to say, AI synthesis can entertain, but it can also deceive.

Generative adversarial networks (GANs) are a popular mechanism for synthesizing such content. A GAN pits two neural networks, a generator and a discriminator, against each other. To synthesize an image of a fictional person, the generator starts from random noise and iteratively learns to produce a realistic face. On each iteration, the discriminator learns to tell the synthesized face apart from a corpus of real faces; whenever the synthesized face is distinguishable from the real ones, the generator is penalized and updates its parameters. Over many iterations, the generator produces increasingly realistic faces, until the discriminator can no longer distinguish them from real faces.
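The adversarial loop described above can be sketched on a toy problem. The following minimal example (an illustration only, not the study's actual method) trains an affine "generator" against a logistic "discriminator" so that generated samples match a 1-D Gaussian standing in for the corpus of real faces; real GANs use deep networks over images, but the training dynamic is the same.

```python
import math
import random

random.seed(7)
REAL_MEAN, REAL_STD = 4.0, 0.5          # the "real data": a 1-D Gaussian

def sigmoid(u):
    u = max(-30.0, min(30.0, u))        # clamp to avoid math overflow
    return 1.0 / (1.0 + math.exp(-u))

# Generator G(z) = w*z + b maps noise z ~ N(0,1) to a sample;
# discriminator D(x) = sigmoid(a*x + c) scores how "real" x looks.
w, b = 1.0, 0.0
a, c = 0.1, 0.0

for step in range(3000):
    lr = 0.05 / (1.0 + step / 500.0)    # decaying rate damps oscillation
    # --- discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    ga = gc = 0.0
    for _ in range(8):
        x_real = random.gauss(REAL_MEAN, REAL_STD)
        x_fake = w * random.gauss(0.0, 1.0) + b
        dr, df = sigmoid(a * x_real + c), sigmoid(a * x_fake + c)
        ga += (dr - 1.0) * x_real + df * x_fake   # d(loss)/da
        gc += (dr - 1.0) + df                     # d(loss)/dc
    a -= lr * ga / 8.0
    c -= lr * gc / 8.0
    # --- generator step (non-saturating loss): push D(fake) -> 1 ---
    gw = gb = 0.0
    for _ in range(8):
        z = random.gauss(0.0, 1.0)
        df = sigmoid(a * (w * z + b) + c)
        gw += (df - 1.0) * a * z                  # chain rule through G
        gb += (df - 1.0) * a
    w -= lr * gw / 8.0
    b -= lr * gb / 8.0

gen_mean = sum(w * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000.0
print(round(gen_mean, 2))   # drifts toward REAL_MEAN as training proceeds
```

The alternating updates mirror the description above: the discriminator's loss falls when it separates real from fake, and that same signal, with its sign flipped, trains the generator.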

Fear of Deep Fakes

Across three separate experiments, the researchers found that the AI-synthesized faces were rated, on average, 7.7% more trustworthy than the real faces, a difference they report as statistically significant. The three faces rated most trustworthy were fake, while the four faces rated most untrustworthy were real, according to the magazine New Scientist.
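For illustration, here is how such a comparison of mean ratings could be computed. The rating samples below are made up for this sketch (assuming a 7-point trustworthiness scale) and are not the study's data; the Welch t statistic is a standard way to test whether two group means differ.

```python
import math

def pct_more_trustworthy(mean_synth, mean_real):
    """Relative difference in mean ratings, as a percentage."""
    return 100.0 * (mean_synth - mean_real) / mean_real

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)   # sample variances
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical mean ratings for a handful of faces (illustrative only):
synthetic = [4.9, 4.7, 5.0, 4.8, 4.7, 4.9]
real      = [4.5, 4.4, 4.6, 4.5, 4.3, 4.6]

m_synth = sum(synthetic) / len(synthetic)
m_real = sum(real) / len(real)
print(round(pct_more_trustworthy(m_synth, m_real), 1))  # → 7.8
print(round(welch_t(synthetic, real), 2))               # → 5.09
```

A large t statistic (here well above 2) is what would justify calling the gap "statistically significant"; in practice one would convert it to a p-value against the t distribution.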

AI learns the faces we like

The fake faces were created using generative adversarial networks (GANs), AI programs that learn to create realistic faces through a process of trial and error. The study, "AI-synthesized faces are indistinguishable from real faces and more trustworthy," is published in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS). It urges that safeguards be put in place, which could include incorporating "robust watermarks" into synthesized images to protect the public from deep fakes. Guidelines on creating and distributing synthesized images should also incorporate "ethical guidelines for researchers, publishers, and media distributors," the researchers say.
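To make the watermarking idea concrete, here is a deliberately minimal sketch: it hides an identifier in the least-significant bits of pixel values. This toy scheme is easy to strip and is therefore not the "robust" watermarking the researchers call for (which must survive compression, cropping, and resizing), but it shows the embed/extract round trip that any watermark must support.

```python
def embed(pixels, tag):
    """Hide each bit of `tag` (bytes) in the LSB of successive pixels."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit     # overwrite the lowest bit
    return out

def extract(pixels, n_bytes):
    """Recover `n_bytes` of tag data from the pixel LSBs."""
    tag = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        tag.append(byte)
    return bytes(tag)

image = [128, 64, 200, 17, 90, 250, 33, 5] * 8   # toy 8x8 grayscale image
marked = embed(image, b"GAN1")
print(extract(marked, 4))  # → b'GAN1'
```

Because only the lowest bit of each pixel changes, the watermarked image is visually identical to the original; robust schemes instead spread the identifier across frequency-domain coefficients so it survives editing.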
