Google Photos May Use AI Image Credits to Help Users Identify Deepfakes

The feature would use meta tags to mark AI-altered content, addressing concerns over the rise of deepfakes and manipulated images

Google Photos is reportedly considering a feature that would automatically tell users whether a stored image was created or altered using artificial intelligence.

According to recent reports, the photo and video sharing service plans to embed metadata tags indicating whether an image has been generated or modified with AI, at a time when deception through deepfakes and image manipulation is becoming increasingly common.

Deepfakes, which are images, videos, or audio clips that have been doctored to deceive or spread falsehoods, are a growing problem, especially as AI tools become more advanced and readily available.

By integrating AI image credits into Google Photos, Google hopes to reduce the dangers posed by deepfakes by making it clearer where a particular image came from.

New AI Image Attribution in Google Photos

According to a report by Android Authority, the latest version of the Google Photos app, version 7.3, includes identifiers linked to AI-generated or enhanced images. While the feature has yet to go live, the app’s internal code reveals the presence of tags like “ai_info” and “digital_source_type,” which are believed to provide information about an image’s digital manipulation history.

The “ai_info” tag is expected to disclose whether an image has been created or altered by an AI tool that follows transparency protocols. Meanwhile, the “digital_source_type” tag could identify the specific AI model used, such as Google’s Gemini or third-party tools like Midjourney.
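
For readers curious how such markers might be checked, here is a minimal, hypothetical Python sketch that scans an image file's raw bytes for the identifier names reported by Android Authority. The field names, and the assumption that they would be stored in the file's embedded metadata at all, are speculative; Google has not documented the final format.

```python
# Hypothetical sketch: look for AI-attribution marker names ("ai_info",
# "digital_source_type") inside an image file's raw bytes, where embedded
# EXIF/XMP metadata would live. The names are taken from the reported app
# strings and are assumptions, not a documented Google Photos format.

import sys

AI_MARKERS = (b"ai_info", b"digital_source_type")

def find_ai_markers(path: str) -> list[str]:
    """Return any known marker names found anywhere in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [marker.decode() for marker in AI_MARKERS if marker in data]

if __name__ == "__main__":
    hits = find_ai_markers(sys.argv[1])
    if hits:
        print("Possible AI-attribution metadata found:", ", ".join(hits))
    else:
        print("No AI-attribution markers detected.")
```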

These metadata tags would not only help users recognize AI-altered images but also play a crucial role in the broader battle against misinformation, particularly with the increasing prevalence of deepfakes.

Tackling the Growing Threat of Deepfakes

Deepfakes have become an alarming tool for spreading disinformation, influencing public opinion and even tarnishing individual reputations. For instance, notable personalities like Indian actor Amitabh Bachchan have already taken legal action against the misuse of their likeness in deepfake ads. These instances of manipulation highlight the urgent need for more reliable mechanisms to detect and expose AI-generated content.

Google Photos’ potential AI image attribution feature is a huge step in that direction. By embedding transparent data into the metadata of AI-enhanced images, users will have the means to verify whether an image is authentic or has been digitally altered. This will likely reduce the spread of deepfakes and provide users with a better understanding of the media they consume.

How the Information Might Be Displayed

Although it’s still unclear how Google will present this information to users, there are a few possibilities. One option is to integrate AI attribution details into the Exchangeable Image File Format (EXIF) data embedded within the image file itself. This approach would make it harder to tamper with the information, helping preserve the credibility of the attribution. However, it might require users to access the metadata manually, which could limit how often it is used.
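
To illustrate what "accessing the metadata manually" looks like in practice, the following sketch reads an image's EXIF entries with the Pillow library and prints them by tag name. Any AI-attribution field would presumably appear alongside this standard metadata; that placement is the article's speculation, not confirmed Google Photos behavior.

```python
# Minimal illustration of reading EXIF metadata with Pillow. An AI-attribution
# field, if Google stores one here, would show up as just another entry; the
# exact tag is not yet known, so this only dumps what is present.

import sys
from PIL import Image, ExifTags

def dump_exif(path: str) -> dict[str, object]:
    """Return EXIF entries keyed by human-readable tag name."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {ExifTags.TAGS.get(tag_id, str(tag_id)): value
                for tag_id, value in exif.items()}

if __name__ == "__main__":
    for name, value in dump_exif(sys.argv[1]).items():
        print(f"{name}: {value}")
```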

Alternatively, Google Photos could follow Instagram’s lead by adding an on-image badge or label to indicate that an image has been altered by AI. Such a visual marker would make it easier for users to instantly identify AI-generated content without needing to dig through metadata pages.
