Google Photos is reportedly developing a feature that will let users check whether an image was created or enhanced using artificial intelligence (AI). According to a recent report, the photo and video platform is introducing new ID tags that will reveal AI-related details and the digital source of an image. The update aims to curb the spread of deepfakes, though it remains unclear exactly how this information will be displayed to users.
Deepfakes, which are digitally manipulated images, videos, or audio files, have increasingly been used to spread misinformation. For example, actor Amitabh Bachchan recently filed a lawsuit against a company that used deepfake videos of him to promote its products without his consent.
Google Photos version 7.3 hints at a feature that would let users identify AI-generated content in their galleries. Though not yet live, new XML code in the app's layout files includes resource tags such as "ai_info" and "digital_source_type," suggesting the addition of metadata that reveals the AI tool or model (e.g., Gemini, Midjourney) responsible for creating or enhancing an image.
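To illustrate what a "digital_source_type" tag could draw on, the sketch below uses the existing IPTC Digital Source Type vocabulary, a published standard that records whether an image was captured by a camera, composited, or algorithmically generated. The `ai_info_label` function, the regex-based parsing, and the label wording are illustrative assumptions, not Google Photos' actual implementation.

```python
# Sketch: deriving an "ai_info"-style label from image metadata.
# The term URIs below come from the real IPTC Digital Source Type
# vocabulary; everything else here is a hypothetical illustration.
import re

# Subset of the IPTC digital source type vocabulary mapped to
# hypothetical user-facing labels.
DIGITAL_SOURCE_LABELS = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/digitalCapture":
        "Captured with a camera",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia":
        "Edited with AI tools",
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia":
        "Created with AI",
}

def ai_info_label(xmp_packet: str) -> str:
    """Extract a DigitalSourceType value from an XMP fragment
    and map it to a human-readable label."""
    match = re.search(r'DigitalSourceType\s*=\s*"([^"]+)"', xmp_packet)
    if not match:
        return "No AI information available"
    return DIGITAL_SOURCE_LABELS.get(match.group(1), "Unknown source type")

# Example XMP fragment such as an AI image generator might embed.
sample_xmp = (
    '<rdf:Description Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
)
print(ai_info_label(sample_xmp))  # → Created with AI
```

In practice, a gallery app would read this kind of provenance data from an image's embedded XMP packet rather than a hard-coded string, but the lookup-and-label step would be similar.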