AI-generated images pose a growing challenge because they are often indistinguishable from real photographs. This has fueled the rise of deepfakes: realistic synthetic images used to mislead and spread misinformation.
In a blog post, Google detailed its work with the C2PA (Coalition for Content Provenance and Authenticity) on version 2.1 of the group's technical standard, known as Content Credentials. The updated version hardens the standard against tampering and imposes stricter technical requirements, and Google is applying it to images surfaced through its tools.
Content Credentials will soon appear on images in Google Images, Lens, and Circle to Search, allowing users to check the C2PA metadata in the "About this image" panel to see whether an image was created or edited with AI.
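To make the mechanics concrete: in JPEG files, C2PA Content Credentials travel as JUMBF boxes inside APP11 marker segments. The sketch below is a minimal heuristic that scans a JPEG's segments for those markers; it is not Google's tooling and not a real validator (a proper check must parse the JUMBF structure and verify the manifest's cryptographic signatures), and the function names are my own.

```python
def find_app11_segments(data: bytes):
    """Yield payloads of JPEG APP11 (0xFFEB) segments, the segment
    type C2PA uses to carry JUMBF boxes with the Content Credentials
    manifest."""
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not a marker; we've hit entropy-coded data
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB:  # APP11
            yield payload
        i += 2 + length

def looks_like_c2pa(data: bytes) -> bool:
    """Heuristic only: does any APP11 segment mention a JUMBF 'jumb'
    box or the 'c2pa' label? Spotting the bytes proves nothing about
    authenticity; signature verification is the real test."""
    return any(b"jumb" in p or b"c2pa" in p
               for p in find_app11_segments(data))
```

A real workflow would hand the file to a full C2PA implementation, which walks the manifest store and validates each claim's signature chain.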
Google also plans to integrate C2PA metadata into its ad systems, where it will inform the company's policies and enforcement strategies. It is also exploring ways to bring C2PA metadata to YouTube, helping users discern whether a video was shot on a camera or digitally produced.
In parallel, Google DeepMind has developed SynthID, an in-house watermarking technology. It embeds a marker directly into image pixels, invisible to the human eye and detectable only with specialized tools.
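SynthID's actual method is proprietary and designed to survive edits such as cropping and compression. The classic textbook illustration of pixel-level invisible watermarking, which is not what SynthID uses but conveys the core idea, is least-significant-bit (LSB) embedding: flipping the lowest bit of pixel values changes the image imperceptibly yet remains machine-readable. A minimal sketch:

```python
def embed_bits(pixels, bits):
    """Embed a bit string into the least-significant bit of each pixel
    value. Illustrative only: a fragile toy, unlike SynthID, which is
    built to survive transformations of the image."""
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        # Clear the lowest bit, then set it to the payload bit:
        # each pixel value changes by at most 1.
        out[i] = (out[i] & ~1) | bit
    return out

def extract_bits(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

The point of the toy is the asymmetry the article describes: a viewer sees no difference, while a tool that knows where to look recovers the mark exactly.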