Google is strengthening its transparency efforts by letting users know whether an image was created by hand or altered with generative AI (GAI) tools. The company has outlined how it plans to use the C2PA (Coalition for Content Provenance and Authenticity) watermarking standard across its services, including Google Search.
Since joining the C2PA's steering committee, Google has worked with companies such as Amazon, Meta, and OpenAI to improve watermarking technology for AI-generated content. It played a major role in shaping the latest version of Content Credentials, a technical standard that secures metadata recording how and when a piece of content was created or edited.
This work will soon be built into Google Search. When an image carries C2PA metadata, the "About this image" tool will indicate whether it was created or edited with GAI. The feature will also come to Google Images, Lens, and Circle to Search.
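To give a sense of how tools can detect this metadata: C2PA Content Credentials are typically embedded in JPEG files as JUMBF boxes carried in APP11 marker segments. The sketch below, which assumes that embedding scheme, simply walks a JPEG's marker segments and reports whether an APP11 segment is present; a real verifier would go much further and parse and cryptographically validate the manifest inside.

```python
def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for an APP11 (0xFFEB) segment,
    where C2PA manifests (JUMBF boxes) are typically embedded.

    This only detects the container segment; it does not parse or
    verify the C2PA manifest itself.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":  # must start with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; file is malformed or we lost sync
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI, or SOS (entropy-coded data follows)
            break
        # Segment length is big-endian and includes the two length bytes.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11
            return True
        i += 2 + length
    return False
```

For example, an image exported with Content Credentials would return `True`, while a JPEG whose metadata has been stripped (a scenario noted below) would return `False` even if it was AI-generated.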
Google is also exploring how C2PA could let YouTube viewers know when footage was captured with a camera. In addition, the company plans to incorporate C2PA metadata into its ad systems and use it to gradually enforce key policies.
Google's work is encouraging, but C2PA will only succeed if camera makers and AI tool developers adopt it. And once metadata has been stripped, GAI use may still be hard to detect.
Meta, meanwhile, has changed its rules for AI-edited images on Facebook, Instagram, and Threads: labels indicating GAI use are now less prominent. That raises questions about how committed the industry really is to transparency around AI content.
Google will roll out these changes over the next few months, giving users clearer insight into whether the images they encounter were AI-generated.