On Wednesday at Google I/O 2023, Google announced three new features designed to help people spot AI-generated fake images in search results, reports Bloomberg. The features will identify the known origins of an image, add metadata to Google-generated AI images, and label other AI-generated images in search results.
Thanks to AI image synthesis models like Midjourney and Stable Diffusion, it has become trivial to create massive quantities of photorealistic fakes, and that may fuel not only misinformation and political propaganda but also distortion of the historical record as large numbers of fake media artifacts enter circulation.
In an attempt to counteract some of these trends, the search giant will introduce new features to its image search product “in coming months,” according to Google:
Sixty-two percent of people believe they come across misinformation daily or weekly, according to a 2022 Poynter study. That’s why we continue to build easy-to-use tools and features on Google Search to help you spot misinformation online, quickly evaluate content, and better understand the context of what you’re seeing. But we also know that it’s equally important to evaluate visual content that you come across.
The first feature, “About this image,” will allow users to click three dots on an image in Google Images results, search with an image or screenshot in Google Lens, or swipe up in the Google app to discover more about an image’s history, including when the image (or similar images) was first indexed by Google, where the image may have first appeared, and where else the image has been seen online (i.e., news, social, or fact-checking sites).
Later this year, Google says it will also allow users to access this tool by right-clicking or long-pressing on an image in Chrome on desktop and mobile.
This additional context about an image can aid in determining its reliability or indicate whether it warrants further scrutiny. For instance, using the “About this image” feature, users could discover that a picture illustrating a fabricated Moon landing was flagged by news outlets as AI-generated. The feature could also place an image in historical context: Did this image exist in the search record before the impetus to fake it arose?
The second feature addresses the increasing use of AI tools in image creation. As Google begins to roll out image synthesis tools, it plans to label all images generated by its AI tools with special “markup,” or metadata, stored in each file that clearly indicates its AI origins.
And third, Google says it is also collaborating with other platforms and services to encourage them to add similar labels to their AI-generated images. Midjourney and Shutterstock have signed on to the initiative; each will embed metadata in their AI-generated images that Google Image Search will read and display to users within search results.
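Google has not published the exact format of this “markup,” so the details here are assumptions, but the general mechanism it describes—a provenance field embedded directly in the image file, which a crawler can later read back—can be sketched with a standard PNG text chunk. The chunk keyword (`ai_provenance`) and value below are hypothetical stand-ins, not Google’s or Midjourney’s actual labels:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def minimal_png() -> bytes:
    """A 1x1 grayscale PNG built from scratch, standing in for a generated image."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grayscale
    idat = zlib.compress(b"\x00\x00")  # one filter byte + one black pixel
    return (PNG_SIG + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", idat) + _chunk(b"IEND", b""))

def add_text_chunk(png: bytes, keyword: str, text: str) -> bytes:
    """Embed a tEXt chunk (keyword, NUL, value) just before the closing IEND."""
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    iend = _chunk(b"IEND", b"")
    assert png.endswith(iend), "expected a well-formed PNG ending in IEND"
    return png[:-len(iend)] + _chunk(b"tEXt", data) + iend

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk list, as an indexer might, and collect tEXt pairs."""
    out, pos = {}, len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, value = png[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# A generator would write the label; a search crawler would read it back.
labeled = add_text_chunk(minimal_png(), "ai_provenance", "generated-by-model-x")
print(read_text_chunks(labeled))  # {'ai_provenance': 'generated-by-model-x'}
```

Real-world provenance labels would more likely live in standardized metadata (IPTC fields or EXIF) rather than an ad hoc text chunk, but the read/write flow is the same: the label travels inside the file, which is also why it disappears if the file is re-encoded without it.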
These efforts may not be foolproof, since metadata can be stripped or altered after the fact, but they represent a notable high-profile attempt to confront the issue of deepfakes online.
As more images become AI-generated or AI-augmented over time, we might find that the line between “real” and “fake” begins to blur, influenced by shifting cultural norms. At that point, our decision about what information to trust as an accurate reflection of reality (regardless of how it was created) may hinge, as it always has, on our faith in the source. So even amid rapid technological evolution, a source’s credibility remains paramount. In the meantime, technological solutions like Google’s may provide assistance in helping us assess that credibility.