On Wednesday at Google I/O 2023, Google announced three new features designed to help people spot fake AI-generated images in search results, Bloomberg reports. The features will identify an image’s known origins, add metadata to AI images generated by Google, and label other AI-generated images in search results.
Thanks to AI image synthesis models like Midjourney and Stable Diffusion, it has become trivial to create massive amounts of realistic fake images, which can fuel not only misinformation and political propaganda but also erode our concept of the historical record as large amounts of fake media circulate.
In an effort to counter some of these trends, the search giant will introduce new features to its image search product “in the coming months,” according to Google:
62 percent of people believe they come across misinformation daily or weekly, according to a 2022 Poynter study. That’s why we continue to build easy-to-use tools and features on Google Search to help you spot misinformation online, quickly evaluate content, and better understand the context of what you see. But we also know that it’s equally important to evaluate the visual content you come across.
The first feature, “About this image,” will allow users to click the three dots on an image in Google Images results, search with an image or screenshot in Google Lens, or swipe up in the Google app to discover more about an image’s history, including when the image (or similar images) was first indexed by Google, where the image may have first appeared, and other places the image has been seen online (e.g., news, social, or fact-checking sites).
Later this year, Google says it will also allow users to access this tool by right-clicking or long-pressing an image in Chrome on desktop and mobile.
This additional context about an image can help determine its reliability or flag it for further scrutiny. For example, using About This Image, users might discover that a photo purporting to show a faked Moon landing has been reported by news outlets as AI-generated. The tool can also place an image in historical context: did this image appear in search results before there was a motive to fake it?
The second feature addresses the growing use of AI tools to create images. As Google rolls out its own image-generation tools, it plans to label every image generated by its AI models with special tags or metadata stored in each file that clearly indicate their AI origins.
And third, Google says it’s also collaborating with other platforms and services to encourage them to add similar labels to AI-generated images. Midjourney and Shutterstock have signed on to the initiative; each will include metadata in AI-generated images that Google Image Search will read and surface to users within search results.
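Neither Google nor its partners have published the exact labeling scheme, but the IPTC photo-metadata standard already defines a DigitalSourceType value, “trainedAlgorithmicMedia,” for AI-generated images, and it is a plausible candidate. As a rough illustration (an assumption, not Google’s confirmed method), the Python sketch below scans a file’s raw bytes for that marker; a real tool would parse the XMP/IPTC blocks properly with a dedicated metadata library such as exiftool:

```python
# Crude check for the IPTC "trainedAlgorithmicMedia" marker that the
# IPTC standard recommends for AI-generated images. Scanning raw bytes
# is only a rough illustration; a real tool would parse XMP/IPTC properly.
import sys
from pathlib import Path

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType value

def looks_ai_labeled(path: str) -> bool:
    """Return True if the file embeds the IPTC AI-source marker."""
    return AI_MARKER in Path(path).read_bytes()

if __name__ == "__main__":
    for image in sys.argv[1:]:
        status = "AI-labeled" if looks_ai_labeled(image) else "no AI label found"
        print(f"{image}: {status}")
```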
These efforts may not be perfect because metadata can later be removed or altered, but they represent a notable attempt to tackle the problem of deepfakes on the Internet.
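That fragility is easy to demonstrate: many common image pipelines discard embedded metadata by default when re-encoding a file. A minimal sketch using the Pillow library (file names are hypothetical):

```python
# Re-encoding an image with Pillow drops XMP/IPTC metadata blocks
# unless they are explicitly copied over, so an AI-provenance label
# embedded this way disappears after a simple round trip.
from PIL import Image

im = Image.open("ai_labeled.jpg")   # hypothetical AI-labeled image
im.save("stripped.jpg", "JPEG")     # metadata is not carried across by default
```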
As more images are created by AI or enhanced by it over time, we may find that the line between “real” and “fake” begins to blur, shaped by changing cultural norms. At that point, deciding what information to trust as an accurate reflection of reality (regardless of how it was generated) may depend, as it always has, on the credibility of the source. So even amid rapid technological development, source credibility remains of utmost importance. In the meantime, technological solutions such as Google’s may help us assess that credibility.