In partnership with Google Cloud, Google DeepMind, the AI research division of Google, has released a tool for watermarking and identifying images generated by artificial intelligence, but only images produced by Google's own image-generating model.
The tool, called SynthID and available in beta to users of Vertex AI, Google's platform for building AI apps and models, embeds a digital watermark directly into an image's pixels, making it imperceptible to the human eye, ostensibly, but detectable by an algorithm. SynthID currently supports only Imagen, Google's text-to-image model, which is available exclusively in Vertex AI.
Google previously said it would embed metadata to flag visual media created by generative AI models. SynthID obviously goes a step further than that.
“Although generative AI can unlock enormous creative potential, it also presents new risks, such as enabling creators to spread false information — whether intentionally or unintentionally,” DeepMind wrote in a blog post. “The ability to identify AI-generated content is critical to enabling people to know when they are interacting with generated media, and to help prevent the spread of misinformation.”
DeepMind claims that SynthID, which was developed in partnership with Google Research, Google's R&D division, remains in place even after modifications such as adding filters, changing colors and heavily compressing an image. DeepMind says the tool relies on two AI models, one for watermarking and one for identification, that were trained together on a "diverse" set of images.
Image Credits: DeepMind
SynthID can't identify watermarked images with 100% confidence. Instead, the tool distinguishes between cases where an image might contain a watermark and cases where it's very likely to contain one.
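DeepMind hasn't published the detector's internals, but a tiered output like this can be pictured as simple thresholding on a detection score. The sketch below is purely illustrative; the function name, the score scale and the cutoff values are invented assumptions, not SynthID's actual API:

```python
def classify_watermark(score: float) -> str:
    """Map a hypothetical detector score in [0, 1] to one of three
    human-readable confidence tiers. The thresholds 0.4 and 0.8 are
    invented for this sketch and are not SynthID's real values."""
    if score >= 0.8:
        return "very likely watermarked"
    if score >= 0.4:
        return "possibly watermarked"
    return "unlikely to be watermarked"
```

The point of such tiering is to avoid presenting a binary verdict the underlying model can't actually guarantee, especially after an image has been filtered, recolored or compressed.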
“SynthID is not foolproof against severe image manipulation, but it does provide a promising technology approach for enabling people and organizations to work with AI-generated content responsibly,” DeepMind wrote in the blog post. “This tool can also evolve alongside other AI models and methods that go beyond images such as audio, video, and text.”
Watermarking techniques for generative art aren't new. French startup Imatag, launched in 2020, offers a watermarking tool that it claims, similar to SynthID, is unaffected by resizing, cropping, editing or compressing images. Another firm, Steg.AI, uses an AI model to apply watermarks that survive resizing and other edits.
But pressure is mounting on technology companies to provide a way to show that works have been generated by AI.
Recently, China's Cyberspace Administration issued regulations requiring generative AI vendors to mark generated content, including from text and image generators, without affecting user usage. And at recent U.S. Senate committee hearings, Sen. Kyrsten Sinema (I-AZ) emphasized the need for transparency in generative AI, including through the use of watermarks.
Back in May, at its annual Build conference, Microsoft committed to watermarking AI-generated images and videos "using cryptographic methods." Elsewhere, Shutterstock and generative AI startup Midjourney have adopted guidelines to embed a marker indicating that content was created by a generative AI tool. And OpenAI's DALL-E 2, a text-to-image tool, inserts a small watermark in the bottom-right corner of the images it generates.
But so far, a common standard for watermarks — whether for creating or detecting watermarks — has proven elusive.
SynthID, like the other technologies that have been proposed, won't be of much use to creators using an image generator other than Imagen, at least not in its current form. DeepMind says it's considering making SynthID available to third parties in the near future. But whether those third parties, particularly developers of open source AI image generators, which lack many of the safeguards of generators gated behind an API, will embrace the technology is another matter entirely.