AI image generators have been available for several years and are increasingly being used to create “deepfakes,” fake images that mimic the real thing. In March, a fake AI image of former President Donald Trump fleeing police spread on the Internet, and in May, a fake image showing an explosion at the Pentagon caused a brief stock market dip. Companies have put visible logos on AI images or attached text “metadata” noting an image’s provenance, but both techniques can be cropped or edited out relatively easily.
“Obviously the genie is already out of the bottle,” said Rep. Yvette D. Clarke of New York, who has pushed for legislation requiring companies to watermark AI images. “We haven’t seen it maximized in terms of weaponization.”
For now, Google’s tool is available only to select paying customers of its cloud computing business, and it works only with images created with Imagen, Google’s image-generation tool. The company says the tool is still experimental and has not asked customers to use it.
Pushmeet Kohli, vice president of research at Google DeepMind, the company’s AI lab, said the ultimate goal is to help build systems in which most AI-generated images can be easily identified through embedded watermarks, but he cautioned that the new tool is not foolproof. “The question is whether we have the technology to get there,” he said.
As AI makes it easier to create realistic images and videos, politicians, researchers and journalists worry that the line between truth and falsehood online will erode further, deepening existing political divisions and making it harder for factual information to spread. Deepfake technology is advancing even as social media companies pull back from policing misinformation on their platforms.
Watermarking is one idea tech companies are rallying around as a possible way to mitigate the negative effects of the “generative” AI technologies they are rapidly pushing out to millions of people. In July, the White House hosted a meeting with leaders of seven of the most powerful AI companies, including Google and ChatGPT maker OpenAI. The companies promised to develop tools to detect and watermark AI-generated text, videos and images.
Microsoft said it launched a coalition of technology and media companies to develop a common standard for watermarking AI images and is researching new ways to track AI images. The company also places small visible watermarks in the corners of images generated by AI tools. OpenAI, whose Dall-E image generator helped spark a wave of interest in AI last year, is also adding a visible watermark. AI researchers have proposed a way to embed digital watermarks that are invisible to the human eye but identifiable by computers.
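The invisible watermarks researchers describe can be illustrated with the simplest and most fragile variant: hiding bits in the least significant bit of each pixel value. This sketch is purely illustrative — it is not Google’s undisclosed method, and the function names are invented for the example — but it shows why a computer can read a mark the human eye cannot see, and why crude schemes break so easily.

```python
# Illustrative least-significant-bit (LSB) watermarking.
# NOT any vendor's actual technique; names are hypothetical.

def embed_watermark(pixels, bits):
    """Hide a bit string in the lowest bit of each 0-255 pixel value.

    Changing a pixel from, say, 200 to 201 is invisible to the eye,
    but the bit is trivially recoverable by software.
    """
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked


def extract_watermark(pixels, length):
    """Read the hidden bits back out of the first `length` pixels."""
    return [p & 1 for p in pixels[:length]]


if __name__ == "__main__":
    image = [200, 37, 118, 64, 91, 150, 12, 255]  # toy grayscale pixels
    mark = [1, 0, 1, 1]
    stamped = embed_watermark(image, mark)
    print(extract_watermark(stamped, 4))

    # Even mild editing, such as rounding every pixel to an even value,
    # wipes out the lowest bits and destroys this naive watermark.
    edited = [p // 2 * 2 for p in stamped]
    print(extract_watermark(edited, 4))
```

The second print shows the mark vanishing after a trivial edit, which is the weakness Google says its approach is designed to overcome.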
Kohli said Google’s new tool works even after an image has been altered significantly, a major improvement over earlier methods that could be easily defeated by editing or even flipping an image.
“There are other techniques for embedding watermarks, but we don’t think they are very reliable,” he said.
Even if other big AI companies such as Microsoft and OpenAI developed similar tools and social media networks implemented them, images created with open-source AI generators would remain undetectable. Tools that anyone can modify and use, such as the open-source tools developed by AI start-up Stability AI, are already being used to create nonconsensual sexual images of real people and new child sexual exploitation material.
“Deepfakes have increased significantly in the last nine months to a year,” said Dan Purcell, founder of Ceartas, a company that helps online content creators find out whether their content has been re-shared without permission. The company’s primary customers so far have been adult-content makers trying to stop the illegal sharing of their videos and images. More recently, however, Purcell has received requests from people whose social media images were used against their will to create AI-generated pornography.
As the 2024 U.S. presidential election approaches, pressure is mounting to develop tools to identify and stop fake AI images. Politicians are already using AI in campaign ads. In June, Florida Gov. Ron DeSantis’s campaign released a video containing fake images of former President Donald Trump hugging former White House coronavirus adviser Anthony S. Fauci.
U.S. elections have always featured propaganda, lies and exaggerations in official campaign ads, but researchers, democracy activists and some politicians are concerned that the combination of AI-generated imagery, targeted advertising and social media networks will spread false information and mislead voters.
“It could be as simple as releasing a visual depiction of a key polling place that has been shut down,” said Clarke, a Democrat. “It could depict some kind of violent situation, causing fear and panic among the public.”
AI could also be used by foreign governments that have already shown themselves willing to use social media and other technology to interfere in U.S. elections, she said. “As the political season gets into full swing and things heat up, we could easily see interference from our international adversaries.”
A closer look at a Dall-E or Imagen image usually reveals some inconsistency or odd feature, such as a person with too many fingers or a background that blurs into the subject of the photo. But Dor Leitman, head of product and research and development at Connatix, which builds tools that help marketers use AI to edit and generate videos, said fake-image generators will “absolutely” keep getting better and better.
Leitman said the dynamic will resemble the never-ending arms race between cybersecurity companies and hackers trying to break through newer and better protections. “It’s an ongoing battle,” he said.
Those who try to trick people with fake images will also keep finding ways to fool deepfake-detection tools. Kohli said that is why Google is not sharing the underlying research behind its watermarking technology. “If people knew how we did it, they would try to attack it,” he said.