Pope Francis in a white puffer jacket.
Mahatma Gandhi grinning for a selfie.
A zoomed-out Mona Lisa revealing a bewildered subject posed against wispy clouds and jagged rocks.
All of these viral images are fake, and they have inspired awe, amusement, and ridicule over the past year as generative artificial intelligence has soared in popularity. The rise of AI-generated images is also sparking debate among policymakers about how best to protect people from fraud and deception.
In late October, US President Joe Biden signed an executive order addressing the challenges posed by AI: it requires developers of powerful AI models to share information with the government, establishes an AI-focused cybersecurity program, and directs the Department of Commerce to develop watermarking guidance to ensure that AI-generated content is clearly labeled.
Biden also met over the summer with seven major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) and secured voluntary commitments to develop watermarking systems: mechanisms that add invisible (but detectable) marks to photos and videos, identifying them as AI-generated.
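The article does not describe how these companies' watermarks actually work. As a purely illustrative sketch of the underlying idea of an "invisible but detectable" mark, the toy example below hides a short tag in the least-significant bits of image pixel values (a classic LSB steganography scheme, chosen here as an assumption for illustration; production provenance watermarks are far more robust):

```python
# Toy illustration of an invisible-but-detectable watermark via
# least-significant-bit (LSB) embedding. Hypothetical example only;
# it does NOT represent any company's actual watermarking scheme.

def embed(pixels, tag):
    """Hide each bit of `tag` in the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite only the lowest bit
    return out

def detect(pixels, length):
    """Read `length` bytes back out of the pixels' LSBs."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8))
        for i in range(length)
    )

image = [128, 64, 200, 37, 90, 255, 3, 18] * 10  # fake grayscale pixels
marked = embed(image, b"AI")
print(detect(marked, 2))  # the hidden tag is recovered: b'AI'
```

Because each pixel value changes by at most 1, the mark is imperceptible to the eye yet trivial for software to detect, which is the basic property a disclosure watermark needs (real systems must additionally survive cropping, compression, and editing).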
Now, state Sen. Josh Becker, a Menlo Park Democrat, is preparing to take on the challenge at the state level. Becker plans to introduce a bill that would require generative AI companies to watermark images, videos, and audio created by their models. The companies would also be required to provide a platform where consumers can check whether a piece of content was created by them.
Becker said in a Jan. 12 statement that the bill aims to “advocate transparency and empower consumers.”
“Artificial intelligence has become an integral part of our daily lives and is influencing the products we use,” Becker said. “It is important that consumers have the right to know whether a product has been generated by AI.”
For Becker, the new AI bill is his second targeting technological deception and opacity. Last year, he authored the Delete Act, which established a system for consumers to opt out of having their information collected by data brokers. Earlier this month, state Sen. Steve Padilla, D-Chula Vista, introduced a pair of bills that would regulate AI with privacy and safety standards and establish an AI research hub to support universities.
Becker said in a statement that conversations with experts revealed that "the ability to distribute high-quality content created by generative AI raises concerns about the potential for abuse."
The introduction of what Becker calls the California Artificial Intelligence Transparency Act "represents an important step toward establishing clear guidelines for AI-generated products and sets a precedent for other states and jurisdictions to follow."
“AI-generated images, audio and video can be used to spread political misinformation and create deepfakes,” Becker said. “My bill aims to promote provenance, transparency, and accountability, and empower individuals to make choices that align with their values.”