Election Day this year went largely without a hitch as voters in 10 states went to the polls to decide ballot measures and elect new governors, state representatives, mayors and city council members.
But as voters across the country prepare to cast ballots next year in presidential, congressional and a wide range of state and local races, the relative calm of 2023 appears poised to give way to major technological threats.
Already, state and local officials are getting a preview of how cyberattacks and artificial intelligence could affect next November’s election. In last year’s midterm elections, for example, Mississippi’s secretary of state website crashed under what officials at the time called an “abnormally large increase in traffic” driven by denial-of-service activity.
Meanwhile, the 2024 presidential election is already seeing an influx of AI-generated content, including from the Republican National Committee, which in April released an AI-generated ad reacting to President Joe Biden’s re-election announcement.
Although Mississippi’s election system was not compromised, the denial-of-service attack offers a glimpse of what may lie ahead. And the Brennan Center for Justice has already warned that AI is putting elections at risk, suggesting the technology “could facilitate the spread of disinformation and pose other dangers to democracy.”
State and local officials preparing for next year’s elections are concerned about the threat of new types of misinformation and cyberattacks.
Potential rise in deepfakes and other misinformation
The RNC ad is one of the most high-profile examples so far of how AI could be used in the 2024 election. It was followed over the summer by an ad supporting Florida Gov. Ron DeSantis that used an AI-generated version of former President Donald Trump’s voice. With similar deepfakes likely to follow, lawmakers and advocacy groups have said such ads should include disclosures making clear when AI was used in whole or in part.
U.S. Rep. Yvette Clarke, a New York Democrat, introduced a bill in May to regulate AI in political advertising, and later that month Democratic Sens. Amy Klobuchar of Minnesota, Cory Booker of New Jersey and Michael Bennet of Colorado followed suit in the Senate.
A few months later, following a petition from the advocacy group Public Citizen, the Federal Election Commission agreed to consider whether AI-generated deepfake ads impersonating political opponents could be regulated.
The White House also appears to be weighing the issue. Biden’s recent executive order on AI said the government would “help develop effective labeling and content provenance mechanisms so Americans can determine when content is and is not generated using AI.”
Some states have taken similar measures. Michigan’s state Senate passed a package of bills designed, among other things, to require disclaimers on political ads that include AI-generated audio, video or images.
But despite these efforts, some are skeptical they will be enough to thwart malicious actors.
With social media platforms devoting fewer resources to fighting misinformation and governments more reluctant to intervene for fear of running afoul of the First Amendment, Samir Jain, vice president of policy at the Center for Democracy and Technology, warned in a recent webinar of a “very difficult information environment.”
“The fact that deepfakes and generative AI content can exist also has the indirect effect of undermining trust in authentic content, because it becomes difficult for voters and others to tell what is real and what is not,” Jain said. “That makes the whole information environment a little more difficult to navigate.”
The public is also concerned. A recent survey by the Institute of Governmental Studies at the University of California, Berkeley, found that 84% of Californians are worried about disinformation, deepfakes and AI. More than 70% said state government has a responsibility to act to protect voters, and 87% said tech companies and social media platforms should be required to label deepfakes and other AI-generated content.
Campbell Cowie, head of policy, standards and regulation at software company iProov, said educating the public about the threat of AI is only part of the answer.
“An important part of this story is to encourage media consumers to be literate about the risks that are out there,” Cowie said. “But I don’t think it’s fair to put the burden on consumer education. I think it’s solution providers and social media companies [that must do more].”
Cowie said intelligence and insights about malicious actors’ deepfakes and AI-generated content need to be shared more effectively. That will require “true roundtable engagement” between all parties, as well as prioritizing the threats with the potential to cause the most damage. Such sharing could help head off what Jain describes as a “nightmare” scenario.
“The nightmare is that a few days before the November election, some kind of deepfake video circulates showing one of the candidates in a compromising position or saying something very nasty, and there just isn’t enough time to counter it,” Jain said. “So if a lot of voters go to the polls believing that deepfake and the election is close, it could theoretically even sway the election.”
State and local authorities will also play an important role in the fight against deepfakes and misinformation.
Eric Goldstein, executive assistant director of the Cybersecurity and Infrastructure Security Agency, said at an event that state and local officials are often “the best and most effective advocates for providing relevant and accurate information to voters.” He added: “Our goal is to ensure that our state and local partners are advocates for their communities and are properly informed.”
Cybersecurity remains a hot topic for election officials
In addition to concerns about AI-generated deepfakes, cybersecurity has become a top priority for state and local officials as they prepare for next year. Their concerns are shared by the federal government: the Department of Homeland Security’s Homeland Threat Assessment predicts the 2024 election cycle will be a “watershed moment” as cybercriminals seek to exploit the networks and data used by political parties and election officials.
Even as CISA cyber grants continue to trickle down, funding cybersecurity protections remains a major challenge for state and local governments, reflecting a continuing trend of underfunding for state and local election administration.
At a U.S. Senate Rules Committee hearing earlier this month, Arizona Secretary of State Adrian Fontes and Rutherford County, Tennessee, Elections Administrator Alan Farley both called for increased federal funding to protect voting infrastructure.
Fontes said that while the funds distributed under the Help America Vote Act are helpful, they are “intermittent and woefully inadequate to provide the predictable and sustained support that local governments need.” He added that he was “concerned” that the law appears to receive no funding in the next federal budget.
In addition to asking for more funding, observers say state and local election officials need to ensure their employees practice proper cyber hygiene, including regular training on how to avoid phishing emails, which remain the biggest vulnerability. Gary Barlet, former chief information officer for the U.S. Postal Service’s Office of Inspector General and now federal chief technology officer at security firm Illumio, said offices would be well served by running such drills more often, especially as the big day approaches.
“They should step up their efforts as we get closer to the election,” Barlet said. “The closer we get to the election, the more likely it is that people will be targeted.”
While some elected officials have highlighted the potential for voting machines to be hacked, Barlet said the security of the networks, servers and tabulation methods behind the vote is just as concerning. In some cases, tabulation processes are low-tech and can be exploited, especially if someone inadvertently grants access to malicious actors.