Technology in elections was a key issue in the 2020 presidential election, and with the recent explosion of advances in artificial intelligence, it is likely to once again permeate the political landscape in this year’s election.
For Dr. Milos Manic, director of Virginia Commonwealth University's Cybersecurity Center, the impact of AI on undecided voters is of particular concern.
“Will some of those undecided voters be targeted in some way? Will they be given carefully selected information and specifically misinformed information?” he asked.
Manic, a professor in the Computer Science Department of the VCU College of Engineering and a researcher with the Commonwealth Cyber Initiative, has received more than 40 research grants in the areas of data mining and machine learning applied to cybersecurity, infrastructure protection, energy security and resilient intelligent control. He received the FBI Director's Community Leadership Award for advances in AI to protect national infrastructure from cyberattacks.
VCU News asked Manic for his thoughts on the intersection of AI, elections and society at large.
What are the main vulnerabilities we face in terms of election integrity?
Social media can very quickly and easily give you the illusion of group consensus, that people are on the same page, but 90% of the time that’s not true. The human psyche, human psychology, human fragility, that’s where the battle is now.
Psychological warfare has been around for hundreds of years. In a world war, someone might have been given a pamphlet and read it to an entire village. Maybe pamphlets were dropped from airplanes. Now it’s psychological warfare using technology. Humans have what we call emotional triggers, and we respond to them. Humans have cognitive biases.
Is there a way, however imperfect, to quickly address such shortcomings?
Let’s look at four areas. The first is policy. The second is safe and secure AI. The third is user awareness. And the fourth is cross-border alliances.
On the policy front, the world at large is late to the AI game, and many organizations – government, non-government, non-profit – are currently trying to address policy, governance, regulatory frameworks, and so on.
With today’s resources, algorithms and knowledge, there is very little we can’t do given enough time. The question is, are we developing this for the right reasons and in the right way?
Six or seven years ago, we looked at different algorithms to solve cybersecurity problems and showed that for almost every type of problem, you can find at least three algorithms that solve it correctly. The question is, will that solution work in production? Are we looking at a transformation that will be as important in the future as it is today?
What about the other three areas?
A key question to enabling safe and secure AI is: Can we focus on developers who are ethical, unbiased, and trustworthy?
The next step is user, or public, awareness and understanding. What are we posting? What are we sharing? What are we exposing to others? And how do we know who we are exposing it to? Facebook users might say, “I know who I’m sharing with.” No, they don’t. They think they do. But how do they know that their friends haven’t been compromised and become someone else?
And the next step is to go beyond borders. In the cyber world, there is an intelligence alliance called the Five Eyes [Australia, Canada, New Zealand, the United Kingdom and the United States]. Enemies are now based in computers and networks across continents, so without an alliance to communicate with, it becomes much easier for bad actors to infiltrate.
Does your work tie closely to election integrity or political disinformation?
The key here is real-time AI fraud detection. Given enough resources, given enough computer resources, AI can be very powerful. Given enough time and data, there’s very little it can’t do. But the question is, can you do it in real time?
VCU is a leader in the Commonwealth Center for Advanced Computing (CCAC), a statewide effort, and steward of this resource for our state. The centerpiece of the center is a one-of-a-kind IBM z16, a machine with on-chip AI designed specifically for real-time fraud detection. Today, there are many great machines in industry and academia, and many very powerful algorithms. But the question is, can you run them in real time?
As AI becomes more capable of producing convincing text and realistic images, how do you assess its potential to threaten (or protect) our political and electoral processes?
Again, the key is whether you can use AI to accomplish anything in real time, because these attacks, scams, and disinformation happen in real time if you let them. Responding a week later is almost the same as not responding at all, because people’s opinions have already been influenced. It’s too late to change their minds.
What worries me is that these machines could become smart enough to tailor their human-like responses to the user — the same information could be presented differently to me and to my daughter, for example, because her perception of reality is so different from mine. So the ability for machines to learn and adjust in real time is what scares me.
We need to focus on the human mind, human weaknesses. What are we weak at? It’s not just engineering or computer science anymore. It’s human factors, psychology experts, etc.
Outside of the election, what else is keeping you up at night related to AI at the moment, and what helps you sleep better?
It’s a moving target. The problem is that the moment bad actors find something we can detect, they adjust [to make deepfakes] even better.
Some say we need to develop better tools. Others say, who’s going to guard the guard? Who’s going to ensure that the screening tools aren’t compromised? But this is an age-old cyber thing. It’s a cat-and-mouse game. So I’m sure it will continue like this. And I’m also sure it keeps the good actors up at night.
Real-time detection will be key for the next few years. We need to develop faster solutions that can detect deepfakes in nanoseconds, with a vetting process built into the tools you are already using. If you use a browser like Chrome, Firefox or Safari, there should be tools that flag deepfakes in real time in some way.
Humans are fragile, and that bothers me. How do we respond to these changes and make decisions in an unbiased way? We are bringing a lot of practices from the cyber world into AI.
What I’m more worried about is humans making last-minute decisions because something happened. I think that’s the biggest arena of conflict right now. Humans change their minds based on what they see, hear, are told, etc. So human fragility, the human mind, is going to continue to be the arena of cyberwarfare. But I’m not really worried about the technology side of things. It all starts and ends with humans.