SAN FRANCISCO (Reuters) – Google-owned U.S. cybersecurity firm Mandiant said on Thursday that the use of artificial intelligence (AI) to conduct online information manipulation campaigns has grown in recent years, but that its impact so far remains limited.
Since 2019, researchers at the Virginia-based firm have found “numerous instances” of AI-generated content, such as fabricated profile pictures, being used in politically motivated online influence campaigns.
These included campaigns by groups linked to the governments of Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador and El Salvador, the report said.
The findings come amid the recent boom in generative AI models such as ChatGPT, which make it much easier to create convincing fake videos, images, text and computer code. Security officials have warned that such models are being used by cybercriminals.
Mandiant researchers said generative AI could enable groups with limited resources to create high-quality content for large-scale influence campaigns.
For example, a pro-China information campaign dubbed Dragonbridge has expanded “exponentially” to 30 social platforms and 10 languages since it first began targeting pro-democracy protesters in Hong Kong in 2019, said Sandra Joyce, vice president of Mandiant Intelligence.
However, the effectiveness of such campaigns has been limited. “From an effectiveness standpoint, there aren’t a lot of wins there,” she said. “They haven’t redirected the threat yet.”
China has denied US accusations of involvement in such influence campaigns in the past.
Mandiant, which helps public and private organizations respond to digital breaches, said it had yet to see AI play a key role in threats from Russia, Iran, China or North Korea. Its researchers expect the use of AI in digital intrusions to remain low in the near term.
“So far, I have never seen an incident response where AI played a role,” Joyce said. “It has not yet introduced any practical use beyond what we can achieve with the usual tools that we have seen so far.”
But she added, “I can assure you that this is a growing problem over time.”
Reporting by Zeba Siddiqui in San Francisco; editing by Alexandra Hudson.
Our Standards: The Thomson Reuters Trust Principles.