Analysis Despite the hype about criminals using ChatGPT and various other large language models to ease the creation of malware, it seems this generative AI technology isn't very good at the job.
That's our view of findings out this week that, while some criminals are interested in using source-suggesting ML models, the technology isn't actually being widely used to create malicious code. Perhaps that's because these generative systems aren't up to the task, or have enough guardrails to make the process tedious enough that cybercriminals give up.
If you want useful, reliable exploits and post-intrusion tools, you either pay top dollar for them, grab them for free from somewhere like GitHub, or have the programming skills, patience, and time to develop them from scratch. AI doesn't provide the shortcut criminals may be hoping for, and its adoption among cybercriminals is said to be on a par with the rest of the tech world.
The studies
In two reports released this week, Trend Micro and Google's Mandiant examined criminals' adoption of AI techniques and reached the same conclusion: while internet thugs are interested in using generative AI for illicit purposes, its use in practice remains limited.
"AI is still in its early days in the criminal underground," Trend Micro researchers David Sancho and Vincenzo Ciancarini wrote on Tuesday.
“The progress we’re seeing isn’t groundbreaking. In fact, it’s moving at the same pace as every other industry,” they said.
Meanwhile, Mandiant's Michelle Cantos, Sam Riddell, and Alice Revelli have been tracking criminals' use of AI since at least 2019. In research released Thursday, they noted that "adoption of AI in intrusion operations remains limited, and primarily related to social engineering."
The two threat intelligence teams reached similar conclusions about how criminals are using AI for illicit activities: mostly generating text and other media to lure marks to phishing pages and similar scams, and much less so to automate malware development.
"ChatGPT is best suited for creating believable text that can be abused in spam and phishing campaigns," the Trend Micro team wrote, adding that some products sold on criminal forums have started to incorporate ChatGPT interfaces that let buyers craft phishing emails.
"For example, the spam-handling software GoMailPro, which supports AOL Mail, Gmail, Hotmail, Outlook, ProtonMail, T-Online, and Zoho Mail accounts, is mainly used by criminals to send spammed emails to victims," Sancho and Ciancarini said. "On April 17, 2023, the software's author announced in the GoMailPro sales thread that ChatGPT had been integrated into GoMailPro for drafting spam emails."
In addition to helping write phishing emails and other social-engineering scams (especially in languages the criminals don't speak), AI is also good at creating content for disinformation campaigns, such as deepfake audio and images.
Fuzzing with LLMs
According to Google, one thing AI is good at is fuzzing, aka fuzz testing: the technique of automating vulnerability discovery by injecting random or carefully crafted data into software to trigger and uncover exploitable bugs.
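To make the idea concrete, here is a minimal sketch of the technique in Python. This is purely illustrative and not Google's OSS-Fuzz code: `parse_record` is a made-up target with a deliberately planted bug, and `fuzz` is a toy random-input loop that hammers it until the bug surfaces.

```python
import random

def parse_record(data: bytes) -> int:
    # Hypothetical parser with a planted bug for the fuzzer to find.
    if len(data) < 3:
        raise ValueError("record too short")  # well-defined rejection
    # Bug: divides by the difference of the first two bytes, so any
    # input whose first two bytes are equal raises ZeroDivisionError.
    return data[2] // (data[0] - data[1])

def fuzz(target, iterations=20_000, seed=1):
    """Feed random byte strings to `target`, collecting any input that
    raises something other than the documented ValueError."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 16)))
        try:
            target(data)
        except ValueError:
            pass  # expected: input rejected cleanly
        except Exception as exc:  # anything else is a potential bug
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_record)
```

Real-world fuzzers such as those run by OSS-Fuzz use coverage feedback and mutation rather than pure randomness; the LLM angle in Google's work is generating the target harnesses (the `fuzz`-to-`parse_record` glue) automatically instead of by hand.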
"With LLMs, we can boost code coverage for critical projects using our OSS-Fuzz service without manually writing additional code," Google open source security team members Dongge Liu, Jonathan Metzman, and Oliver Chang wrote on Wednesday.
"Using LLMs is a promising new way to scale security improvements across the over 1,000 projects currently fuzzed by OSS-Fuzz and to remove barriers to future projects adopting fuzzing," they added.
The process took a fair amount of prompt engineering, but the team said this and their other work ultimately improved the projects' code coverage by 1.5 to 31 percent.
And in the coming months, the Googlers said, they will open source their evaluation framework so that other researchers can test their own automatic generation of fuzz targets.
Mandiant, meanwhile, splits image-generation capabilities into two categories: generative adversarial networks (GANs), which can be used to create realistic headshots of people, and generative text-to-image models, which produce customized images from text prompts.
Although GANs tend to be more commonly used, especially by nation-state threat groups, the text-to-image models, which can be used to support deceptive narratives and fake news, "likely pose a more significant deceptive threat" than GANs, the Mandiant trio wrote.
This includes, for example, the pro-China propaganda pusher Dragonbridge, which has used AI-generated videos to create short "news segments."
Both reports acknowledge criminals' interest in using LLMs to create malware, though that interest doesn't necessarily translate into actual code.
As regular developers have found, AI can help improve code, produce snippets of source and boilerplate functions, and make unfamiliar programming languages easier to pick up. But using AI to create malware still requires a level of technical proficiency, and the output will likely need checking and fixing by a human programmer.
So anyone using AI to create real, usable malware could most likely write that code themselves anyway. For now, LLMs mainly promise to speed up development rather than drive automated assembly lines for ransomware and exploits.
What's stopping the crooks from doing this? Partly the restrictions placed on LLMs to prevent abuse, which is why security researchers have spotted some criminals advertising services to peers that can circumvent the models' safety measures.
Additionally, as Trend Micro points out, there is a lot of discussion of ChatGPT jailbreak prompts, especially in the "Dark AI" sections of hacking forums.
Criminals are willing to pay for these services, according to Sancho and Ciancarini, leading some to speculate that there may one day be a market for so-called "prompt engineers" for hire. The pair added: "We reserve our judgment on this prediction." ®