Author: Dr. Jacob Yukelson
The impact of AI on our way of life is accelerating and, not surprisingly, AI is gaining prominence in the opposing domains of cybercrime and cybersecurity. The dynamic between these two sides has not changed since the beginning of the digital age: both have always leveraged the capabilities of emerging technologies to achieve their goals, and AI now plays a role in both cyber attack and defense. Those responsible for the security of their organizations, from technologists to executives, need to stay informed about how adversaries are using AI and how AI can counter those threats.
First, we need to clarify some of the terminology used when discussing AI. Without going into too much detail, think of AI as an umbrella term that encompasses a variety of computer-based technologies that replicate human problem-solving and decision-making. Machine learning and machine reasoning are two types of AI technology used to solve a variety of problems.
Machine learning (ML) applies statistical analysis and pattern recognition to large datasets to uncover patterns in behavior. There are several subsets or types of machine learning, differentiated based on the type of data they use (structured or unstructured), the size of data sets they can work with, and the type of services they provide. Applications of ML are diverse. Examples include services such as fraud detection, self-driving cars, and customer retention.
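To make the idea of statistical pattern recognition concrete, here is a minimal, illustrative sketch of a naive Bayes text classifier in pure Python. The tiny training set, labels, and function names are invented for this example; real systems learn from millions of labeled samples and use far richer features.

```python
from collections import Counter

# Toy labeled data set: (message, label) pairs. Real ML systems train on
# millions of examples; this is only to show the statistical mechanism.
TRAIN = [
    ("verify your account password now", "phish"),
    ("urgent wire transfer request", "phish"),
    ("click this link to claim your prize", "phish"),
    ("meeting notes attached for review", "legit"),
    ("lunch tomorrow at noon", "legit"),
    ("quarterly report draft attached", "legit"),
]

def train(samples):
    """Count word frequencies per label -- the 'statistics' the model learns."""
    counts = {"phish": Counter(), "legit": Counter()}
    totals = Counter()
    for text, label in samples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def score(text, counts, totals, label):
    """Likelihood of the text under a label, with add-one smoothing."""
    vocab = len({w for c in counts.values() for w in c})
    p = 1.0
    for w in text.split():
        p *= (counts[label][w] + 1) / (totals[label] + vocab)
    return p

def classify(text, counts, totals):
    """Pick the label whose learned word statistics best fit the text."""
    return max(("phish", "legit"),
               key=lambda lbl: score(text, counts, totals, lbl))

counts, totals = train(TRAIN)
print(classify("urgent: verify your password", counts, totals))  # → phish
```

The same mechanism, scaled up, underlies services such as spam filtering and fraud detection: the model never "understands" the text, it only exploits correlations between words and labels found in the training data.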
Generative AI (such as ChatGPT) is a learning-based AI that can create original text, images, audio, and data. Cyber attackers use Generative AI in several ways, which are discussed later in this article.
Machine learning is based on the statistical identification of hidden patterns in large amounts of data through correlations, whereas machine reasoning works from facts and relationships and draws conclusions from them. For example, a reasoning system can distinguish between the meanings of “put on” in “put on a coat” and “put on a show.” Personal assistants such as Siri and Alexa use machine reasoning to generate answers to our questions, including questions they have never encountered before.
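The contrast with statistical learning can be sketched as follows: instead of counting correlations, a reasoning system consults explicit facts. The word senses and context words below are invented purely for illustration.

```python
# Illustrative only: a tiny rule-based "reasoning" step that selects a
# word sense from explicit facts rather than statistical correlation.
# The senses and context words are made up for this example.
SENSES = {
    "put on": {
        "dress": {"coat", "shirt", "shoes", "jacket"},
        "perform": {"show", "play", "concert", "performance"},
    }
}

def disambiguate(phrase, context_word):
    """Return the sense whose known facts include the context word."""
    for sense, objects in SENSES.get(phrase, {}).items():
        if context_word in objects:
            return sense
    return "unknown"

print(disambiguate("put on", "coat"))  # → dress
print(disambiguate("put on", "show"))  # → perform
```

Production assistants combine such knowledge bases with learned models, but the key distinction holds: the decision here follows from stated facts, not from patterns mined out of data.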
Cybercrime is big business. Recent estimates put global cybercrime revenue at $1.5 trillion annually. The total cost of cybercrime is even higher, at around $6 trillion by some estimates.
Like any other business, cybercriminal enterprises strive to increase revenue and reduce costs. Their key performance indicators (KPIs) are cost per attack, success rate, and revenue per attack. Learning-based AI has proven highly effective in driving the success of cybercrime as a business, in the following ways:
- Generative AI allows attackers to create more convincing phishing emails quickly and cheaply. These AI systems are well-suited to crafting convincing emails that appear to come from a legitimate source, and they improve over time as their input data sets grow and as they adjust based on the effectiveness of past attempts.
- Generative AI is also used to carry out highly targeted spear-phishing attacks: emails or voicemails built on very specific information or context about key people in an organization. Many of these are business email compromise (BEC) attacks, in which the attacker impersonates a trusted party (a company executive, a vendor, etc.) to trick victims into authorizing wire transfers or divulging sensitive data.
- Attackers use AI to generate self-learning malware that adapts its course of action depending on the situation, specifically targeting victim systems. AI-generated malware can evade detection and adapt to the target environment and defenses.
- AI tools such as voice-cloning systems are used to carry out so-called “deepfake” attacks, which use the voice of a trusted party to trick a victim into performing an action. For example, in 2020, a Hong Kong bank manager received a call from a director he recognized asking him to approve a $35 million wire transfer. The request was backed up by what appeared to be a legitimate email, and the transfer was executed. Deepfakes can imitate voices and images and can interact with victims in a conversational mode.
- Advanced Persistent Threats (APTs) occur when an intruder enters a network undetected and remains there for an extended period of time to steal sensitive data. APTs often use AI to evade detection while targeting specific organizations or individuals.
Cybercrime statistics confirm the impact that AI is having on the ability of criminal enterprises to carry out attacks more cheaply and effectively than ever before. For example, phishing attacks have increased at an annual rate of 150% since 2019 (Figure 1), facilitated by the ability to generate attacks using AI-driven automation.