For over 20 years, the Open Worldwide Application Security Project (OWASP) Top 10 Risks list has been a go-to reference in the battle to secure software. In 2023, OWASP released a new addition: an overview of AI-specific risks. Two draft versions of the AI Risks list were published in the spring/summer of that year, with the official version 1 released in October.
Since then, LLMs have become increasingly established as a business productivity tool. Most companies are using or considering AI, but while some drawbacks, such as the need to constantly check an LLM’s work, are well known, others remain unrecognized.
Our analysis shows that the vulnerabilities identified by OWASP fall into three broad categories:
- Access risks: related to abused privileges or unauthorized actions.
- Data risks: such as data manipulation or loss of service.
- Reputational and business risks: caused by improper AI output or actions.
In this blog, we’ll take a closer look at the specific risks in each case and offer some suggestions on how to address them.
1. AI access risks
Of the 10 vulnerabilities listed by OWASP, three relate specifically to access and privilege abuse: insecure plugin design, insecure output handling, and excessive agency.
According to OWASP, insecure plugin design can result in a loss of access control, exposing you to malicious requests and unauthorized remote code execution. Similarly, when plugins and applications that work with large language models accept model output without secure validation, backend systems can be exposed to XSS, CSRF, and SSRF attacks that can result in unwanted actions, unauthorized privilege escalation, or remote code execution.
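To make the output-handling point concrete, here is a minimal Python sketch of the principle: treat everything the model returns as untrusted input. The function names and command allowlist are hypothetical, and a real deployment would pair this with context-aware encoding and parameterized backends.

```python
import html

# Hypothetical allowlist: the only operations model output may trigger.
ALLOWED_COMMANDS = {"status", "version"}

def render_llm_output(raw: str) -> str:
    """Escape LLM output before embedding it in a web page, so an
    injected <script> tag is neutralized rather than executed (anti-XSS)."""
    return html.escape(raw)

def run_llm_suggested_command(cmd: str) -> None:
    """Never pass raw model output to a shell or backend API; accept
    only entries from a fixed allowlist."""
    if cmd.strip().lower() not in ALLOWED_COMMANDS:
        raise ValueError(f"Rejected unapproved command: {cmd!r}")
    # ...dispatch to a safe, parameterized handler here...
```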
Because AI chatbots are “actors” capable of making and executing decisions, it matters how much discretion (i.e., agency) they are given. As OWASP explains, “Excessive agency is a vulnerability that allows harmful actions to be taken in response to unexpected/ambiguous output from the LLM (whether the LLM malfunctions due to hallucinations/confabulations, direct/indirect prompt injection, malicious plugins, poorly designed benign prompts, or simply a poorly performing model).”
For example, a personal email reader assistant with message sending capabilities could be exploited by malicious emails to spread spam from a user’s account.
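One common way to curb excessive agency is to deny side-effecting tools by default and require human sign-off for the rest. The sketch below is illustrative rather than an OWASP-prescribed design; the policy table and tool names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ToolPolicy:
    name: str
    requires_confirmation: bool

# Hypothetical policy table for an email assistant: reading is free,
# but anything with side effects needs a human in the loop.
POLICIES = {
    "read_inbox": ToolPolicy("read_inbox", requires_confirmation=False),
    "send_email": ToolPolicy("send_email", requires_confirmation=True),
}

def invoke_tool(tool: str, confirmed_by_user: bool = False) -> None:
    policy = POLICIES.get(tool)
    if policy is None:
        raise PermissionError(f"Tool {tool!r} is not on the allowlist")
    if policy.requires_confirmation and not confirmed_by_user:
        raise PermissionError(f"Tool {tool!r} requires explicit user approval")
    # ...dispatch to the real tool implementation...
```

Under a policy like this, an injected “send spam to my contacts” instruction stalls at the confirmation gate instead of executing silently.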
In all these cases, large language models are a pathway for bad actors to infiltrate the system.
2. AI and Data Risks
Training data poisoning, supply chain vulnerabilities, sensitive information disclosure, prompt injection, and denial of service are all data-specific AI risks.
Data can be poisoned deliberately by bad actors, or contaminated inadvertently when an AI system learns from untrusted or unverified sources. Both kinds of contamination can occur within active AI chatbot applications or via the LLM supply chain, where reliance on pre-trained models, crowdsourced data, and insecure plugin extensions can lead to biased outputs, security breaches, or system failures.
Poisoned data and compromised supply chains are input-side concerns, and so is the composition of the training set itself: allowing private, sensitive, or personally identifiable information into a model’s training data can lead to unwanted leakage of that information later.
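As an illustration of keeping PII out of a training corpus, the sketch below redacts two obvious patterns before a record is ingested. The regexes are deliberately simple stand-ins; a production pipeline would use a dedicated PII-detection step rather than a pair of patterns.

```python
import re

# Hypothetical, deliberately simple patterns for demonstration only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(record: str) -> str:
    """Redact obvious PII before a record enters the training corpus."""
    record = EMAIL_RE.sub("[EMAIL]", record)
    record = SSN_RE.sub("[SSN]", record)
    return record

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789"))
# -> "Contact [EMAIL], SSN [SSN]"
```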
With prompt injection, malicious input can cause large language model AI chatbots to expose data that should remain private or take other actions that could lead to data compromise.
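There is no complete code-level fix for prompt injection, but two inexpensive mitigations are to keep trusted instructions and untrusted input in separate message roles and to flag inputs that look like injection attempts. The phrase list and message format below are illustrative assumptions (the role-based format follows common chat-completion APIs), not a guaranteed defense.

```python
# Heuristic phrases only; no blocklist catches every injection attempt.
SUSPICIOUS = ("ignore previous instructions", "disregard the system prompt")

def build_messages(user_input: str) -> list[dict]:
    """Keep the trusted system prompt and the untrusted user input in
    separate roles, and divert suspicious inputs for manual review."""
    lowered = user_input.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS):
        raise ValueError("Input flagged for manual review")
    return [
        {"role": "system",
         "content": "You are a support assistant. Never reveal internal data."},
        {"role": "user", "content": user_input},  # untrusted content
    ]
```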
AI denial-of-service attacks resemble traditional DoS attacks: they aim to overwhelm large language models so that users are denied access to their data and apps, or to force systems to consume excessive resources, which can run up huge costs because many AI chatbots rely on pay-as-you-go IT infrastructure.
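Because cost exposure scales with request volume and prompt size, basic admission control goes a long way against model denial of service. The sliding-window limiter below is a sketch; the quota and size cap are made-up numbers to be tuned per deployment.

```python
import time
from collections import defaultdict, deque

MAX_REQUESTS_PER_MIN = 20   # hypothetical per-user quota
MAX_INPUT_CHARS = 8_000     # cap prompt size to bound token costs

_history: dict[str, deque] = defaultdict(deque)

def admit(user_id: str, prompt: str) -> bool:
    """Sliding-window rate limit plus an input-size cap, applied
    before the prompt ever reaches the (metered) model."""
    if len(prompt) > MAX_INPUT_CHARS:
        return False
    now = time.monotonic()
    window = _history[user_id]
    while window and now - window[0] > 60:   # drop entries older than 60s
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MIN:
        return False
    window.append(now)
    return True
```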
3. Reputational and business risks associated with AI
The last two OWASP vulnerabilities are model theft and overreliance on AI. First, if an organization has its own LLM and that model is accessed, copied, or exfiltrated by unauthorized users, it could be misused to harm business performance, create a competitive disadvantage, or disclose confidential information.
Overreliance on AI is already having an impact around the world. Large language models have generated everything from fake quotations and case law to racist and sexist language.
OWASP notes that relying on AI chatbots without proper oversight can put organizations at risk of publishing misinformation or offensive content, potentially damaging their reputations or exposing them to legal action.
Considering all these different risks, the obvious question is, “What can we do about it?” Fortunately, there are protective steps that organizations can take.
What companies can do about AI vulnerabilities
From Trend Micro’s perspective, defending against AI access risks requires a zero trust security stance with strict system isolation (sandboxing). Generative AI can mimic trusted entities, so it challenges zero trust defenses in ways that other IT systems do not, but a zero trust posture adds checks and balances that make it easier to identify and curb unwanted activity. OWASP also advises that large language models “should not be self-monitoring,” and calls for controls to be built into application programming interfaces (APIs).
Sandboxing is also key to protecting the privacy and integrity of your data: keeping sensitive information completely separate from shareable data and inaccessible to AI chatbots and other public systems.
Proper separation of data prevents large language models from including private or personally identifiable information in their public output or from publicly soliciting users to interact in inappropriate ways with secure applications, such as payment systems.
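One way to picture this separation: the chatbot’s retrieval layer is wired only to a shareable store, and no code path reachable from the model can touch the sensitive one. The two in-memory dictionaries below are a toy stand-in for real, separately provisioned data stores.

```python
# Toy stand-ins for two separately provisioned data stores.
SHAREABLE_DB = {"faq:returns": "Items can be returned within 30 days."}
SENSITIVE_DB = {"customer:123": "card ending 4242"}  # never reachable by the LLM

def retrieve_context(key: str) -> str | None:
    """The only lookup the chatbot can call; it holds no reference to
    SENSITIVE_DB, so sandboxed data cannot leak into model output."""
    return SHAREABLE_DB.get(key)
```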
In terms of reputation, the simplest measure is to not rely solely on AI-generated content or code, and not to publish or use AI output without first verifying that it is true, accurate and trustworthy.
Many of these defenses can and should be built into corporate policy. With the right policy foundation in place, security technologies such as Endpoint Detection and Response (EDR), Extended Detection and Response (XDR), and Security Information and Event Management (SIEM) can be used to enforce policy and monitor for potentially harmful activity.
Large language model AI chatbots are here to stay
The OWASP AI Risks list confirms that concerns about rushing to adopt AI are well justified. At the same time, it’s clear that AI isn’t going away, so understanding the risks and taking responsible steps to mitigate them is crucial.
Setting appropriate policies to govern the use of AI and implementing those policies with the help of cybersecurity solutions is a good first step. It’s also important to stay informed. In Trend Micro’s view, the OWASP Top 10 AI Risks list should become an annual must-read list, similar to the original application security list since 2003.