Artificial intelligence (AI) has become a key element in cybersecurity in recent years, but the widespread adoption of large-scale language models (LLMs) makes 2023 an especially exciting year. In fact, LLMs have already begun to transform the landscape of cybersecurity. But it has also created unprecedented challenges.
On the other hand, LLMs make it easier to process large amounts of information and make AI available to everyone. It can deliver incredible efficiency, intelligence, and scalability in managing vulnerabilities, preventing attacks, handling alerts, and responding to incidents.
On the other hand, adversaries may also leverage LLM to make their attacks more efficient or exploit additional vulnerabilities introduced by LLM, which could lead to unintended data loss due to the proliferation of AI. Further cybersecurity issues such as breaches may occur.
Implementing an LLM requires a new way of thinking about cybersecurity. It’s more dynamic, interactive and customized. In the era of hardware products, hardware only changed when it was replaced by the next new version of the hardware. In the cloud era, you could update software and collect and analyze customer data to improve the next version of the software, but only when a new version or patch was released.
In the new era of AI, the models used by customers have their own intelligence and can continue to learn and change based on customer usage to better serve customers or be biased in the wrong direction. can. Therefore, not only do we need to build in security at design time, but we also need to build secure models and ensure that our training data is not contaminated. He must also continue to evaluate and monitor the safety, security, and ethics of the LLM system after deployment.
Most importantly, intelligence needs to be built into security systems so that children do not deviate easily and can adapt to make good and reliable decisions (rather than simply regulating their behavior). (e.g., instilling correct moral standards). Invalid input.
What did the LLM bring to cybersecurity, for better or worse? We share what we learned over the past year and our predictions for 2024.
Looking back at 2023
When I wrote “The Future of Machine Learning in Cybersecurity” a year ago (before the LLM era), I highlighted three unique challenges for AI in cybersecurity: accuracy, lack of data, and lack of ground truth; I pointed out three challenges for AI. In cybersecurity, explainability, talent shortages, and AI security are more serious.
Now, after much research and a year later, we are finding that LLMs are a huge help in four of these six areas: lack of data, lack of ground truth, explainability, and lack of talent. I did. His other two areas, precision and AI security, are very important but still very difficult.
Two areas summarize the biggest benefits of using an LLM in cybersecurity.
1. Data
labeled data
Using LLM, we were able to overcome the challenge of not having enough “labeled data.”
High-quality labeled data is required to make AI models and predictions more accurate and suitable for cybersecurity use cases. However, these data are difficult to obtain. For example, it is difficult to discover malware samples to learn attack data. Compromised organizations are often reluctant to share that information.
LLM helps collect initial data and synthesize data based on existing real-world data and extends it to generate new data about attack sources, vectors, techniques, and intent. This information is used to build new detections without being limited to field data. .
ground truth
As I said in an article a year ago, the truth about cybersecurity isn’t always available. LLM can be used to significantly improve ground truth by finding gaps in detection and multiple malware databases, reducing false negative rates, and frequently retraining models.
2. Tools
The LLM excels at making cybersecurity operations easier, more user-friendly, and more viable. To date, the LLM’s biggest impact on cybersecurity is for security operations centers (SOCs).
For example, a key feature behind SOC automation with LLM is function calls. This helps translate natural language instructions into API calls that can directly interact with the SOC. The LLM helps security analysts handle alerts and incident response more intelligently and quickly. LLM allows you to integrate advanced cybersecurity tools by receiving natural language commands directly from your users.
explainability
Previous machine learning models worked well, but they couldn’t answer the question “why?” LLM has the potential to change the game by accurately and confidently explaining why, fundamentally changing threat detection and risk assessment.
LLM’s ability to quickly analyze large amounts of information is useful for correlating data from a variety of tools, including events, logs, malware family names, information from common vulnerabilities and exposures (CVEs), and internal and external databases. Helpful. This not only helps identify the root cause of alerts and incidents, but also significantly reduces mean time to resolution (MTTR) for incident management.
shortage of human resources
Unemployment in the cybersecurity industry is negative. There is a shortage of experts and humans are unable to respond to the large number of alarms. LLM is popular among security analysts thanks to its advantages in quickly assembling and digesting large amounts of information, understanding commands in natural language, breaking them down into necessary steps, and finding the right tools to perform the task. Significantly reduces your workload.
From acquiring domain knowledge and data to analyzing new samples and malware, LLM can help you build new detection tools faster and more effectively. From identifying and analyzing new malware to identifying fraudsters, this can be done automatically.
We also need to build the right tools into our AI infrastructure so that anyone can leverage AI in cybersecurity without being a cybersecurity expert or AI expert.
Three predictions for 2024
When it comes to the growing use of AI in cybersecurity, it is clear that we are at the beginning of a new era, or the early stages of what is often referred to as “hockey stick” growth. The more we learn about LLMs that can improve our security posture, the more likely we are to stay ahead of ourselves (and our adversaries) in getting the most out of AI.
While there are many areas in cybersecurity that are ripe for discussion about the growing use of AI as a force multiplier to combat growing complexity and attack vectors, three stand out: there is.
1. Model
AI models make great strides in creating detailed domain knowledge rooted in cybersecurity needs.
Last year, a lot of attention was focused on improving common LLM models. Researchers have worked hard to make models more intelligent, faster, and cheaper. However, a significant gap exists between what these generic models can provide and what cybersecurity requires.
Specifically, our industry doesn’t necessarily need giant models that can answer questions as diverse as “how to make a Florentine egg” or “who discovered America?” yeah. Instead, cybersecurity requires ultra-accurate models with deep expertise in cybersecurity threats, processes, and more.
In cybersecurity, accuracy is mission critical. For example, Palo Alto Networks processes over 75 TB of data from its SOCs around the world every day. Even a 0.01% false positive decision can have fatal consequences. Providing customized services focused on customers’ security requirements requires highly accurate AI with rich security background and knowledge. In other words, these models need to perform fewer specific tasks with greater accuracy.
Engineers are making great strides in creating models with more vertical industry and domain-specific knowledge, and I believe we will see the emergence of cybersecurity-focused LLMs in 2024.
2. Use case
Innovative use cases for LLMs in cybersecurity will emerge. This makes the LLM essential for cybersecurity.
In 2023, everyone was very excited about the amazing features of LLM. People used that “hammer” to try all sorts of “nails.”
In 2024, we will understand that not all use cases are best suited for LLM. We plan to offer actual LLM-enabled cybersecurity products targeted at specific tasks that closely match LLM’s strengths. This truly increases efficiency, increases productivity, improves ease of use, solves real-world problems, and lowers costs for our customers.
Imagine being able to read thousands of playbooks on security issues, such as configuring endpoint security appliances, troubleshooting performance issues, onboarding new users with the right security credentials and permissions, and analyzing security architecture designs by vendor. please try.
LLM’s ability to consume, summarize, analyze, and generate the right information in a scalable and agile manner will transform security operations centers and revolutionize when, where, and how security professionals are deployed.
3. AI safety and security
In addition to using AI for cybersecurity, another big topic is how to build and use secure AI without compromising the intelligence of the AI model. There is already a lot of discussion and great work being done in this direction. In 2024, real solutions will be introduced, even if they are temporary, and will be a step in the right direction. Additionally, an intelligent evaluation framework needs to be established to dynamically evaluate the security and safety of AI systems.
Please note that LLM can also be accessed by malicious parties. For example, hackers can use her LLM to easily generate much higher quality and larger quantities of phishing emails. LLM can also be used to create entirely new malware. But the industry is acting more collaboratively and strategically in its use of LLM, helping us stay ahead of the curve and stay ahead of the bad guys.
On October 30, 2023, U.S. President Joseph Biden issued an executive order targeting the responsible and appropriate use of AI technologies, products, and tools. The purpose of this directive touches on the need for AI vendors to take all necessary steps to ensure that their solutions are used for appropriate and not malicious purposes.
AI security and safety is a real threat. We need to take this seriously and assume that hackers are already designing to attack our defenses. The simple fact that AI models are already widely used has significantly expanded the attack surface and threat vector.
This is a very dynamic field. AI models are improving every day. Even after an AI solution is deployed, models are constantly evolving and never remain static. Continuous evaluation, monitoring, protection, and improvement are critically needed.
Attacks using AI will continue to increase. As an industry, we must make developing secure AI frameworks a top priority. This will require a current moonshot that involves collaboration across the technology ecosystem, including vendors, businesses, academic institutions, policymakers, and regulators. There is no doubt that this is a difficult task, but I think everyone understands how important this task is.
Conclusion: The best is yet to come
In some ways, the success of general-purpose AI models like ChatGPT has ruined us in the cybersecurity field. We all hoped that we could build, test, deploy, and continually improve LLMs to make them more cybersecurity-centric, but cybersecurity is very difficult to apply AI to. I realized that this is a unique, specialized area that requires attention. For it to work, you need to get all four key aspects right: data, tools, models, and use cases.
The good news is that smart, determined people with the vision to understand why we must push for more accurate systems that combine power, intelligence, ease of use, and perhaps most of all, relevance to cybersecurity. It means that you have many connections with strong people.
I’ve been fortunate to work in this field for a long time, and I’m always excited and satisfied by the daily progress of my colleagues within Palo Alto Networks and in the surrounding industry.
Returning to the tricky part of being a prophet, it’s difficult to know much about the future with absolute certainty. But I know two things about him:
- 2024 will be a phenomenal year for the use of AI in cybersecurity.
- 2024 will pale in comparison to what’s to come.
Learn more about here.