Most security teams can benefit from integrating artificial intelligence (AI) and machine learning (ML) into their daily workflows. These teams are often understaffed and overwhelmed with false positives and noisy alerts that can drown out signals of genuine threats.
The problem is that too many ML-based detections miss the mark on quality. And, perhaps even more concerning, the incident responders who must act on these alerts are not always able to correctly interpret their meaning and significance.
With all the breathless hype about the potential of AI/ML, it’s no wonder so many security users feel overwhelmed. So where does the technology actually stand today? And what needs to happen in the next few years for AI/ML to fully deliver on its cybersecurity promise?
Breaking the AI/ML Hype Cycle
AI and ML are often confused, but cybersecurity leaders and practitioners need to understand the difference. AI is a broad term that refers to machines that mimic human intelligence. ML is a subset of AI that uses algorithms to analyze data, learn from it, and make informed decisions without explicit programming.
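To make the distinction concrete, here is a minimal sketch; all feature values and the threshold are invented for illustration. A hand-written rule encodes one fixed decision, while an ML model learns its decision boundary from labeled examples.

```python
# Minimal sketch: a hard-coded rule vs. a model that learns from data.
# Feature values, labels, and the threshold below are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each event: [failed_logins_last_hour, new_device (0 or 1)]
events = [[0, 0], [1, 0], [2, 1], [8, 1], [12, 1], [15, 0]]
labels = [0, 0, 0, 1, 1, 1]  # 1 = an analyst flagged the event as suspicious

# "Explicit programming": a fixed rule someone wrote by hand.
def rule_based(event):
    return event[0] > 10  # flag only if failed logins exceed 10

# ML: the decision boundary is learned from the labeled examples instead.
model = LogisticRegression().fit(events, labels)

# The rule misses this borderline event; the learned model may catch it.
print(rule_based([9, 1]), model.predict([[9, 1]])[0])
```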
When faced with the bold promises of new technologies such as AI/ML, it can be difficult to determine what is commercially viable, what is just hype, and when those claims will yield results. The Gartner Hype Cycle visualizes the maturity and adoption of technologies and applications, helping to reveal how innovative technologies are relevant to solving real business problems and exploring new opportunities.
But problems arise when people start using the terms AI and ML interchangeably. “AI suffers from an inexorable and incurable problem of ambiguity. It is an umbrella term that does not consistently refer to any particular method or value proposition,” writes UVA professor Eric Siegel in Harvard Business Review. “Calling ML tools ‘AI’ overstates what most ML business implementations actually do,” Siegel adds. “As a result, most ML projects fail to deliver value. In contrast, ML projects that are centered around specific operational goals have a good chance of achieving those goals.”
While AI and ML have undoubtedly made great strides in strengthening cybersecurity systems, they are still nascent technologies. When ML’s capabilities are overhyped, users eventually become disillusioned and begin to question whether ML has any value in cybersecurity at all.
Another key issue hindering the widespread deployment of AI/ML in cybersecurity is the lack of transparency between vendors and users. As these algorithms become more complex, it becomes increasingly difficult for users to trace how a particular decision was made. Vendors often don’t provide clear explanations of how their products work, citing intellectual property concerns, which erodes trust and makes users more likely to fall back on older, familiar technology.
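One way to rebuild that trust is to surface per-decision explanations alongside verdicts. The sketch below is purely illustrative, not any vendor’s actual product: it assumes hypothetical feature names and uses a simple model’s learned feature importances to show an analyst what drove a detection.

```python
# Illustrative only: surfacing which features drove a detection, using a
# simple interpretable model. Feature names and data are hypothetical.
from sklearn.ensemble import RandomForestClassifier

feature_names = ["bytes_out", "failed_logins", "rare_process", "off_hours"]
X = [[100, 0, 0, 0], [200, 1, 0, 1], [9000, 8, 1, 1],
     [150, 0, 0, 0], [8500, 12, 1, 0], [300, 1, 0, 1]]
y = [0, 0, 1, 0, 1, 0]  # 1 = confirmed malicious

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Pair each feature with its learned importance so an analyst can see
# *why* the model leans the way it does, not just the verdict.
for name, score in sorted(zip(feature_names, clf.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```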
How AI and ML Can Deliver on the Promise of Cybersecurity
Bridging the gap between unrealistic user expectations and what AI/ML can actually deliver requires collaboration among stakeholders with different incentives and motivations. The following suggestions can help close this gap.
- Bring together security researchers and data scientists early and often. Today, data scientists may be building tools without fully understanding how those tools will be used for security, while security researchers attempting to build similar tools may lack deep knowledge of data science and ML. To combine their expertise and unleash its full potential, these two very different disciplines need to collaborate and learn from each other productively. For example, data scientists can use ML to enhance threat detection systems by identifying meaningful patterns in large heterogeneous datasets, while security researchers can contribute their understanding of threat vectors and potential vulnerabilities (see the anomaly-detection sketch after this list).
- Use normalized data as a source. The quality of the data used to train a model directly impacts the results and success of an AI/ML tool. In our increasingly data-driven world, the old adage “garbage in, garbage out” is truer than ever. As security moves to the cloud, normalizing telemetry at the point of collection means the data is already in a standard format. Organizations can then stream normalized data directly into a security data lake, making it easier to train ML models and improve their accuracy without having to reconcile mismatched formats (see the normalization sketch after this list).
- Prioritize user experience. Security applications aren’t known for easy-to-use, streamlined user experiences. The only way to ship something that people can use correctly is to start with the user experience rather than bolting it on at the end of the development cycle. Clear visualizations, customizable alert settings, and easy-to-understand notifications all make security professionals more likely to adopt and use a tool. Similarly, when applying an AI/ML model in a security context, it is essential to build in a feedback loop so that security analysts and threat researchers can register input and tune the model to the organization’s requirements (see the feedback-loop sketch after this list).
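To make the first suggestion concrete, here is a hypothetical sketch of the kind of artifact the two disciplines might build together: a data scientist supplies an unsupervised model that surfaces unusual login telemetry, and a security researcher triages what it flags. All field names and values are assumptions.

```python
# Hypothetical collaboration artifact: an unsupervised model flags
# unusual login telemetry for a security researcher to triage.
from sklearn.ensemble import IsolationForest

# Features per session: [logins_per_hour, distinct_ips, bytes_transferred]
sessions = [
    [3, 1, 1_200], [4, 1, 900], [2, 1, 1_500], [5, 2, 2_000],
    [3, 1, 1_100], [60, 9, 480_000],  # last row: a clear outlier
]

model = IsolationForest(contamination=0.2, random_state=0).fit(sessions)
for session, verdict in zip(sessions, model.predict(sessions)):
    if verdict == -1:  # -1 means "anomaly" in scikit-learn's convention
        print("review:", session)
```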
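The second suggestion is easiest to see in code. This normalization sketch maps two entirely made-up vendor log shapes into one common schema at collection time, so everything downstream, including model training, sees consistent fields.

```python
# Hypothetical normalization step: map two vendor-specific log shapes
# into one common schema before anything downstream consumes them.
COMMON_FIELDS = ("timestamp", "src_ip", "user", "action")

def normalize_vendor_a(raw: dict) -> dict:
    return {"timestamp": raw["ts"], "src_ip": raw["ip"],
            "user": raw["username"], "action": raw["event"]}

def normalize_vendor_b(raw: dict) -> dict:
    return {"timestamp": raw["eventTime"], "src_ip": raw["sourceAddress"],
            "user": raw["actor"], "action": raw["activityType"]}

a = normalize_vendor_a({"ts": "2024-01-01T00:00:00Z", "ip": "10.0.0.1",
                        "username": "alice", "event": "login"})
b = normalize_vendor_b({"eventTime": "2024-01-01T00:01:00Z",
                        "sourceAddress": "10.0.0.2", "actor": "bob",
                        "activityType": "file_read"})
assert set(a) == set(b) == set(COMMON_FIELDS)  # same schema either way
```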
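For the third suggestion, this feedback-loop sketch shows one possible design, not a prescribed architecture: verdicts the analyst corrects are appended to the training data and the model is refit. A production system would likely retrain on a schedule rather than after every correction.

```python
# One possible analyst feedback loop (illustrative design only):
# corrected verdicts are appended to the training data and the model refit.
from sklearn.linear_model import LogisticRegression

X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.85, 0.9]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

def record_feedback(features, analyst_label):
    """Analyst confirms or overrides a verdict; retrain on the new example."""
    X_train.append(features)
    y_train.append(analyst_label)
    model.fit(X_train, y_train)  # in practice, retrain on a schedule

# Analyst marks a borderline alert as a false positive (label 0).
record_feedback([0.7, 0.4], 0)
print(model.predict([[0.7, 0.4]]))  # the model now reflects the correction
```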
The ultimate goal of cybersecurity is to prevent attacks from occurring, rather than simply responding to them after the fact. By giving security teams actionable ML capabilities, the industry can break through the hype cycle and begin to realize ML’s lofty promise.