Introducing artificial intelligence into cybersecurity creates a vicious cycle. Security experts now leverage AI to power their tools and improve detection and protection capabilities, but cybercriminals leverage AI for attacks as well. Security teams then deploy more AI in response to AI-driven threats, threat actors ramp up their own AI to keep pace, and the cycle continues.

Despite its great potential, AI remains severely limited in cybersecurity. AI security solutions have reliability issues, and the data models behind AI-powered security products always seem to be at risk. AI also often conflicts with human intelligence when implemented. This double-edged nature makes AI a complex tool that organizations must understand better and use more carefully. Threat actors, by contrast, use AI with almost no restrictions.

One of the biggest challenges in adopting AI-driven security solutions is building trust. Many organizations are skeptical of AI-powered products from security companies, and understandably so: some of these solutions are overhyped and underachieving, and many products touted as AI-enhanced simply do not live up to expectations. One of the most-touted benefits is that these products greatly simplify security tasks enough for non-security personnel to handle them. That claim often disappoints, especially for organizations struggling with a shortage of cybersecurity talent. AI should be part of the solution to that talent shortage, but companies that over-promise and under-deliver are not helping to solve the problem; they are undermining the credibility of AI-related claims.
Full opinion: While AI’s effectiveness is limited in cybersecurity, it is limitless in cybercrime.