Generative AI (GenAI) offers countless opportunities for civilian agencies and the military to transform how they approach innovation, efficiency, and situational awareness. However, the use of GenAI is a double-edged sword: it presents a golden opportunity for aggressive adversaries to attack the vast federal IT space. In response, federal agencies are turning to AI to supplement human cybersecurity talent.
MeriTalk recently sat down with Cisco’s Christina Hausman, who has more than 20 years of cybersecurity product expertise, to discuss how GenAI is changing the cybersecurity landscape and how AI-enabled security solutions can not only protect agencies’ networks and data, but also ensure the safe and responsible use of GenAI tools.
MeriTalk: GenAI is enhancing the way many of us work by automating tasks, boosting creativity, and improving communication, but it is also changing the way threat actors operate. What attack vectors have been most impacted by GenAI, and what do you expect to see in the future?
Hausman: From a cybersecurity perspective, GenAI creates huge opportunities for both novice and experienced attackers. It lowers the barrier to entry and lets both groups work faster and run more effective attack campaigns. Attackers can use AI to generate new malware variants based on zero-day vulnerabilities that evade traditional detection methods. For experienced attackers, AI can speed up reconnaissance, sift through stolen data, and use that data to tailor the next attack.
In the future, GenAI and large language models will make phishing attacks much harder to detect. In the past, phishing emails were often easy to spot because of grammar and spelling mistakes; now AI tools can craft a flawless email, increasing the chances of a successful attack. Deepfake voice adds another layer. Imagine an employee receiving a realistic voice clip from a senior executive on a business trip who “urgently” needs network access. AI will make it harder for all of us to determine what is legitimate and what is not.
AI can also help attackers quickly find unpatched systems. When a new vulnerability is announced, administrators may have limited time to validate software patches. They always want to balance security risk against the likelihood that a patch will disrupt operations, but for vulnerable systems they may have to deploy a fix quickly and cross their fingers, before they can be confident it won’t impact critical operational resources.
MeriTalk: As you point out, attackers are leveraging AI to launch faster, more powerful attacks. How are agencies adapting their own AI strategies to proactively identify and mitigate these threats?
Hausman: The federal government’s vast attack surface makes it a prime target for financially motivated attacks as well as sophisticated nation-state cyberattacks aimed at disrupting operations and stealing sensitive data. Today, with cybersecurity staffing shortages in the public and private sectors, AI has become a vital tool for security operations centers to manage vast amounts of data and bolster defenses against new AI-enabled attacks and breaches.
Cybersecurity vendors will play a key role in educating organizations on how AI can augment security teams and reduce noise – the high volume of irrelevant or low-level security alerts and data that can cause alert fatigue and prevent analysts from detecting actual malicious activity.
AI-powered solutions can automate analysis and pinpoint attack patterns, allowing agencies to focus resources on critical vulnerabilities and areas of greatest risk. Machine learning can be implemented to proactively identify system and network weaknesses and reduce the risk of breaches through rapid patching. AI can also automate risk assessments, allowing IT teams to prioritize security measures based on the likelihood and potential impact of an attack.
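To make that prioritization concrete, here is a minimal, purely illustrative sketch of likelihood-times-impact risk scoring; the findings, values, and scoring formula are assumptions for this example, not any agency’s or vendor’s model.

```python
# Hypothetical risk-scoring sketch: rank findings by likelihood x impact.
# Inputs and values are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: float  # 0.0-1.0, e.g. informed by exploit activity feeds
    impact: float      # 0.0-1.0, e.g. informed by asset criticality

def prioritize(findings):
    """Return findings sorted by a simple risk score (likelihood * impact)."""
    return sorted(findings, key=lambda f: f.likelihood * f.impact, reverse=True)

if __name__ == "__main__":
    findings = [
        Finding("unpatched VPN gateway", likelihood=0.9, impact=0.8),
        Finding("outdated intranet CMS", likelihood=0.4, impact=0.3),
        Finding("exposed admin console", likelihood=0.7, impact=0.9),
    ]
    for f in prioritize(findings):
        print(f"{f.name}: risk={f.likelihood * f.impact:.2f}")
```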
MeriTalk: How can AI be used to streamline security policy implementation?
Hausman: Security systems that continuously run machine learning algorithms can analyze network traffic, user behavior, and system logs to identify suspicious activity and automatically block malicious IP addresses. AI tools can also instantly shut down compromised systems and user accounts.
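As a rough illustration of that pattern (not Cisco’s implementation), the sketch below assumes connection records have already been reduced to numeric features and uses scikit-learn’s IsolationForest to flag anomalous traffic that a response playbook could then act on.

```python
# Illustrative anomaly-detection sketch using scikit-learn's IsolationForest.
# Feature layout and the blocking step are hypothetical, not a Cisco API.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, failed_logins, distinct_ports]
baseline = np.array([
    [1200, 3400, 0, 3],
    [ 900, 2800, 1, 2],
    [1500, 4100, 0, 4],
    [1100, 3000, 0, 3],
])

new_traffic = np.array([
    [1300, 3500, 0, 3],    # in line with the baseline
    [95000, 200, 45, 60],  # exfiltration-like pattern
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)
for row, label in zip(new_traffic, model.predict(new_traffic)):
    # predict() returns -1 for anomalies and 1 for inliers
    status = "suspicious: queue source IP for blocking" if label == -1 else "normal"
    print(row.tolist(), "->", status)
```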
AI is also increasingly used in intent-based security, where administrators define their security goals in plain language and the system analyzes that intent and automatically generates configurations that enforce the desired security policies.
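Vendors implement intent-based security very differently; the toy sketch below simply illustrates the idea of translating a plain-language intent into an enforceable policy object, with invented intents and rule fields.

```python
# Toy intent-to-policy sketch. The intent phrases and generated rule fields are
# invented for illustration; real intent-based systems use NLP and vendor-specific schemas.
INTENT_TEMPLATES = {
    "block genai uploads from finance": {
        "action": "deny",
        "source_group": "finance-users",
        "destination_category": "generative-ai",
        "methods": ["POST", "PUT"],
    },
    "quarantine hosts with repeated failed logins": {
        "action": "quarantine",
        "trigger": {"event": "failed_login", "threshold": 5, "window_minutes": 10},
    },
}

def generate_policy(intent: str) -> dict:
    """Translate a plain-language intent into an enforceable policy object."""
    try:
        return INTENT_TEMPLATES[intent.lower().strip()]
    except KeyError:
        raise ValueError(f"No template for intent: {intent!r}")

print(generate_policy("Block GenAI uploads from finance"))
```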
MeriTalk: The rapid adoption of GenAI tools like ChatGPT represents a tremendous opportunity for government agencies to streamline workflows and improve efficiency. But these tools also raise concerns about potential security breaches and unintended consequences. What challenges will agencies face as federal employees adopt and use these tools?
Hausman: Government agencies must protect vast amounts of intellectual property and sensitive information. That requires effective Data Loss Prevention (DLP) along with application visibility and control capabilities, so administrators can discover the GenAI tools in use in their environments and assess the risks those tools pose to data security and privacy.
IT teams should also consider the limited transparency of GenAI tools: there is little insight into how a tool reaches its conclusions, what data it analyzed to get there, and whether that data is valid. IT teams need to evaluate whether they can trust the tool’s decisions.
It is crucial for IT and cybersecurity teams to evaluate the pros and cons of GenAI and put technology policies and processes in place to use it safely. Most organizations that handle sensitive information have a data lifecycle model, and the same type of model needs to be developed for the use of GenAI.
MeriTalk: As more agencies incorporate GenAI, how is Cisco helping them enforce policies around GenAI and prevent data loss?
Hausman: Cisco Umbrella for Government gives security administrators visibility and granular control over the use of GenAI apps in their environment. They can monitor who is using GenAI tools and how frequently. Once GenAI usage is understood, administrators can monitor how data classifications of concern are being used and assess the risk to their agency. From there, Umbrella policy controls can be deployed to govern access to and usage of GenAI applications, such as DLP policies that prevent sensitive data from leaking through GenAI. If an agency has significant concerns about data leakage, it may make sense to block AI usage entirely.
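Product specifics aside, a DLP check of this kind amounts to inspecting outbound content before it reaches a GenAI service. The sketch below is a minimal, generic illustration; the regex patterns stand in for an agency’s real data classifications and are not Umbrella’s actual policy engine.

```python
# Minimal DLP-style sketch: scan an outbound GenAI prompt for sensitive patterns.
# The patterns are illustrative placeholders, not an agency's real classifications.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
    "cui_marking": re.compile(r"\bCUI//\w+", re.IGNORECASE),
}

def allow_prompt(prompt: str) -> bool:
    """Return True if the prompt may be sent to a GenAI tool, False if blocked."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if hits:
        print(f"Blocked outbound prompt: matched {', '.join(hits)}")
        return False
    return True

allow_prompt("Summarize this memo for me.")                   # allowed
allow_prompt("Draft a letter for John Doe, SSN 123-45-6789")  # blocked
```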
Security teams can discover, block, allow, or control more than 180 GenAI applications through Umbrella for Government’s Domain Name System (DNS) layer security, secure web gateway, and DLP policies.
MeriTalk: How does Umbrella for Government use AI and machine learning to discover new threats and attacks?
Hausman: Identifying the early precursors of an attack is essential to responding quickly and preventing it. DNS-layer security leverages machine learning to proactively identify malicious domains and act as a first line of defense, filtering threats before they reach your network and endpoints. By analyzing internet activity patterns, Umbrella for Government automatically identifies attacker infrastructure being staged for the next threat and blocks those domains.
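The models behind this are Cisco’s own, but one simple signal such systems often weigh is how random a newly observed domain name looks. The sketch below is a hypothetical illustration using character entropy, with an arbitrary threshold.

```python
# Hypothetical sketch of one signal used to flag algorithmically generated domains:
# high character entropy in the domain label. Real DNS-layer models combine many
# features; this threshold is arbitrary and purely illustrative.
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag domains whose leading label has unusually high character entropy."""
    label = domain.lower().split(".")[0]
    return shannon_entropy(label) > threshold

for d in ["meritalk.com", "xk7f2qz9vh3bple8.net"]:
    print(d, "-> flag for review" if looks_generated(d) else "-> looks ordinary")
```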
Umbrella for Government’s security protections are powered by Cisco Talos, the largest non-government threat research and intelligence team. Talos uses AI algorithms, machine learning, and other deep learning models to analyze the extensive data and threat intelligence it receives from Cisco’s suite of security products and third-party partnerships, providing continuously updated protection.
MeriTalk: As GenAI constantly evolves, how can federal agencies navigate the changing risk landscape and scale their security posture accordingly?
Hausman: First, it’s important to develop an AI strategy that aligns with your agency’s mission and goals, especially with regards to security. Rather than using AI because everyone else is using it, determine the use cases for leveraging AI and the desired outcomes your agency is trying to achieve.
Second, agencies need cybersecurity experts who understand AI, can evaluate its use, implement policy controls for it, and continuously improve those policies. Agencies should also diligently research vendors and solutions to ensure that AI tools provide robust DLP capabilities and application visibility and control. Continuous monitoring and multi-layered security defenses can help minimize risk and maximize the effectiveness of GenAI tools.