Since the introduction of ChatGPT in November 2022, generative artificial intelligence (AI) has taken the world by storm. This new age of AI uses large language models (LLMs) to turn natural-language input into useful machine output. The results are powerful.
Generative AI allows organizations to accelerate their employees’ ability to collect, organize, and communicate information. Organizations can automate routine language-related tasks, freeing up the workforce to focus on delivering business value. They can also optimize processes and take advantage of valuable AI insights to make better decisions.
These capabilities are just the tip of the iceberg. Recent research found that 73% of IT and security leaders say their employees use generative AI tools or LLMs in the workplace. The business benefits are undeniable. Unfortunately, the same survey respondents also admitted that they do not know how to address the security risks associated with this technology.
Understand the risks
The confusion surrounding securing generative AI is not all that surprising. Similar patterns appeared with other technology trends, including the internet, mobile, and cloud, where adoption outpaced security. To stay competitive, many organizations are now rushing to adopt generative AI without considering the risks, leaving security on the back burner. But with AI, this approach can have devastating consequences. Here are just some of the risks organizations face when using generative AI without proper security guardrails in place.
- Expanded, unprotected attack surface.
- Loss of IP and data through sharing sensitive information with third parties.
- Accuracy issues that are difficult to detect and require engineering effort to mitigate.
- Extensive use of “shadow AI” by employees, which may or may not align with company policy.
- Limited detection and response, since most generative AI apps offer minimal transparency.
Additionally, generative AI has significantly lowered the barrier to entry for threat actors. Generative AI models allow people with limited cybersecurity background or technical skills to carry out attacks. This type of AI also makes it much easier for cybercriminals to write malicious code, scan and penetrate networks, and create believable phishing emails. As a result, it’s becoming increasingly difficult for organizations to prevent AI-powered attacks and for employees to distinguish between legitimate and fraudulent emails.
Regardless of the potential value of generative AI from a business perspective, organizations cannot ignore these security concerns.
Install guardrails to use AI safely
Organizations can maximize the benefits of AI by prioritizing security from the beginning. Here are six steps your team can take right away.
- Conduct a readiness assessment. Before deploying AI, conduct a thorough review of your organization’s cyber readiness to counter AI-powered attackers. Use these assessments to identify and remediate security gaps.
- Implement security controls and governance. The pace of AI innovation tends to be much faster than the pace of regulation, so organizations should not wait for government standards for AI. Instead, establish internal policies and controls for the use and protection of AI within your organization. Develop policies and controls in collaboration with stakeholders across data governance, privacy, risk, legal, IT, and the business, covering topics such as use cases, ethics, data processing, privacy, and legality. Once established, create a governance process to ensure employees follow the documented security guardrails. Even employees with good intentions can drift from compliance over time, so it is important to detect such deviations quickly to reduce the risks of using AI. Frameworks such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and MITRE’s ATLAS framework are great starting points for self-governance.
- Ensure AI products adhere to high standards: Treat new AI technologies with the same rigor as any other technology, and be distrustful of new entrants to your environment until they are proven safe. Enthusiasm for new AI products and services has created an attack vector that adversaries can exploit to circumvent security. Evaluate new additions from a cybersecurity, privacy, compliance, and risk perspective, and consider “red teaming” them.
- Prioritize monitoring: In addition to using AI, teams should also monitor AI models. Log all prompts and responses for review and threat hunting. Monitoring and logging can help teams understand how employees use AI, detect patterns of misuse or indicators of risk, and ensure AI ethics and fairness.
- Educate end users. As with other areas of security, risk mitigation starts with your employees, so effectively preparing for AI threats involves educating end users. This should include training on what types of information may be shared with public AI models and guidance on not blindly accepting AI-generated statements as fact. It is essential to foster an environment of healthy skepticism about the use of AI and equip employees with the skills to recognize and respond to potential threats.
- Stay up to date on the latest developments in AI and cybersecurity. AI enhancements are happening at an unprecedented pace, and the policies teams rely on today are often outdated tomorrow. By staying current on the AI and security landscape, your security guardrails can evolve as threats change, ensuring nothing slips through the cracks.
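The monitoring step above, logging every prompt and response for review and threat hunting, can be sketched in a few lines of code. The example below is a minimal illustration, not a production design: `logged_completion`, the `ai_audit.jsonl` file name, and the stand-in model call are all hypothetical names chosen for this sketch, and your organization would substitute its own LLM client and log pipeline (e.g. shipping entries to a SIEM).

```python
import json
import logging
from datetime import datetime, timezone

# Dedicated audit logger writing one JSON object per line (JSONL),
# a format most log-analysis and threat-hunting tools can ingest.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
handler = logging.FileHandler("ai_audit.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
audit_log.addHandler(handler)

def logged_completion(model_call, user_id: str, prompt: str) -> str:
    """Wrap any LLM call so every prompt/response pair is recorded
    for later review, misuse detection, and threat hunting."""
    response = model_call(prompt)  # model_call is whatever client your org uses
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
    }))
    return response

# Usage with a stand-in model function:
result = logged_completion(lambda p: "stub answer", "alice", "Summarize Q3 results")
```

Centralizing AI calls behind a wrapper like this also gives teams a single place to later add redaction of sensitive data or policy checks before a prompt ever leaves the organization.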
When it comes to generative AI, we can choose to continue to fear it and try to ban its use, or we can embrace its potential and find ways to safely adapt. AI is expected to transform nearly every industry, and when that transformation happens, choosing to embrace it will help your organization stay competitive.
Tackling the risk implications of AI can be challenging, but this roadmap follows the same fundamental principles as any other security program, focusing on people, processes, and technology to cover all aspects of risk. By following these six steps, security teams can help their organizations use generative AI securely to reap the business benefits while minimizing the associated risks.
Randy Lariar, AI Security Lead, Optiv