The rise of generative AI is akin to the California Gold Rush. Just as wealth-seekers flooded San Francisco in the late 1840s, today’s Silicon Valley has set its sights on generative AI in search of digital treasure. Given how dangerous the Gold Rush was, and how long it took for safety measures to catch up, now is the time for organizations using GenAI to adopt secure-by-design principles and follow CISA’s example.
The mad dash to GenAI makes sense for enterprises. GenAI can do more than write fake movie scripts or pass school exams; by some estimates, it could add $4.4 trillion in value to the global economy annually. While the hype surrounding its potential is real, it has also created a problematic environment in which time to market and cost efficiency seem to take precedence over safety and security. The faster an AI system developer can build and scale advanced AI models, the more “gold” it can collect in ROI before the market becomes saturated. That is a dangerous mindset with serious cybersecurity implications.
We have already reached a major tipping point in the early stages of GenAI’s rise. It is worth remembering that during the Gold Rush era, there were virtually no laws or regulations governing mining activity. Tens of thousands of people died in mining-related accidents and violence amid overwork and lax safety measures. It was not until two decades later that the United States enacted the General Mining Act of 1872.
History is a great teacher. Advanced AI models add new complexity to the cyber threat landscape while redefining the boundaries of what is possible in digital innovation. This time, we cannot afford to be years behind on governance, let alone 20.
Google Cloud’s “Cybersecurity Forecast 2024” warned that attackers will use GenAI and large language models (LLMs) for phishing, smishing, and other social engineering attacks. Similarly, a Fastly research report found that more than two-thirds (67%) of IT decision makers believe GenAI will open up new attack vectors, while nearly half (46%) are concerned they will not be able to defend against those threats.
We’re not talking about the latest doomsday scenario dreamed up by cybersecurity vendors looking to sell their products. This is a real and immediate threat posed by the rapid acceleration of digital transformation. Nation-state adversaries could use GenAI to target U.S. critical infrastructure, such as power grids, water treatment plants, and medical facilities, putting lives at risk. We are in a race against time to put stronger parameters in place that promote safe AI systems and foster a safer future. The stakes are too high to fall behind.
CISA’s Roadmap for Artificial Intelligence, released November 23, 2023, reinforced that idea and emphasized the importance of integrating security as a core component of the AI system development lifecycle. The roadmap outlines five lines of effort, driven by CISA’s broader goals of cyber defense, risk reduction and resilience, operational collaboration, and agency unification:
- Responsibly use AI to support CISA’s mission: Use AI-enabled software tools responsibly, ethically, and safely to strengthen cyber defense.
- Assure AI systems: Accelerate the adoption of secure-by-design principles and advance the development and deployment of secure AI software across the public and private sectors.
- Protect critical infrastructure from malicious use of AI: Work with government agencies and industry partners to monitor AI-based cyber threats and defend America’s critical infrastructure against adversaries.
- Collaborate and communicate on key AI efforts with the interagency, international partners, and the public: Work with international partners to advance global AI security best practices and help shape effective policy approaches for the U.S. government’s national AI strategy.
- Expand AI expertise in CISA’s workforce: Lead the effort to proactively recruit and develop an AI-capable workforce through skills-based hiring and cybersecurity certification training.
Of the five, the second and third are the most difficult. There is no easy way to achieve them at scale, but it starts with AI system developers weighing security and business goals equally.
Blending secure by design with AI alignment
With the introduction of CISA’s roadmap, AI system developers must treat secure-by-design principles as a top business priority. As the roadmap states:
“The security challenges associated with AI parallel the cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, placing the burden of security on the customer. Although AI software systems may differ from traditional forms of software, basic security practices still apply. … As the use of AI expands and becomes increasingly embedded in critical systems, security must be a core requirement and integral to AI system development from the beginning and throughout its lifecycle.”
Secure-by-design principles implemented early in product development can reduce an application’s exploitable surface before it becomes widely available, and they promote customer security as a core business requirement rather than merely a technical feature. However, simply forcing major AI suppliers like OpenAI and Google to build strict guardrails into their products is not enough. Guardrails can be bypassed; ask any penetration tester. Consider them a stopgap, not a solution.
The bigger challenge is that, in addition to assuring AI systems, we must also protect everything AI can touch, including critical infrastructure and private networks. That means implementing secure by design through the lens of AI alignment, ensuring systems are built to uphold fundamental human values and ethical boundaries. Beyond the Big Tech power players, thousands of lesser-known advanced AI models are currently in development, many of them open source with weaker guardrails than the GPT-4s and Bards of the world. Without AI alignment, the right product will simply end up in the wrong hands and wreak havoc.
There’s no denying that aligning AI is an expensive and time-consuming task; OpenAI, for example, has committed 20% of its total computing power to the problem. But developers need to view alignment as a necessary cost of doing business, especially considering the potential consequences of doing nothing.
Assessing the impact of inaction on AI security
Beyond the risks to human life, AI system developers could face legal repercussions if they fail to prioritize safe and secure AI systems. CISA’s roadmap emphasizes holding developers more accountable for harm caused by their products, a key point also found in the Biden administration’s October executive order on AI. This shifts the burden of responsibility away from victims and opens the door to criminal or civil penalties after large-scale attacks. A similar trend is emerging across cybersecurity amid new federal regulations, with the Securities and Exchange Commission recently filing fraud charges against SolarWinds and its CISO for allegedly concealing cyber risks from investors and customers.
Therein lies a dilemma. If an AI-powered attack succeeds because a utility provider’s weak security architecture allowed a zero-day exploit, should the developer still be held responsible? What if a company failed to take standard measures to prevent its product from being exploited for hostile purposes? Who is ultimately responsible? Is sharing responsibility and liability reasonable? Is it even possible?
There may be no right answer, but either way, developers need to understand the financial and reputational risks of doing nothing. The risks to human life are real, but quantifying the correlation between cyber risk and business risk is an effective way to drive change. Meanwhile, cyber defenders have a role to play, too: in today’s threat environment, making cyber resilience an organizational priority through strong cyber hygiene is non-negotiable.
The rise of GenAI in 2023 showed how much can change in a year. And while we cannot predict where the AI era will take us, a determined effort to promote safe and secure systems is paramount to navigating it safely. By following CISA’s roadmap and blending secure by design with AI alignment throughout the development lifecycle, developers can take proactive steps to ensure AI remains an enduring force.
Ed Skoudis, President of SANS Technology Institute, is the founder of the SANS Penetration Testing Curriculum and Counter Hack.