OTTAWA — Top cybersecurity officials are urging technology companies to build safeguards into future artificial intelligence systems they are developing to prevent them from being sabotaged or used for malicious purposes.
Without proper guardrails, rogue states, terrorists and others will be able to easily exploit rapidly emerging AI systems to conduct cyberattacks and even develop biological and chemical weapons, said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA).
Companies that design and develop AI software need to work to significantly reduce the number of flaws that people can exploit, Easterly said in an interview. “These capabilities are incredibly powerful and can be weaponized if not created securely.”
The Canadian Centre for Cyber Security recently joined CISA, the U.K.'s National Cyber Security Centre and 20 other international partner organizations to publish guidelines for developing secure AI systems.
The guidance document states that AI innovation can bring many benefits to society. “However, to fully realize the opportunities of AI, we must develop, deploy, and operate it in a safe and responsible manner.”
When OpenAI’s ChatGPT debuted late last year, it captivated users with its ability to respond to queries with detailed, if sometimes inaccurate, answers. But it also raised alarms about the potential for misuse of the nascent technology.
The guidelines state that AI has a special security dimension because such systems allow computers to recognize patterns in data and capture context without humans having to explicitly program the rules.
As a result, AI systems are vulnerable to adversarial machine learning, a class of attacks in which adversaries can prompt unauthorized actions or extract sensitive information.
“There is agreement across government and industry that we need to come together to ensure these capabilities are developed with safety and security in mind,” Easterly said.
“Even if we aim to innovate, we need to do so responsibly.”
Sami Khoury, head of the Canadian Centre for Cyber Security, said a lot can go wrong if security is not considered when designing, developing and deploying AI systems.
In the same interview, Khoury said the initial international efforts on the new guidelines were “very positive.”
“I think we need to lead by example and maybe others will follow.”
In July, the Canadian Cyber Centre issued an advisory warning of vulnerabilities in AI systems. For example, someone with malicious intent could inject corrupt code into a dataset used to train an AI system, distorting the accuracy and quality of its results.
The “worst-case scenario,” Khoury said, is a malicious attacker poisoning and crippling critical AI systems “that we rely on.”
The centre also warned that cybercriminals could use such systems to launch more frequent, more automated and more sophisticated spear-phishing attacks. “Very realistic phishing emails and fraudulent messages can lead to identity theft, financial fraud, and other forms of cybercrime.”
The centre further cautioned that skilled perpetrators could bypass the limitations built into AI tools and create malware for use in targeted cyberattacks. “Even people with little or no coding experience can easily use generative AI to create functional malware that can cause harm to businesses and organizations.”
Earlier this year, when ChatGPT was making headlines, a Canadian Security Intelligence Service briefing note warned of similar dangers. The tool “can be used to generate malicious code, which can be injected into websites and used to steal information or spread malware.”
A Feb. 15 CSIS memo recently released through the Access to Information Act also states that ChatGPT can help generate “fake news and reviews to manipulate public opinion and generate misinformation.”
OpenAI says its tools may not be used to carry out illegal activity, spread disinformation, generate hateful or violent content, create malware, or produce code intended to destroy, damage or gain unauthorized access to computer systems.
The company also prohibits the use of its tools in activities that pose a high risk of physical harm, such as weapons development, military operations, and the management of critical infrastructure such as energy, transportation, and water.
This report by The Canadian Press was first published Dec. 11, 2023.
Jim Bronskill, Canadian Press