As artificial intelligence (AI) innovation continues to advance rapidly, security experts warn that 2024 will be a critical time for organizations and governing bodies to establish security standards, protocols, and other guardrails before the technology gets ahead of them.
Large language models (LLMs) leverage sophisticated algorithms and massive datasets to demonstrate remarkable language understanding and human-like conversational abilities. One of the most sophisticated of these platforms to date is OpenAI’s GPT-4, which boasts advanced reasoning and problem-solving capabilities that power the company’s ChatGPT bot. In partnership with Microsoft, the company has begun developing GPT-5, which CEO Sam Altman has said will go even further, to the point of possessing “superintelligence.”
These models hold enormous potential for organizations seeking significant gains in productivity and efficiency, and experts agree the technology is here to stay, making it essential to address the security risks inherent in its development and deployment. Indeed, recent research by Writerbuddy AI, a provider of AI-based content creation tools, found that ChatGPT has already drawn 14 billion visits and counting.
Gal Ringel, CEO of AI-based privacy and security company MineOS, says that organizations “need to combine rigorous ethical considerations and risk assessments” as they pursue AI advancements.
Is AI an existential threat?
Concerns about the security of next-generation AI came to a head in March, when an open letter signed by nearly 34,000 top technologists called for a pause on the development of generative AI systems more powerful than OpenAI’s GPT-4. The letter cited the “profound risks” the technology poses to society and the “out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”
Despite these dystopian concerns, most security experts are less concerned about doomsday scenarios in which machines become smarter than humans and take over the world.
“The open letter raises legitimate concerns about the rapid advancement and potential applications of AI in the broader sense of ‘Is this a good thing for humanity?’” said Matt Wilson, director of sales engineering at cybersecurity firm Netrix. “While impressive in certain scenarios, the public versions of AI tools do not appear to be that much of a threat.”
What is concerning, researchers say, is that AI advances are being developed and deployed too quickly for the risks to be properly managed. “You can’t put the lid back on Pandora’s box,” said Patrick Harr, CEO of AI security provider SlashNext.
Additionally, simply “trying to slow down the pace of innovation in this space will not help reduce the risks it poses,” said Marcus Fowler, CEO of AI security firm Darktrace Federal; instead, those risks need to be addressed on a case-by-case basis. That doesn’t mean AI development should continue unchecked, he says. Rather, the pace of risk assessment and the implementation of appropriate safeguards should match the pace at which LLMs are trained and developed.
“AI technology is rapidly evolving, so governments and organizations using AI must also accelerate the conversation about AI safety,” Fowler explains.
Generative AI risks
Generative AI carries several widely recognized risks that must be considered, and these will only worsen as future generations of the technology grow smarter. Fortunately for humans, none of them so far amounts to the sci-fi doomsday scenario in which AI conspires to destroy its creators.
Instead, the threats are more familiar: data leaks that may expose sensitive business information, exploitation for malicious activities, and inaccurate output that can mislead or confuse users, ultimately harming the business.
Because LLMs require access to vast amounts of data to provide accurate, contextually relevant output, sensitive information can be inadvertently exposed or misused.
“The main risk comes from employees feeding business-sensitive information into the models, for example when asking them to draft a plan or rephrase emails or business documents containing proprietary company information,” notes Ringel.
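One common mitigation for this exposure is to scrub sensitive material from prompts before they ever leave the organization for a third-party LLM API. The following is a minimal sketch of that idea; the patterns, placeholder format, and `redact` function name are illustrative assumptions, not any particular product’s rules:

```python
import re

# Hypothetical patterns for illustration; a real deployment would use a
# purpose-built DLP or PII-detection service, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders before the
    text is sent to an external model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Email alice@corp.example the Q3 plan, token sk-abcdef1234567890."))
# → Email [REDACTED-EMAIL] the Q3 plan, token [REDACTED-API_KEY].
```

Pattern matching alone misses context-dependent secrets (a project codename, an unannounced deal), which is why experts pair technical filters with the policies and training discussed below.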
From a cyberattack perspective, threat actors have already discovered countless ways to weaponize ChatGPT and other AI systems. One method is using the models to craft sophisticated business email compromise (BEC) and other phishing campaigns, which require creating personalized, socially engineered messages designed to succeed.
“For malware, ChatGPT allows cybercriminals to create infinite variations of code to stay one step ahead of malware detection engines,” Harr said.
AI hallucinations also pose a significant security threat, giving malicious actors a unique way to attack LLM-based technologies such as ChatGPT. An AI hallucination is a plausible-sounding response that is insufficient, biased, or flatly untrue. “Fictitious and other undesirable responses can lead organizations to poor decisions, flawed processes, and misleading communications,” warns Avivah Litan, a vice president at Gartner.
Threat actors can also use these hallucinations to poison LLMs and “generate specific false information in response to a question,” said Michael Rinehart, vice president of AI at data security provider Securiti. “This is extensible to the generation of vulnerable source code and, possibly, to chat models capable of directing a site’s users to unsafe actions.”
Attackers can even go as far as publishing malicious versions of software packages that an LLM may believe to be legitimate fixes and recommend to software developers. In this way, AI can be further weaponized to mount supply chain attacks.
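One practical hedge against this kind of supply chain attack is to vet any model-suggested dependency before anyone installs it. The sketch below assumes a small, hypothetical allowlist and uses simple string similarity to flag names that merely resemble trusted packages, a classic typosquatting tell; a real pipeline would also consult the actual package registry and an internal approved-dependency list:

```python
from difflib import SequenceMatcher

# Illustrative allowlist; in practice this would be an organization's
# vetted internal dependency catalog.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "cryptography"}

def vet_suggestion(name: str, threshold: float = 0.8) -> str:
    """Classify an LLM-suggested dependency before it is installed."""
    if name in KNOWN_PACKAGES:
        return "approved"
    # Near-matches to popular names are common typosquatting bait.
    for known in KNOWN_PACKAGES:
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return f"suspicious: resembles '{known}'"
    return "unknown: requires manual review"
```

Treating “unknown” as requiring manual review, rather than rejecting outright, keeps legitimate new dependencies usable while still interrupting the blind-trust path an attacker relies on.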
The way forward
Managing these risks will require measured and collective action before AI innovation outpaces the industry’s ability to control it, experts say. But they also have concrete ideas for addressing AI’s problems.
Harr believes in a “fight AI with AI” strategy, in which “advancements in security solutions and strategies to thwart risks facilitated by AI must evolve at an equal or faster pace.”
“Cybersecurity protection needs to leverage AI to successfully battle cyber threats that use AI technology,” he added. “Traditional security tooling, by comparison, has no chance against these attacks.”
However, organizations should also take a measured approach to adopting AI, including AI-based security solutions, warns Netrix’s Wilson, lest they introduce additional risks into their environment.
“Understand what AI is, and what it isn’t,” he advises. “Challenge vendors that claim to employ AI to describe what it does, how it enhances their solution, and why that matters for your organization.”
Securiti’s Rinehart recommends a two-tiered approach to gradually introducing AI into an environment: deploying focused solutions and putting guardrails in place before exposing the organization to unnecessary risk.
“It starts with an application-specific model, which can be powered by a knowledge base tailored to deliver value in a specific use case,” he says. “Next… implement a monitoring system that protects these models by scrutinizing messages sent to and from them for privacy and security issues.”
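A minimal sketch of that second tier, a monitor that inspects traffic in both directions and keeps an audit trail, might look like the following. The blocklist, the function names, and the `model_call` stand-in are illustrative assumptions, not a real guardrail product:

```python
import re
from datetime import datetime, timezone

# Illustrative policy; real guardrails would combine classifiers, PII
# detectors, and allow/deny rules maintained by the security team.
BLOCKLIST = re.compile(r"(?i)\b(internal only|secret|password)\b")

audit_log = []  # every exchange is recorded for later review

def guarded_chat(prompt: str, model_call) -> str:
    """Scrutinize messages sent to and from a model, blocking and
    logging anything that trips a privacy or security rule."""
    stamp = datetime.now(timezone.utc).isoformat()
    if BLOCKLIST.search(prompt):
        audit_log.append((stamp, "outbound", "blocked"))
        return "[request blocked by policy]"
    reply = model_call(prompt)  # the actual LLM API call goes here
    if BLOCKLIST.search(reply):
        audit_log.append((stamp, "inbound", "blocked"))
        return "[response withheld by policy]"
    audit_log.append((stamp, "exchange", "allowed"))
    return reply
```

Checking the response as well as the prompt matters: even a well-behaved model can be induced to emit sensitive material, and the audit log gives the compliance function described below something concrete to review.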
Experts also recommend establishing security policies and procedures around AI before deployment, rather than as an afterthought, to mitigate risk. Appointing a dedicated AI risk officer or task force to oversee compliance can also help.
Beyond the enterprise, the industry as a whole must take steps to establish security standards and practices around AI that everyone developing and using the technology can adopt, which will require coordinated action by both the public and private sectors on a global scale, says Darktrace Federal’s Fowler.
He points to the Guidelines for Secure AI System Development, jointly published by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC), as an example of such efforts as AI continues to evolve.
“Essentially, 2024 will mark the rapid adaptation of both traditional security and cutting-edge AI techniques toward protecting users and data in this emerging generative AI era,” said Securiti’s Rinehart.