The release of Arabic large language models (LLMs) is accelerating the adoption of generative AI (GenAI) in the Middle East market. This summer, OpenAI, the creator of ChatGPT, announced a partnership with the government of Abu Dhabi.
The timing is right for a discussion on this topic, and at the upcoming Black Hat Middle East conference, Srijith Nair, CISO at Careem, will lead a panel discussion on GenAI in the region: "Defense Against the Dark Arts: Generative AI and Enterprise Risk."
Dark Reading sat down with Nair to discuss the security elements of GenAI deployment from both an offensive and defensive perspective.
Dark Reading: To what extent is generative AI a business problem? Or is it something happening in society at large that is slowly "invading" business and, ultimately, cybersecurity?
Srijith Nair: Generative AI is a broader social phenomenon, impacting many aspects of our lives. As an extension of that societal impact, business and ultimately cybersecurity are being affected. We are already seeing disruption in various fields, such as art and coding, and cybersecurity is not immune to this change. Whether this is evolution or revolution, the jury is still out.
DR: How well do you think cybersecurity is keeping up with the generative AI trend?
SN: It impacts the cybersecurity environment in many ways, from enabling fraud to facilitating phishing attacks against specific individuals. On the other hand, the technology also expands the tooling available to security teams. Wise use of AI-based features in coding platforms has already made writing secure code easier.
DR: Attackers can profit from it and use it to craft more sophisticated attacks, especially phishing messages. Will the defenders be able to keep up?
SN: CSOs need to find ways to enable and adapt to new kinds of innovation. We need to come up with approaches that allow people to use these tools safely. This is a very interesting challenge at the moment.
Generative AI brings new vectors and threats, but it also provides more tools. These tools not only enable us to counter new risks but also allow us to shift left more aggressively. That is of interest to security professionals because it helps engineering teams write code securely and allows SOC teams to work more efficiently. People no longer have to go out of their way to do things safely; it becomes part of your ready-to-use arsenal.
DR: Machine learning and AI have been among the most talked-about technologies of the past decade. Does generative AI just add complexity?
SN: That’s certainly true. Machine learning and its models are not entirely new. Models are typically categorized as supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, and have unique characteristics and uses. However, these techniques have traditionally focused primarily on recognizing patterns and making predictions, rather than generating new and original content.
Generative AI goes a step further. These systems can not only recognize patterns but also generate new content that mimics the data used for training. Perhaps the biggest change is that generative AI has democratized the use of AI. It is gaining a strong foothold in our daily lives because its use cases are closer to casual users.
DR: Is there enough capacity to learn how to use these technologies from a security perspective, how they can be used, and what can be done with them?
SN: You need to train your data and AI teams on how to do things securely, but at the same time, the CSO needs to improve their own knowledge, because security leadership is a management function. You are expected to discern whether your team is doing the right thing, so you need enough knowledge to ask your team, "Hey, is this right?"
A lot of times, unless you're actually hands-on, your knowledge ends up lagging behind your security team's, which can be very surprising. Much of that is because things have moved so fast over the past two years when it comes to generative AI. I would be very surprised if any security team could claim to have complete mastery of AI.
DR: Could AI be the savior of the security staffing problem we’ve been talking about for years?
SN: AI will undoubtedly be of great help in scaling and automating security controls to the required level, given the increasing complexity of the systems being protected, the heterogeneous environments involved, and the growing automation and scale of threat actors. But calling it a "savior" or a silver bullet would be a stretch, at least for now.