Generative artificial intelligence is a transformative technology that has captured the interest of businesses worldwide and is rapidly being integrated into enterprise IT roadmaps. Despite the potential and speed of change, business and cybersecurity leaders indicate they are cautious about adoption due to security risks and concerns. A recent ISMG survey found that sensitive data leakage is the top implementation concern for both business leaders and cybersecurity professionals, followed by the ingress of inaccurate data.
Cybersecurity leaders can mitigate many security concerns by reviewing and updating their internal IT security practices to take generative AI into account. Specific focus areas include implementing a Zero Trust security model and adopting basic cyber hygiene standards, which still protect against 99% of attacks. But generative AI providers also play a key role in secure enterprise use. Given this shared responsibility, cybersecurity leaders may need to seek a deeper understanding of how security is addressed across the generative AI supply chain.
Best practices for generative AI development are constantly evolving and require a holistic approach that considers the technology, its users, and society at large. But within that broader context, there are four fundamental areas of protection that are particularly relevant to enterprise security practices: data privacy and ownership, transparency and accountability, user guidance and policy, and security by design.
- Data Privacy and Ownership
Generative AI providers should clearly document their data privacy policies. When evaluating vendors, customers should ensure that their chosen provider allows them to manage their own information and that it will not be used to train underlying models or shared with other customers without their explicit permission.
- Transparency and Accountability
Providers must maintain the trustworthiness of the content their tools create. Like humans, generative AI can make mistakes. Perfection isn’t expected, but transparency and accountability are. To achieve this, generative AI providers must, at a minimum: 1) use trusted data sources to improve accuracy; 2) provide visibility into reasoning and sources to maintain transparency; and 3) provide mechanisms for user feedback to support continuous improvement.
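The three minimums above can be made concrete in an API design. The sketch below is a hypothetical illustration (the class and field names are assumptions, not any real provider's schema) of how a generated answer might carry source citations for transparency and accept user feedback for continuous improvement:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    """A source shown to the user so claims can be verified."""
    title: str
    url: str

@dataclass
class GeneratedAnswer:
    text: str
    citations: list[Citation]                      # visibility into sources
    feedback: list[dict] = field(default_factory=list)  # user feedback loop

    def add_feedback(self, rating: int, comment: str = "") -> None:
        """Record a user rating (1-5) so low-quality answers can be reviewed."""
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.feedback.append({"rating": rating, "comment": comment})

answer = GeneratedAnswer(
    text="Zero Trust assumes breach and verifies every request explicitly.",
    citations=[Citation("Zero Trust overview", "https://example.com/zero-trust")],
)
answer.add_feedback(4, "Accurate, but could cite the original guidance.")
```

The design choice to attach citations and feedback directly to each answer, rather than logging them separately, keeps accountability traceable to the specific content a user saw.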
- User Guidance and Policies
Enterprise security teams have an obligation to ensure that generative AI is used safely and responsibly within their organizations, and AI providers can help with that effort in a variety of ways.
Adversarial insider abuse, while unlikely, is another consideration. This would include attempts to use generative AI for harmful purposes, such as generating dangerous code. AI providers can mitigate this type of risk by building safety protocols into their system designs and clearly setting boundaries for what generative AI can and cannot do.
A more common concern is over-reliance by users. Generative AI is meant to assist workers in their day-to-day work, not replace them. Users should be encouraged to think critically about the information provided to them by the AI. Providers can explicitly cite sources and use carefully considered language that encourages thoughtful use.
- Security by Design
Generative AI technologies must be designed and developed with security in mind, and technology providers must be transparent about their security development practices. The security development lifecycle can also be adjusted to account for the new threat vectors introduced by generative AI. This includes updating threat modeling requirements to address AI- and machine learning-specific threats, and implementing rigorous input validation and sanitization of user-supplied prompts. AI-powered red teaming is another important security enhancement that can be used to probe for exploitable vulnerabilities and for the generation of potentially harmful content. Red team exercises have the advantage of being highly adaptable and can be used both before and after a product is released.
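To make the input-validation point concrete, here is a minimal sketch of prompt sanitization. The length cap and deny-list patterns are illustrative assumptions only; real deployments need layered, model-aware defenses rather than a simple filter:

```python
import re

MAX_PROMPT_CHARS = 4000  # assumed limit; tune per deployment
# Illustrative injection phrases only, not an exhaustive or robust list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def sanitize_prompt(prompt: str) -> str:
    """Basic validation for user-supplied prompts.

    Strips non-printable control characters, enforces a length cap,
    and rejects strings matching known prompt-injection phrases.
    """
    cleaned = "".join(ch for ch in prompt if ch.isprintable() or ch in "\n\t")
    if len(cleaned) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum length")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(cleaned):
            raise ValueError("prompt matches a blocked pattern")
    return cleaned.strip()
```

A filter like this belongs at the application boundary, before the prompt ever reaches the model, and its rejections are exactly the kind of events a red team exercise would try to bypass.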
This is a strong starting point, but security leaders who want to dig deeper can look to promising industry and government initiatives aimed at ensuring the safe and responsible development and use of generative AI. One such initiative is the NIST AI Risk Management Framework, which provides organizations with a common methodology for mitigating concerns while supporting trust in generative AI systems.
Make no mistake: safe enterprise use of generative AI must be supported by strong enterprise IT security practices and guided by a carefully considered strategy that includes an implementation plan, clear usage policies, and associated governance. But the leading providers of generative AI technology know they also have a key role to play, and they are sharing information about their work to advance safe, secure, and trustworthy AI. Working together will not only promote safe use but also foster the trust necessary for generative AI to reach its full potential.