Retired General Paul Nakasone, former head of the National Security Agency, will join the board of directors of OpenAI, the artificial intelligence company announced Thursday afternoon. He will also sit on the board's Safety and Security Committee.
This high-profile addition is likely intended to appease critics who believe OpenAI is moving faster than is wise for its customers, and perhaps for humanity, rolling out models and services without properly assessing their risks or locking them down.
Nakasone has decades of experience in the military, at US Cyber Command, and at the National Security Agency. Whatever one feels about the practices and decision-making of those organizations, he certainly cannot be accused of lacking experience.
As OpenAI increasingly establishes itself as an AI provider not only to the tech industry but also to governments, defense agencies, and major corporations, this kind of institutional knowledge is valuable in itself and for calming anxious shareholders. (The connections he brings with the state and military apparatus are undoubtedly welcome as well.)
“OpenAI’s dedication to its mission closely aligns with my values and experience in public service,” Nakasone said in a press release.
This certainly seems true: Nakasone and the NSA recently defended the practice of purchasing data of questionable provenance to feed their surveillance networks, arguing that no law prohibits it. OpenAI, for its part, has simply taken large swaths of data from the Internet rather than buying it, arguing, when caught, that no law prohibits that either. The two seem to be of one mind on asking forgiveness rather than permission, if indeed they ask for either.
The OpenAI release also states:
Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to enhance cybersecurity by quickly detecting and responding to cybersecurity threats. We believe that AI has the potential to provide significant benefits in this area to many organizations that are frequently targeted by cyber attacks such as hospitals, schools, and financial institutions.
So this is a new market play as well.
Nakasone will join the board's Safety and Security Committee, which is "responsible for making recommendations to the full Board on safety and security decisions critical to OpenAI's projects and operations." What this newly created body actually does and how it will operate remain unclear: several senior people working on safety (as it relates to AI risk) have left the company, and the committee itself is in the middle of a 90-day evaluation of the company's operations and safeguards.