In an unannounced update to its usage policy, OpenAI has opened the door to military applications of its technology. While the policy previously prohibited use of its products for “military and warfare” purposes, that language is now gone, and OpenAI did not deny that it is open to military uses.
The Intercept first noticed the change, which appears to have gone live on January 10.
Unannounced changes to policy wording happen fairly frequently in tech as the products those policies govern evolve and change, and OpenAI is clearly no different. In fact, the company’s recent announcement that its user-customizable GPTs would roll out publicly, alongside a vaguely worded monetization policy, likely made some changes necessary.
But the change to the no-military policy can’t plausibly be attributed to this particular new product. Nor can it credibly be claimed that omitting “military and warfare” is merely “clearer” or “more readable,” as a statement OpenAI issued about the update puts it. It is a substantive, consequential change of policy, not a restatement of the same policy.
You can read the current usage policy here, and the old one here. Below are screenshots with the relevant portions highlighted:
Before the policy change. Image credits: OpenAI
![](https://techcrunch.com/wp-content/uploads/2024/01/after-openai-military.png)
After the policy change. Image credits: OpenAI
The whole thing has clearly been rewritten, though whether it’s more readable is more a matter of taste than anything else. I’d argue that a bulleted list of clearly disallowed practices is more readable than the general guidelines it replaced. But OpenAI’s policy writers evidently think otherwise, and if this gives them more leeway to interpret favorably or unfavorably a practice that was hitherto flatly prohibited, that is simply a pleasant side effect. “Don’t harm others” is “broad but easy to understand and relevant in many contexts,” the company said in its statement. More flexible, too.
Although, as OpenAI spokesperson Nico Felix explained, there is still a blanket prohibition on the development and use of weapons; you can see that it was originally listed separately from “military and warfare.” After all, the military does more than make weapons, and weapons are made by others besides the military.
It’s precisely where those categories don’t overlap that I suspect OpenAI is examining new business opportunities. Not everything the Department of Defense does is strictly warfare-related; as any academic, engineer, or politician knows, the military is deeply involved in all kinds of basic research, investment, small business funds, and infrastructure support.
OpenAI’s GPT platforms could be of great use, for example, to Army engineers looking to summarize decades of documentation of a region’s water infrastructure. How to define and navigate their relationship with government and military money is a genuine dilemma for many companies. Google’s “Project Maven” famously took it a step too far, though few seemed as bothered by the multi-billion-dollar JEDI cloud contract. It may be acceptable for an academic researcher on an Air Force Research Laboratory grant to use GPT-4, but not for a researcher inside AFRL working on the same project. Where do you draw the line? Even a strict “no military” policy has to stop somewhere after a few degrees of remove.
Still, the total removal of “military and warfare” from OpenAI’s prohibited uses suggests that the company is, at the very least, open to serving military customers. I asked the company to confirm or deny that this is the case, warning them that the new policy’s language made clear that anything but a denial would be interpreted as a confirmation.
As of this writing, they have not responded. I will update this post if I hear back.
Update: OpenAI gave the same statement to The Intercept and did not dispute that it is open to military uses and customers.