OpenAI has quietly reworded the ban on using its products for military purposes, removing the clause prohibiting "military and warfare" uses, The Intercept reports.
Until January 10, OpenAI's "Usage Policies" page contained a prohibition against "activity that has high risk of physical harm," including, specifically, "weapons development" and "military and warfare." This clearly worded ban on military use would seem to rule out any official use by a government military agency.
The new policy retains the prohibition on "using our service to harm yourself or others" and still cites "develop or use weapons" as an example, but the blanket ban on use for "military and warfare" purposes has disappeared.
A review of the custom ChatGPT-based bots offered by OpenAI shows that the US military is already using the technology to speed up paperwork.
Experts who analyzed the policy changes said OpenAI appears to be quietly softening its stance against doing business with the military.
As a reminder, OpenAI earlier launched an online store where people can share customized versions of its popular chatbot, ChatGPT.