OpenAI, led by Sam Altman, has made a fundamental policy shift by allowing its AI technologies to be used in military and warfare applications. The change removes language from the company's usage policy that specifically forbade the deployment of OpenAI's technology for military purposes.
OpenAI justified the adjustment as part of an effort to establish a set of universal rules that are easy to remember and apply. "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," a company spokesperson told Business Today. "However, there are national security use cases that align with our mission."
"For example, we are already collaborating with DARPA to encourage the development of new cybersecurity tools to secure the open-source software on which critical infrastructure and industry rely. It was unclear whether these beneficial use cases would have been permitted under 'military' in our earlier policies. The goal of our policy update is to provide clarity and the ability to have these discussions," the spokesperson stated.
While OpenAI has softened its stance on military uses, it still prohibits the use of its AI for weapons development. As applications of AI technology continue to evolve, the balance between permitting military-related work and preventing weaponization remains a central tension.