OpenAI Opens Door to Military Use of ChatGPT with Policy Update

Large language model developer OpenAI has quietly relaxed its restrictions on military applications of its technology, sparking concerns from some AI experts.

Jan 14, 2024

On January 10th, OpenAI updated its usage policies, lifting a broad ban on using its technology for "military and warfare." The new language still prohibits specific uses like developing weapons or harming others, but the shift towards broader principles raises questions about enforcement and potential future military contracts.

This policy change coincides with the launch of OpenAI's GPT Store, a marketplace where users can share and customize versions of ChatGPT, known as "GPTs." In place of the old blanket prohibition, the updated policy is built around broad principles such as "Don't harm others," alongside the remaining bans on specific harmful applications.

Concerns and Potential Implications

Some AI experts, like Sarah Myers West of the AI Now Institute, expressed concerns about the vagueness of the new policy and its potential for misuse. She told The Intercept: "The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement."


Others worry that the policy change could open the door to future contracts with the military. OpenAI has acknowledged that there are national security use cases that align with its mission, and it already collaborates with the Defense Advanced Research Projects Agency (DARPA) on cybersecurity projects.

The Future of AI and Military Applications

OpenAI's policy shift reflects the growing role of AI in military applications. AI is already used for tasks such as target identification and logistics, and its capabilities are expected to keep expanding.

This raises important ethical questions about the development and use of military AI. How can we ensure that AI is used responsibly and ethically? What safeguards are needed to prevent AI from being used for harm?

These are complex questions that require careful consideration and ongoing dialogue. OpenAI's policy change pushes them to the forefront, but it is only the beginning of a much larger conversation.

