OpenAI Defence Pact Triggers User Exodus

(MENAFN – The Arabian Post)

OpenAI’s expanding work with the United States military has ignited a wave of criticism among sections of its user base, with some subscribers cancelling ChatGPT plans and shifting to rival platform Claude, citing concerns over artificial intelligence ethics and safety.

The controversy follows confirmation that OpenAI is collaborating with US defence agencies on projects involving cybersecurity and the responsible deployment of advanced language models. The company has said its tools are being adapted for defensive and research purposes rather than for weapons systems, but the association with military institutions has unsettled a segment of its global audience.

Social media forums, developer communities and technology discussion boards have seen a spike in posts from users claiming they have suspended or terminated paid subscriptions to ChatGPT. While OpenAI has not disclosed any material decline in user numbers, analysts tracking web traffic and app downloads report a noticeable uptick in interest in Claude, the AI assistant developed by Anthropic, a San Francisco-based company that markets itself as prioritising AI safety and constitutional design principles.

OpenAI’s arrangement with US defence bodies builds on a broader trend of technology companies engaging with government clients in areas such as cyber defence, data analysis and secure communications. Company executives have framed the collaboration as part of a commitment to ensuring democratic governments can access secure, state-of-the-art AI tools. In public statements, OpenAI has stressed that its policies prohibit the development or direct support of lethal autonomous weapons and that any defence-related contracts are subject to strict internal oversight.

Nevertheless, critics argue that once advanced AI systems are embedded within military infrastructure, the boundary between defensive and offensive use becomes blurred. Some researchers in the AI ethics community warn that dual-use technologies can be repurposed beyond their original scope, particularly as geopolitical competition intensifies between major powers over artificial intelligence leadership.


The backlash highlights a tension that has followed OpenAI since its transformation from a non-profit research lab into a commercially driven enterprise backed by major investors. Microsoft’s multibillion-dollar investment has enabled rapid scaling of ChatGPT and integration across enterprise software, but it has also heightened scrutiny of OpenAI’s governance and mission. The company maintains a hybrid structure in which a non-profit entity retains ultimate oversight of a for-profit subsidiary, an arrangement intended to balance commercial growth with public benefit.

Anthropic, founded by former OpenAI researchers including Dario Amodei, has positioned Claude as an alternative rooted in what it calls “constitutional AI”, a training approach designed to embed explicit ethical guidelines into model behaviour. The firm has secured significant funding from technology giants such as Amazon and Google, and has attracted enterprise customers in finance, healthcare and legal services. Following the controversy around OpenAI’s defence engagement, some developers have publicly stated they are experimenting with Claude for projects they believe require clearer safety assurances.

Industry analysts caution against overstating the scale of any migration. ChatGPT remains one of the fastest-growing consumer applications in history, with hundreds of millions of weekly active users. Corporate clients continue to integrate its APIs into customer service systems, content workflows and data analytics platforms. There is no public evidence of a large-scale commercial withdrawal by enterprise customers as a result of the defence collaboration.

At the same time, the episode underscores how sensitive AI partnerships have become. Artificial intelligence now sits at the centre of strategic planning in Washington, Beijing and Brussels. Governments are racing to harness machine learning for logistics, intelligence analysis and cyber resilience. Private companies developing frontier models are therefore under pressure to navigate competing expectations: commercial expansion, national security cooperation and ethical restraint.


OpenAI has reiterated that its use policies bar applications intended to harm individuals or groups and that it conducts risk assessments before entering sensitive partnerships. Executives have argued that disengaging entirely from defence institutions could leave such agencies reliant on less transparent or less accountable technologies developed elsewhere. From that perspective, participation is framed as a way to shape standards rather than abandon the field.

