Anthropic Labeled National Security Risk, CEO Vows Legal Challenge


(MENAFN) Anthropic CEO Dario Amodei confirmed on Thursday that the US government has formally classified the artificial intelligence startup as a "national security supply-chain risk," saying the company has "no choice" but to contest the ruling in court.

The designation came after a disagreement between the company and the US Department of Defense regarding the potential military applications of its AI models, known as Claude.

Anthropic said it was notified late last week, via social media posts, that it would be barred from government contracts following failed negotiations with the Pentagon.

The firm had sought guarantees that its technology would not be used for fully autonomous weapons or widespread domestic surveillance. The Defense Department, by contrast, demanded unrestricted access to Claude for all legally permissible uses.

Amodei emphasized that Anthropic does not support private firms participating in operational military decision-making.

“Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance,” he stated in a public release, clarifying that these limitations pertain to high-level applications rather than direct battlefield decisions.

Anthropic is the first US-based company to be publicly designated as a supply-chain risk by the government, a label usually reserved for firms connected to strategic competitors, such as the Chinese telecom company Huawei.

