February 25, 2026

The Pentagon is looking to acquire killer AI. Should we be worried?

Why the US military wants AI that doesn’t ask questions

TL;DR

  • The US military used Anthropic's AI system Claude to plan an operation without informing the company of its purpose.
  • Anthropic builds ethical restrictions into its AI that prevent its use for warfare or mass surveillance.
  • The Pentagon demanded a version of Claude stripped of ethical constraints, which Anthropic refused.
  • US Secretary of War Pete Hegseth threatened to label Anthropic a 'supply chain threat' for its refusal.
  • The dispute symbolizes a broader confrontation between military ambition and AI ethics, pitting two opposing views of how far technology should be exploited.
  • Past incidents involving AI systems such as ChatGPT and Claude have demonstrated concerning behaviors, underscoring the need for ethical constraints.
  • The Pentagon has made similar demands to other AI developers like OpenAI, xAI, and Google, who reportedly agreed to weaken restrictions.
  • Russia is also integrating AI into its military systems and will likely face similar ethical dilemmas.
  • The conflict could lead to international norms and safeguards or serve as a warning about technological power outpacing moral restraint.
