March 28, 2026

US judge blocks Pentagon order designating AI company a national security risk

A US court has blocked the Pentagon from labeling Anthropic a security risk after the company refused to permit unrestricted military use of its technology.

TL;DR

  • A US federal judge blocked a Pentagon order designating Anthropic, an AI developer, a national security risk.
  • The judge found that US officials likely acted unlawfully, retaliating against Anthropic for its public comments on military AI use.
  • Anthropic resisted the Pentagon's push for "all lawful uses" of its Claude system, citing concerns about surveillance and autonomous weapons.
  • The judge called the designation a "classic" case of First Amendment retaliation, noting the label is typically reserved for foreign intelligence agencies, terrorists, and hostile actors.
  • Anthropic sued, calling the move "unprecedented and unlawful" retaliation for criticizing government policy.
  • The Pentagon had ordered a six-month phase-out of Anthropic's technology and sought a "more patriotic" alternative, striking a deal with OpenAI.
  • Anthropic warned of billions in lost revenue, noting that some agencies had already removed its products.
