AI Ethics in Wartime: Anthropic Refuses Military Use; OpenAI Revises Military Deal
AI company Anthropic has refused to allow the US military to use its AI technology, creating a dispute with the Pentagon and raising questions about whether AI systems are ready for military applications. The refusal boosted Anthropic's public reputation while exposing broader concerns about chatbot capability in combat scenarios. Separately, OpenAI revised its deal with the US military following public backlash, with CEO Sam Altman announcing that the company would prohibit its systems from being used to spy on American citizens.
Key Facts
- Anthropic has refused to allow the US military to use its AI technology, creating a dispute with the Pentagon
- The refusal boosted Anthropic's reputation while exposing concerns that chatbots are not capable enough for military use
- Consumer support for Anthropic grew following its public dispute with the Pentagon
- OpenAI changed its deal with the US military after public backlash, with CEO Sam Altman prohibiting its systems from being used to spy on Americans
- The head of Alibaba's Qwen AI division separately resigned two days after the company released updated products
Coverage
Reported by AP News, BBC News, The Independent, and Channel News Asia, reflecting growing global debate over military AI ethics.
Sources (4)
Head of Alibaba's Qwen AI division resigns
Channel News Asia · 1d ago
Anthropic's stance against military use of AI underscores growing skepticism
The Independent · 1d ago
OpenAI changes deal with US military after backlash
BBC News · 1d ago
Pentagon dispute bolsters Anthropic reputation but raises questions about AI readiness in military
AP News · 2d ago