
Anthropic Sues Pentagon Over AI Safety Red Lines
Anthropic filed two federal lawsuits after the Pentagon labeled it a national security supply-chain risk for refusing to drop AI guardrails against autonomous weapons and mass surveillance.

Investigations point to outdated AI targeting data as the likely cause of the Minab girls' school airstrike that killed up to 180 people, most of them children.

Alibaba's SWE-CI benchmark tested 18 AI models on 100 real codebases across 233 days of maintenance work. Most agents accumulate technical debt and break previously working code; only Claude Opus keeps its zero-regression rate above 50%.
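
The write-up doesn't spell out how "zero-regression" is scored. A plausible reading is that a maintenance task counts as regression-free only if every test that passed before the agent's patch still passes after it. Here is a minimal sketch of that scoring under that assumption; the data structures and names are illustrative, not SWE-CI's actual harness:

```python
# Hypothetical zero-regression scoring in the spirit of SWE-CI.
# A task is regression-free iff no test that passed before the
# agent's patch fails (or disappears) after it.
from dataclasses import dataclass

@dataclass
class TaskRun:
    before: dict[str, bool]  # test name -> passed, prior to the patch
    after: dict[str, bool]   # test name -> passed, after the patch

def is_zero_regression(run: TaskRun) -> bool:
    # Only previously passing tests can regress; a test that was
    # already failing doesn't count against the agent. A test that
    # vanishes after the patch is treated as a regression.
    return all(run.after.get(name, False)
               for name, passed in run.before.items() if passed)

def zero_regression_rate(runs: list[TaskRun]) -> float:
    return sum(is_zero_regression(r) for r in runs) / len(runs)
```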

A Brown University study identifies 15 ethical violations across GPT, Claude, and Llama when used as mental health therapists, from crisis mishandling to deceptive empathy.

Anthropic's Claude is now adding over one million users per day and has reached 11.3 million daily active users - a 183% increase since January - as the Pentagon backlash against OpenAI shows no sign of fading.

A community fine-tune distills Claude Opus 4.6 chain-of-thought reasoning into Qwen3.5-27B via LoRA, racking up 4,000+ downloads in days. No benchmarks yet - but the approach raises familiar questions.
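
For readers curious what a run like this looks like in practice, here is a minimal sketch of LoRA-based distillation using Hugging Face's trl and peft libraries. Everything below is assumed rather than taken from the fine-tune's published recipe: the JSONL file of teacher traces, the hyperparameters, and the checkpoint path are all placeholders.

```python
# Illustrative LoRA distillation sketch - NOT the community fine-tune's
# actual recipe. Dataset path, checkpoint id, and hyperparameters are
# placeholders; the real run's config hasn't been published.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Teacher traces: JSONL records with a "text" field holding the prompt
# plus the teacher model's chain-of-thought completion (hypothetical file).
traces = load_dataset("json", data_files="claude_cot_traces.jsonl", split="train")

lora = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen3.5-27B",  # placeholder id for the student checkpoint
    train_dataset=traces,
    peft_config=lora,
    args=SFTConfig(output_dir="qwen-cot-distill", max_seq_length=2048),
)
trainer.train()
```

The appeal of the LoRA route is cost: only the low-rank adapter weights are trained, so a 27B-class student can be tuned on a fraction of the hardware full fine-tuning would need, and the adapter ships as a small standalone artifact.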