Update (April 2026): New reports have clarified the role of AI in the Iran strikes — and it’s very different from what people first believed.
Earlier claims suggested that Anthropic’s Claude AI was directly involved in selecting targets. However, new findings show a more complex reality.
🚨 What’s New in This Update?
- Claude AI was not directly selecting targets
- The main targeting system was Palantir's Maven AI
- Claude acted as a support tool for analysis and summaries
- The Pentagon has since restricted Anthropic's tools
🧠 What Role Did Claude Actually Play?
According to updated reports, Claude AI was used for:
- Summarizing intelligence reports
- Helping analysts understand data faster
- Supporting decision-making (not controlling it)
This means Claude was part of the system — but not the decision-maker.
⚠️ The School Strike Controversy
A major controversy erupted after a school was mistakenly hit during the operation.
- Initial blame fell on AI (Claude)
- Later investigations traced the error to:
  - Outdated data
  - Accelerated decision systems (the "kill chain")
  - Insufficient time for human verification
This shows the problem is not just the AI itself, but how it is used.
🏛️ Pentagon vs Anthropic
After the incident:
- The Pentagon labeled Anthropic a "supply chain risk"
- Claude is being phased out of military systems
- Other AI providers may replace it
📊 Bigger Picture: AI in War
This conflict is now being called the first AI-integrated war.
- AI is speeding up battlefield decisions
- Direct human control is decreasing
- Ethical concerns are rising globally
✅ Final Verdict (Updated)
Claim: Claude AI controlled Iran strikes
Status: ❌ Misleading
Reality:
- Claude AI was used, but only as a support tool
- The main control system was a different platform (Maven AI)
- Decisions combined human judgment with automated systems
Conclusion: The real story is not just AI; it is how fast, complex systems are changing modern warfare.
For earlier reports, read our previous coverage: Claude AI Iran Strikes (March 2026).

