On the first weekend of March, Iranian drones struck three AWS data centers in the UAE and Bahrain. Fires broke out. Sprinkler systems activated, flooding server equipment with water. Then power was cut. Dozens of cloud services went dark for over 24 hours.
It may be the first time a military strike has taken a major cloud provider offline. It certainly won't be the last.
The facilities were part of a massive U.S. push to build AI compute infrastructure across the Gulf, with over $2 trillion in investment pledges. Oracle committed $14 billion to Saudi Arabia. Washington's "Pax Silica" initiative focused on securing chip supply chains and political alignment. Nobody planned for drones.
As CSIS put it: "If Compute is the New Oil, War in the Gulf Significantly Raises the Stakes."
The targets weren't AI-specific, just hyperscale data centers running banking, government, and enterprise workloads. But every major cloud region runs AI too. The disruption rippled across the region and, in some cases, globally. That weekend gave the world a blueprint for taking critical AI infrastructure offline.
You don't need a missile
"Unplugging AI" doesn't mean what most people think. You don't necessarily need to cut cables or blow anything up. You just need to degrade systems below operational usefulness, and there are many ways to do that.
A blade server cabinet hits thermal shutdown in 60 seconds without cooling. Last November, a cooling failure at CME Group halted global derivatives trading for 10 hours. Nobody attacked anything. The cooling just stopped. In the Middle East's heat, losing cooling for even a minute is the difference between a data center and an oven.
Last October, a software bug in a single AWS facility cascaded across over a hundred services and generated millions of disruption reports worldwide. The hardware was fine. The control plane broke and everything went with it.
Data centers keep 24 to 72 hours of diesel for generators. Block the fuel trucks for three days and the power stops. Seventy-five percent of building management systems have known exploited vulnerabilities. Compromise one remotely and you control cooling, power, and fire suppression. A BGP hijack in 2024 broke DNS across 300 networks in 70 countries. Destroy a power transformer and the replacement takes four years.
I've seen this firsthand. In 2007, when I was CTO at Rackspace, a truck driver hit a utility pole outside our Dallas-area data center and knocked a transformer off its pad. Our generators kicked in, but when first responders arrived, the utility killed power on both feeds to safely rescue the driver. Redundancy didn't matter. We had to shut servers down proactively to prevent thermal damage. The people saving a life don't care about your uptime SLA.
Every one of these vectors has been demonstrated. Any of them would work against AI infrastructure.
The cascade
I'm not a military strategist. But I've spent my career watching complex systems fail, and the pattern is always the same: catastrophic failures are chain reactions. One small disruption triggers the next, and the next, until the ability to respond is overwhelmed.
An adversary doesn't need to destroy data centers. They need simultaneous disruptions across enough vectors (ransomware on utility controls, a few substations hit, a BGP hijack, a BMS compromise) to create a cascading failure that degrades AI-dependent operations across a theater. While drones hit UAE infrastructure that weekend, dozens of hacktivist groups launched attacks across the same region within 72 hours. That's a preview.
The infrastructure playbook has been tested for years. Stuxnet. Industroyer. Volt Typhoon pre-positioned inside U.S. systems since 2023. A Venezuela grid attack timed to military operations in January. Only 10 to 20 percent of the U.S. electricity system is under federal cyber oversight. And we're building the most power-hungry technology in history on top of it.
It's not a contest of intelligence
The Pentagon is building AI to attack adversary infrastructure. Our AI runs on the same kind of infrastructure. The Army knows this. Project Janus puts nuclear microreactors at bases because the military doesn't trust the civilian grid.
Meanwhile, Washington is fighting over which AI companies get to work with the military, not whether the infrastructure those models run on can survive a fight. On February 27, the Trump administration banned Anthropic from all federal contracts and designated it a "supply chain risk" after CEO Dario Amodei refused to remove contractual restrictions on autonomous weapons and mass surveillance. Hours later, OpenAI signed its own Pentagon deal, accepting the "all lawful purposes" standard Anthropic had rejected. MIT Technology Review later reported that OpenAI's contract language on autonomous weapons contains a loophole wide enough to drive a drone through.
The irony: Anthropic's refusal to cooperate made it so popular that Claude suffered a major outage from the surge in new users. The company that took an ethical stand on infrastructure weaponization got taken down by its own success.
That's the debate we're having: which AI model gets to power the weapons. Not whether the data centers running those models can survive the same kind of attack we just watched play out in the Gulf.
The AI arms race is 100 percent focused on who has the smartest model. The first weekend of March proved the real competition is who can keep their systems running if and when things break. A smarter AI that goes dark is worth less than a dumber one that stays online.
They don't need to beat our AI. They just need to unplug it. And that weekend, someone showed the world how.