Tech companies determine the parameters of war
Agri Ismaïl
This is a cultural article that is part of Aftonbladet's opinion journalism.
Published 05.00
The deadly missile attack on the girls' school in Iran on February 28 this year is thoroughly contemporary in character. Donald Trump blames Iran and Al Jazeera blames Israel, even though photographic evidence suggests that it was carried out by the US. As with the 2023 attack on the Al Ahli hospital in Gaza, the truth is deliberately blurred until perpetrators and victims become interchangeable. But the attack reminds me most of a much older bombing, from NATO's 1999 campaign against Yugoslavia. Back then, NATO relied on a seven-year-old CIA map to bomb what it believed was an arms supplier in Belgrade, but which turned out to be the Chinese embassy.
There is much to suggest that the attack in Iran was a similar mistake. The school was located near buildings belonging to the Islamic Revolutionary Guard Corps; on Google Maps, one of the buildings is tagged as “Islamic Revolutionary Guard Corps Navy Medical Command”. In addition, satellite images show that the wall that separated the school from the base is about ten years old, which suggests that the school building once belonged to the Revolutionary Guards.
But unlike the attack on the Chinese embassy, which then-US President Bill Clinton described as a "tragic mistake", the US will not take responsibility for what happened. Not only because current President Donald Trump seems incapable of admitting that he has ever done anything wrong, but mainly because the school was very likely designated a military target by the Pentagon's AI models. We know that the US used the Maven Smart System, a platform built by Palantir with which the military until recently communicated via Anthropic's AI model Claude, during the first 24 hours of the war, when a thousand military targets were identified and bombed. The most likely conclusion is that this school, too, was a target identified by an AI system.
Bombed schools are a price that the US military seems prepared to pay
And AI mistakes are not seen as something for which a human is responsible: the moral accountability of war has been outsourced to the machine. An IBM training manual from the 1970s stated that a computer "can never be held accountable", and that a computer must therefore "never make a management decision". But new technology makes warfare so fast that a human barely has time to review it all. Bombed schools are a price that the US military seems prepared to pay.
What is now taking place in Iran is not the first AI war: Israel's attacks in Gaza are made possible, among other things, by the AI system "Habsora" ("Gospel"), which generates targets for the military in real time. But since this war involves the talkative US military, our access to information is not as limited as it is with the Israeli army. The Wall Street Journal's review of how AI is being used therefore gives us a clearer picture of the future of war than we have previously had.
Many who have used ChatGPT, for example, probably find it difficult to understand the hype around AI: the bots often give wrong answers, and do not make us more productive. Why should we burn all the world’s raw materials and electricity for a technology that is barely better than Microsoft’s old paperclip Clippy? The answer lies in the intelligence services. After September 11, 2001, the National Security Agency (NSA) began collecting so much data that analysts were only able to review 4 percent of the intelligence material. The whole could only be seen by God himself, said former Pentagon intelligence chief James Clapper.
The most important function of AI tools is therefore not to answer our emails, write high school papers or replace our search engines, but to be precisely the god that Clapper asked for: Israeli intelligence agencies may have monitored hacked traffic cameras in Tehran and intercepted the communications of high-ranking officials, according to the Wall Street Journal, but before AI it would have been impossible to sift through all the material.
Now we can. Instead of it taking the US eight months to find Saddam Hussein, as it did in 2003, Ali Khamenei can now be killed on the first day of a war. Venezuela's Nicolás Maduro can be swiftly kidnapped and flown to the US. Anyone can be killed at any time. That is the true potential of AI.
AI Systems Reestablish American Supremacy
And today, only a few countries have access to this technology. Nvidia’s “superchip” processors, the backbone of the AI industry, are designed in the US and Israel and built in Taiwan (the CIA has indeed told American tech companies that China plans to conquer Taiwan as early as next year, if anyone was hoping for a calmer 2027), giving countries that can import these chips a huge advantage.
Technological dominance has always been a central part of warfare, and empires have long been able to wage war without actually having to risk the lives of their own citizens. When the British invaded Sudan in the late 19th century, they were able to capture the city of Omdurman in just a few hours with the help of the Maxim machine gun. That resulted in the deaths of 10,000 Sudanese soldiers and 48 British. During the first Gulf War in 1991, 20,000 Iraqis and 148 American soldiers died. Thanks to drone technology over the past decade, wars have become even more asymmetric: one side can sit half a world away and remotely control warplanes without even being in the war zone.
But drones had a problem: they were too cheap and too easy to build. China, Turkey, India, Pakistan, Iran, and Russia all now have their own drones. In the US’s eagerness to increasingly separate the American soldier from the American war, warfare has become more egalitarian than it has been in a long time.
AI systems are reestablishing American dominance. But this dominance is now controlled by AI companies, as we saw when the Pentagon fell out with Anthropic after the company, among other things, demanded that its AI not be used to power autonomous weapons. A few days later, competitor OpenAI secured the contract that Anthropic refused to sign. We thus find ourselves in a situation where the parameters of war are no longer determined by states, the UN, or the Geneva Conventions, but by AI companies whose primary obligation is not to citizens but to the profit interests of their owners.
Agri Ismaïl is an author and cultural writer, his latest book is the novel Hyper.