Humanity’s first ‘AI war’ is unfolding in the Middle East.
The speed of warfare has accelerated to an unprecedented degree, but the outcomes that speed produces have diverged from expectations.

On February 28, the U.S. and Israel launched simultaneous airstrikes against Iran in an operation named ‘Epic Fury.’ Within the first 24 hours, roughly 1,000 military targets were struck, double the rate of the ‘shock and awe’ campaign that opened the 2003 Iraq War.
That speed was made possible by the AI-based targeting system ‘Maven.’ Supplied by the U.S. defense contractor Palantir, the system simultaneously ingests and analyzes satellite imagery, drone footage, intercepted communications, and radar data, automatically generates a list of strike targets, and even recommends which weapons to use against them.
Put simply: in the past, dozens of intelligence analysts would stay up all night poring over maps and photographs to identify targets. Now AI does it in seconds.
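To make that division of labor concrete, here is a minimal sketch of what such a multi-source fusion pipeline might look like. It is purely illustrative: the class names, the location clustering, and the scoring rule are assumptions invented for this example, not Maven’s actual design.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch only: class names, the clustering step, and the
# scoring rule are assumptions for this example, not Maven's design.

@dataclass
class SensorReport:
    source: str            # e.g. "satellite", "drone", "sigint", "radar"
    location: tuple        # (latitude, longitude)
    confidence: float      # per-source detection confidence, 0.0-1.0
    observed_at: datetime

@dataclass
class TargetCandidate:
    location: tuple
    reports: list = field(default_factory=list)

    def score(self) -> float:
        # Naive fusion: more independent source types and higher average
        # confidence push a candidate up the recommended strike list.
        source_types = {r.source for r in self.reports}
        avg_conf = sum(r.confidence for r in self.reports) / len(self.reports)
        return len(source_types) * avg_conf

def recommend_targets(reports: list[SensorReport], top_n: int = 10) -> list[TargetCandidate]:
    # Group reports that point at roughly the same coordinates.
    buckets: dict[tuple, TargetCandidate] = {}
    for r in reports:
        key = (round(r.location[0], 2), round(r.location[1], 2))
        buckets.setdefault(key, TargetCandidate(location=key)).reports.append(r)
    # Rank all candidates and return the top of the list -- in seconds,
    # where analysts once needed nights of manual review.
    return sorted(buckets.values(), key=lambda c: c.score(), reverse=True)[:top_n]
```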
Admiral Brad Cooper, commander of U.S. Central Command, confirmed as much on the record: “Thanks to AI, tasks that used to take hours or sometimes days are completed in seconds.” AI has fundamentally changed the speed of warfare.
But on the first day of the war, a U.S. Tomahawk missile struck an elementary school in Minab, in southern Iran. According to Iranian authorities, more than 170 people were killed, most of them students and staff who were in class at the time.
The missile did not target the school intentionally. The actual target was the adjacent naval base of Iran’s Islamic Revolutionary Guard Corps (IRGC). The problem was that the school building had originally stood inside the base compound but was split off and converted into an elementary school sometime between 2013 and 2016. The U.S. targeting database was never updated to reflect the change and still classified the building as a military facility.

Multiple outlets, including The New York Times, CNN, and the BBC, confirmed through satellite-image analysis and expert assessments that the school was hit by a U.S. Tomahawk missile. A preliminary investigation by the U.S. Department of Defense likewise tentatively concluded that the U.S. was likely responsible. More than 120 members of Congress formally asked, “Was the Maven system involved in identifying the building as a target, and did a human verify it?” The Department of Defense has yet to answer.
Ultimately, the core issue in this incident is not an AI malfunction but outdated data. No matter how quickly a system can render judgments, if the information those judgments rest on does not reflect reality, the results will be wrong.
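The failure mode, in other words, lies in the record, not the model. Below is a minimal sketch of one obvious safeguard, built on assumptions of this column’s own: the field names, the one-year threshold, and the sample record are all invented for illustration.

```python
from datetime import datetime, timedelta

# Illustration of the failure mode: every field name, the one-year
# threshold, and the sample record below are invented for this sketch.

MAX_CLASSIFICATION_AGE = timedelta(days=365)

def requires_reverification(target: dict, now: datetime | None = None) -> bool:
    """Flag any target whose classification has not been confirmed by
    a human within MAX_CLASSIFICATION_AGE."""
    now = now or datetime.utcnow()
    return (now - target["last_verified"]) > MAX_CLASSIFICATION_AGE

# A record like the Minab school -- classified as a military facility
# years ago and never revisited -- would be flagged here and routed back
# to a human analyst instead of straight onto a strike list.
stale_record = {
    "id": "IRGC-NB-214",               # hypothetical identifier
    "classification": "military facility",
    "last_verified": datetime(2015, 6, 1),
}
assert requires_reverification(stale_record)
```

A guard like this does not make the pipeline smarter; it only forces stale records back to a human before they can become coordinates.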
Another notable scene in this war was an AI company clashing head-on with the U.S. government over how far its own technology could be used for military purposes.
Anthropic, the developer of the AI model Claude, had been supplying technology to the Maven system for use in actual military operations. Just before the war, Anthropic CEO Dario Amodei declared that he would not permit Claude to be used for fully autonomous weapons or for mass surveillance of citizens. His reasoning was blunt: “AI is still not trustworthy enough for such purposes.”
The Department of Defense did not accept this. Secretary of Defense Hegseth designated Anthropic a ‘supply-chain risk company,’ and President Trump ordered government agencies to stop using Claude. Anthropic sued.
On March 26, Judge Rita Lynn of the federal court in San Francisco ruled in Anthropic’s favor, holding that punishing a company for publicly raising AI-safety concerns is unconstitutional. It was the first case in which an AI developer took the government to court over the military scope of its technology.

Should AI be used in war at all? It is hard to answer with a flat ‘it should not.’ Realistically, AI is already in use, and no nation is likely to refrain in the future. Unilaterally giving up AI combat systems while countries like China and Russia develop them eagerly would be militarily suicidal. The question is not whether to use AI, but how and to what extent.
This is where the core debate splits. AI overwhelms humans at data analysis and pattern recognition: reviewing tens of thousands of satellite images at once, or cross-analyzing hundreds of signals, is physically impossible for a human.
Applying that capability to reconnaissance, surveillance, logistics, and communications is a matter of efficiency. Selecting targets and deciding whether to attack, however, is a different stage altogether: it requires legal judgment, ethical judgment, and an understanding of context. AI is not yet at a level where it can make those judgments reliably.
The biggest risk of AI in warfare is not that it malfunctions but that it works too well. The faster and more accurately AI recommends targets, the less inclined humans become to question its output.
Faced with a system that processes hundreds of targets in seconds, an officer cannot realistically review each one by hand. The principle that ‘a human makes the final decision’ survives on paper, but the real weight of those decisions keeps shifting toward the machine.
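What a ‘human in the loop’ gate looks like can be sketched in a few lines. Again, this is an illustration built on assumptions, with no real system implied, but it makes the bottleneck described above visible in code.

```python
import queue

# Illustrative only: no real system is implied. The gate enforces one
# rule -- every AI recommendation waits for an explicit human verdict.

class StrikeApprovalGate:
    def __init__(self) -> None:
        self.pending: queue.Queue = queue.Queue()

    def submit(self, recommendation: dict) -> None:
        # The model can submit hundreds of recommendations per minute...
        self.pending.put(recommendation)

    def review_next(self, human_decision) -> dict | None:
        # ...but each one leaves the queue only through a deliberate
        # human judgment; human_decision must never be a model output.
        if self.pending.empty():
            return None
        rec = self.pending.get()
        rec["approved"] = bool(human_decision(rec))
        return rec

gate = StrikeApprovalGate()
gate.submit({"target": "IRGC-NB-214", "model_confidence": 0.97})
# A cautious reviewer declines despite the model's high confidence.
decision = gate.review_next(lambda rec: False)
print(decision)  # {'target': 'IRGC-NB-214', 'model_confidence': 0.97, 'approved': False}
```

The design choice is the point: if recommendations are submitted faster than review_next() can be called, the growing backlog is itself the evidence that the human check is being outpaced.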
The most fundamental question this AI war poses is responsibility. When the machine recommends and the human approves, who is accountable for a wrong result: the company that developed the AI, the military that operated the system, or the commander who approved the operation? At present there is no legal or institutional framework capable of answering.

When nuclear weapons emerged, the international community eventually built the Non-Proliferation Treaty (NPT); chemical weapons got the Chemical Weapons Convention (CWC). AI weapons have no such international norms yet.
The UN adopted a related resolution last December and has scheduled multilateral consultations for June of this year, but these remain non-binding recommendations. While countries race to arm themselves with AI, rule-making lags far behind the pace of the technology.
Ultimately, what is needed now is neither a total ban on AI nor unlimited permission. What matters is drawing a precise line between the stages AI may decide and the stages that demand human intervention.
The key point is that this is not a technological problem but a matter of human choice. Using AI in warfare may be inevitable; letting AI make war decisions on its own is not.
The race in AI military technology is not a story about superpowers alone. South Korea ranks as the world’s fifth-strongest military power on the Global Firepower index, and it is a holder of AI technology in its own right.
The Ministry of National Defense is already accelerating the development of unmanned and autonomous weapons, from drone-bot combat systems to AI-based surveillance. Facing North Korea, rapid response capability is directly tied to survival, and the pressure to adopt AI is greater than in almost any other country.
Yet that is precisely why rules are necessary. If a system designed to respond automatically to enemy provocation malfunctions or misjudges, the entire Korean Peninsula could be thrown into turmoil. In an environment where speed guarantees survival, the catastrophe that the wrong kind of speed can create is just as vast.
This is why South Korea needs to find its voice in this debate. In the process of establishing international norms for AI weapons, it holds a rare position as both a technology holder and a stakeholder in a region of confrontation.
The realistic path is for middle powers to propose principles first, before the major powers write the rules to fit their own interests. The era of AI in warfare has already arrived. Who gets to write the rules of that era is the real question South Korea must answer now.