
Killer algorithms: When code becomes a tool of extermination

Dr. Khaled Walid Mahmoud

09 Jul 2025

In the third decade of the 21st century, artificial intelligence is no longer just an auxiliary tool in our daily lives—it has become a central actor on the battlefield.

Automated systems now make life-and-death decisions without direct human oversight.

This phenomenon, known as “killer algorithms,” marks a qualitative shift in the history of warfare and raises existential questions about the future of armed conflict, the limits of human agency, and the ethics of war in the digital age.

Estimates indicate that over 30 countries are currently developing autonomous weapon systems.

These systems rely on three main pillars: sensor networks that gather data from multiple sources, machine learning algorithms capable of pattern recognition and behavioural prediction, and autonomous execution systems that make decisions without a human operator or commander.
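
To see how these pillars chain together, consider a deliberately simplified Python sketch; every name, weight, and threshold in it is invented for illustration, and it models the decision loop in the abstract rather than any actual system.

```python
# Purely illustrative: a toy model of the three-pillar decision loop described
# above. Every name, weight, and threshold here is invented; no real system
# is modelled or implied.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str        # e.g. "radar", "satellite", "signals intercept"
    threat_cue: float  # a normalised cue in [0, 1]

def fuse(readings: list[SensorReading]) -> float:
    """Pillar 1: aggregate data gathered from multiple sources."""
    return sum(r.threat_cue for r in readings) / len(readings)

def predict_threat(feature: float) -> float:
    """Pillar 2: stand-in for a trained machine-learning model's output."""
    return min(1.0, feature * 1.2)  # placeholder, not a real classifier

def autonomous_loop(readings: list[SensorReading], threshold: float = 0.9) -> str:
    """Pillar 3: the decision is taken with no operator or commander."""
    probability = predict_threat(fuse(readings))
    return "ENGAGE" if probability >= threshold else "HOLD"

# From sensing to a firing decision without a human anywhere in the chain.
print(autonomous_loop([SensorReading("radar", 0.80),
                       SensorReading("signals intercept", 0.95)]))  # ENGAGE
```

The essential point sits in the final line: the loop runs from sensing to a firing decision with no human anywhere in the chain.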

What distinguishes these systems is their ability to operate in complex environments where human decision-making may be too slow or imprecise—such as in counterterrorism operations or high-intensity urban combat.

Perhaps the most striking example of these technologies in action was the series of precise operations targeting prominent Iranian figures in mid-2025.

Evidence suggests that the platforms used were not merely conventional drones, but intelligent combat systems capable of tracking targets for weeks, collecting and analysing vast amounts of data, and selecting the optimal moment to strike—based on variables such as location, weather, civilian movement, and even anticipated media impact.

The accuracy of some operations reportedly reached nearly 92%, reflecting the advanced nature of this technology.

The most dangerous aspect of these systems is their capacity for continuous learning.

They do not operate based on fixed programming; rather, they evolve over time and adjust their decisions based on previous experiences and new data.

This makes them more efficient—but also less predictable.

An algorithm that behaves one way today might behave entirely differently next week, even under the same mission parameters.
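
A deliberately simple illustration makes the point. The toy below is a hand-rolled online perceptron, invented purely for this article, that returns a different answer to the identical input after “learning” from a few new examples:

```python
# A deterministic toy: an online learner that updates its weights as new data
# arrives. The labels and numbers are invented for illustration only.

w = [1.0, -1.0]   # the model's current "knowledge" (two weights)
LR = 0.5          # learning rate

def predict(x):
    """Returns 1 ('threat') or 0 ('no threat') -- purely illustrative labels."""
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0

mission = [0.6, 0.4]                     # identical parameters on both occasions
print("this week:", predict(mission))    # -> 1

# New "experience" streams in: a nearby case labelled 0. The perceptron
# update rule nudges the weights after every mistake.
for _ in range(5):
    x, label = [0.55, 0.45], 0
    error = label - predict(x)
    w[0] += LR * error * x[0]
    w[1] += LR * error * x[1]

print("next week:", predict(mission))    # -> 0: same input, different decision
```

Nothing in the mission parameters changed between the two runs; only the model’s accumulated experience did.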

Here lies a profound ethical and legal dilemma: who is responsible if such systems make a mistake and kill civilians? The programmer? The operator? The state? Or has artificial intelligence become a new kind of legal actor, unbound by identity or accountability?

Israel’s experience exemplifies the shift from traditional deterrence to what might be called “algorithmic deterrence.”

By integrating the capabilities of Unit 8200—specialised in cyber warfare—with startups focused on data analytics and behavioural prediction, Israel has developed systems that detect threats, analyse them, and carry out surgical assassinations before those threats materialise.

This strategy seeks not to respond to attacks, but to prevent threats from emerging in the first place, via what could be described as “predictive preemptive killing.”

Remarkably, such operations require no human presence on the ground; instead, they are orchestrated from high-tech digital command centres where targets are monitored and “engagement criteria” are verified before the strike is executed within a fraction of a second.

This transformation is not exclusive to Israel or the United States. China, Russia, Türkiye, and other rising regional powers have entered the race to develop algorithmic command and combat systems.

In some cases, AI networks have been built to coordinate between autonomous land, air, and sea units without direct human supervision—relying on real-time analysis of data from multiple sensors and intelligence sources.

These capabilities allow military decisions to be taken faster than any human could respond, but they also raise grave risks: What happens if rival algorithms clash on the battlefield? Could a war erupt due to a computational error? And what if an attack decision is left to a system that understands neither diplomacy nor intent?

Even more alarming is the proliferation of this technology to non-state actors.

With the spread of open-source programming tools and the decreasing cost of drones, a militant group—or even a technically skilled individual—could design a rudimentary algorithm to target a specific adversary based on facial recognition or digital signals.

This trend opens the door to the “democratisation of digital killing,” where warfare is no longer the domain of armies but a chaotic arena for mercenaries, hackers, and anarchists.

War is no longer solely physical; it is psychological and informational. In modern operations, AI-backed cyberattacks aim to destroy morale—through disinformation, deepfakes, and fake accounts that create confusion and mistrust within enemy ranks.

It is a “soft war” that targets the mind before the body and reshapes the internal landscape of political and security decision-making.

All of these developments are occurring in the absence of a clear international legal framework regulating the use of killer algorithms.

Existing treaties—chief among them the Geneva Conventions—were established in a time when war was an entirely human endeavour.

Today, there is no binding treaty governing autonomous killing systems, no obligation for states to disclose their combat algorithms, and no accountability for developers.

There are calls for a “Digital Geneva Convention,” but so far, major powers have resisted any constraint that could limit their strategic superiority.

Despite some researchers’ efforts to embed ethical values into algorithms, these attempts remain limited.

Algorithms do not comprehend the difference between a child and a combatant hiding among civilians.

They analyse probabilities and execute when a certain threshold of “threat” is exceeded. In such cases, ethics becomes a mathematical variable, not a human principle.
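
Stripped to its logic, the entire judgement is one comparison against an arbitrary constant. The hypothetical fragment below, with invented names and numbers, shows why a misidentified child and a genuine combatant can become indistinguishable once both have been converted into a probability:

```python
# Illustrative only: the "ethics" of the strike collapses to one comparison
# against an arbitrary constant. The function has no concept of "child" or
# "combatant"; both arrive as numbers.
THREAT_THRESHOLD = 0.85   # a number standing in for a moral judgement

def decide(threat_probability: float) -> str:
    # No context, no intent, no distinction -- only arithmetic.
    return "EXECUTE" if threat_probability >= THREAT_THRESHOLD else "HOLD"

# A misclassified child scored at 0.86 and a genuine combatant scored at 0.86
# produce exactly the same outcome: the system cannot tell them apart.
print(decide(0.86))   # EXECUTE
print(decide(0.86))   # EXECUTE
```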

In this new world, the human becomes a variable in an equation—not the decision-maker, but the one who bears the consequences.

One’s fate may be calculated in a predictive report read only by artificial intelligence. And herein lies the gravest dilemma: if we do not set clear limits on what machines can do, we may find ourselves living in an era where killing is executed at the push of a button—without memory, remorse, or responsibility.

In conclusion, what we are facing is not merely a technological shift but a pivotal moment in the evolution of humanity itself.

Killer algorithms force us to redefine the relationship between human and machine, between power and responsibility, and between war and justice.

If the international community does not act swiftly to craft new rules to restrain this power, the wars of the future will not be fought between armies—but between algorithms.

And we, quite simply, will be digital targets.

The writer is a researcher specialising in cyber politics and holds a PhD on the topic of “Cyberspace and Power Shifts in International Relations.”