Using Artificial Intelligence for Military and Nuclear Purposes Is Imprudent, Says SIPRI Report

According to a SIPRI report, the current AI renaissance is bound to have an impact on nuclear weapons and doctrines, generating opportunities but also risks, old and new, for strategic stability.

By Sri Krishna


The premature adoption of artificial intelligence (AI) technology by nuclear-armed states for military purposes in general, and nuclear-related purposes in particular, could increase the risk that nuclear weapons and related capabilities fail or are misused in ways that trigger the accidental or inadvertent escalation of a crisis into nuclear conflict, a report says.

In a roughly 158-page report prepared for the Stockholm International Peace Research Institute (SIPRI), four scholars write that nuclear-armed states, even those with the most developed vision of the strategic role that AI could play, have so far issued little official information on whether, and how, they would use advances in AI in nuclear weapon systems.

Instead, the report says, existing official documents tend to address the legal, ethical and security challenges posed by the increasing use of AI and robotics in conventional and cyber weapons. From the statements that states have made about lethal autonomous weapon systems, it can be deduced that they see risks in the increasing delegation of tasks to AI systems, and that they recognise a need to ensure humans retain a form of meaningful control over any nuclear launch decision.

The report is the final outcome of a two-year research project conducted by SIPRI on the impact of advances in artificial intelligence on nuclear weapons and doctrines.

It notes that the connection between AI and nuclear weapons is not new. As early as the 1960s, when the discipline was young, nuclear-armed states identified that AI technology could play a role in the nuclear enterprise.

As the Soviet Union and the United States had both developed launch-on-warning postures, AI was seen as a technology that could enable automated or semi-automated early-warning and command-and-control systems, allowing strategic commands to identify threats and select adequate responses more quickly.

Moreover, machine learning is a multipurpose technology. It can therefore unlock new and varied possibilities across a wide array of nuclear weapon systems, from early warning through command and control to weapon delivery.

The report says that India, despite its reputation in software and IT, is still in the early phases of AI policy adoption, and the policy document that outlines its ambitions in the field suggests that it currently prioritises development for civilian purposes. Yet the recent establishment of two multi-stakeholder task forces, one to explore civilian AI applications and one to explore military ones, indicates that India aims to make progress in both spheres.

China has recently released a spate of official documents and programmes on AI indicating that it intends to take a leading role in the field, notably through its ability to generate synergies between civilian and military AI advances. Russian official statements and platforms likewise signal the centrality of AI development to the country's military aims. For both China and Russia, the benchmark appears to be the US vision and plans for AI in the military sphere: Chinese and Russian official documents and expert commentaries often refer to what the US is doing, and both countries are prioritising the same types of AI-enabled capabilities that the US has developed or is developing.

France and the United Kingdom also have ambitions to be great powers in AI, but they have only just begun to articulate concrete visions and plans for how they intend to use AI in their armed forces.

Pakistan’s official vision for AI has so far been limited to initiatives that set general objectives for ensuring Pakistan’s competitiveness in AI.

The current AI renaissance is bound to have an impact on nuclear weapons and doctrines, the report noted. It will generate opportunities but also risks, old and new, for strategic stability. The adoption of recent advances in machine learning and automation in the military sphere, and in nuclear weapons in particular, will be incremental and take time.
However, it is not too early for states and international organisations to explore policy options and identify opportunities to tackle the challenges these technologies present. Such an effort could even provide a useful opportunity for nuclear-armed states to discuss nuclear risk reduction, among themselves and with the global community of states, in a constructive and collaborative manner.

– The author is a senior journalist and media consultant