Development Of Military Applications For AI Raises Ethical And Safety Concerns

Artificial Intelligence (AI) has immense applications for defence and security; consequently, military AI has become a national priority in several states. Concern remains, however, over how to integrate this new technology into society and govern it at both the national and international levels to prevent its undesirable use.

AI military applications range from intelligence gathering and analysis, autonomous vehicles and weapon systems to improved logistics (yielding greater military readiness) and efficient command and control, all of which have both constructive and destructive uses. For instance, AI algorithms applied to advanced sensors and large data sets provide rapid and precise reconnaissance, improve situational awareness and support decision-making. Fielding AI-enabled technologies tends to confer operational advantages and could help reduce the risk of collateral damage.

Beyond decision support, data-driven technologies are essential for the efficient organisation and execution of military operations and for avoiding force attrition on the battlefield. AI can enable command and control that provides decision-makers with real-time intelligence and enhances cross-domain coordination among forces and assets. AI-based communications and automation can also reduce the exposure of military personnel on the battlefield and in routine maintenance. However, the heavy dependence on digital data makes these applications vulnerable to adversarial manipulation.

To gain relative advantage, several countries, including nuclear-armed states, have formulated policies and plans to invest in and develop AI military applications, intensifying strategic competition. States such as the US, France and Israel have published military AI policies and strategies. The US, for example, is integrating AI into drones, fighter aircraft, ground vehicles and naval vessels.

The US Department of Defence (DoD) announced in 2021 that it was “working to create a competitive military advantage by embracing and leveraging AI.” Russia, for its part, has developed systems such as the Uran-6 mine-clearing robot and the KUB-BLA suicide drone, which identifies targets using AI. President Vladimir Putin has said that “whoever becomes the leader in this sphere will become the ruler of the world.” In August 2022, Russia established a special department to develop AI-enabled weapons, and its experience in Ukraine will reportedly help make these weapons “more efficient and smarter.” Likewise, China is investing in and developing AI-enabled technology and autonomous weapon systems.

Notably, this mounting pressure of strategic competition could lead states to deploy AI applications in military operations prematurely, resulting in failures, accidents, miscalculations and inadvertent escalation. Alternatively, fast-moving states could overwhelm and overtake slower ones in deploying and integrating AI into the military realm. Either way, the wide deployment of advanced AI military applications across the world poses a governance challenge.

However, deploying AI military applications is not free of ethical challenges. Scientists and researchers working in computing, algorithms and machine learning are aware of the destructive potential of autonomous weapons on the battlefield, and have repeatedly called for controls, conventions and norms around AI military applications. In 2018, nearly 3,000 Google employees signed a letter calling on the company to withdraw from the Pentagon’s Project Maven, which was designed to apply AI to the battlefield. Campaigners such as the Stop Killer Robots coalition are likewise working against the development and integration of AI-enabled weapons. Despite these concerns, states continue to develop such weapons. A rigorous understanding of societal values could help prevent the development and use of AI military applications for malicious purposes.

Artificial Intelligence is likely to function as a force multiplier that could enhance the capabilities of cyber weapons, drones, anti-aircraft systems and fighting troops. Scholars argue that weapons technology is essential in determining the relative ease and success of attack and defence in military operations, which in turn affects states’ international relations, including arms races, the formation of alliances, strategic competition and the structure of the international system.

AI improves the quality of military forces and materiel, which is as important as their quantity. Moreover, the development and gradual integration of autonomous weapons into military operations could alter military culture by distancing humans (military operators and decision-makers) from the battlefield, compromising their authority in warfare and transforming an understanding of heroism that has always been associated with humans.

Advances in military technology over the centuries have changed the relationship between soldier and battlefield, and military AI will likewise gradually distance soldiers from battlefield threats. With every military innovation and modernisation, humans cede some of their utility, value and space in conducting operations to technology. The revolution in military affairs (RMA) and innovative developments in computing and the digital world have reshaped military operations as well as everyday life. Several studies have found an “automation bias” that leads humans to trust computers excessively, even when they know the outputs can be flawed or contradictory. AI military applications will clearly transform the outlook of the military and its relation to society, so it is important to align AI-enabled technology with societal values.

States need to democratically balance maintaining an open and stable international environment for AI research with protecting sensitive AI technologies from causing major disruption to that environment. To sustain openness and stability, cooperating states should develop and implement AI norms at the global level, such as the Organisation for Economic Co-operation and Development (OECD) AI principles and the European Union’s (EU) ethical guidelines for military uses of AI. Strategic competition among states is not the only concern: the low cost and off-the-shelf availability of these technologies amplify the risk of AI military applications proliferating to non-state actors. Rigorous public-private partnerships that enforce software and algorithm restrictions on, for instance, autonomous vehicles and robots could therefore help diminish non-state actors’ access to and use of such technologies.

Importantly, the serious ethical constraints associated with military applications of AI could arguably hamper their development and deployment by democratic societies more than by authoritarian ones, eventually placing democracies at a disadvantage vis-à-vis their authoritarian rivals. On the other hand, democratic societies encourage norms-based regulation, a tendency that can guard against the proliferation of AI-enabled technologies to non-state actors.

The need of the hour is for states to ensure that the development of AI military applications is consistent with liberal democratic values. Cooperative competition under a well-established norms regime can help mitigate the risk of non-state actors using AI in their operations, as well as guard against the complete loss of human control over military operations. Fostering public-private partnerships is also essential to addressing automation bias. At the same time, states’ investment in, and integration of, the private sector in developing AI applications helps build the deep human capital needed to field an AI-enabled military force, its weapon systems and tactics. Overall, cooperation among states and integration of the public and private sectors could underpin the development and implementation of norms governing military AI.

Dr. Salma Shaheen teaches in the Defence Studies Department at King's College London. She can be reached at shaheensalma7@gmail.com