Joëlle Tirza Fuchs
Introduction
As artificial intelligence (hereafter AI) develops at a rapid pace in the peace and security sector, the United Nations has been discussing and recognizing its implications not only for lethal autonomous weapons systems (hereafter LAWS) but also for cyberattacks, deepfakes and disinformation, encouraging states to implement regulations and national strategies and to collaborate at the international level (UNIDIR, 2023). The creation of the High-level Advisory Body on Artificial Intelligence in 2023 demonstrates the current need to discuss the opportunities and challenges posed by AI.
As described by the Office for the Coordination of Humanitarian Affairs (hereafter OCHA) (2024), AI performs various kinds of tasks: supervised, unsupervised and reinforcement learning, as well as generative tasks. Through supervised and unsupervised learning, the algorithm is trained to predict an outcome based on labeled and unlabeled datasets and the patterns within them. Reinforcement learning involves interaction with an environment that validates or invalidates outcomes. Through generative tasks, a new output is created from unstructured dataset inputs (OCHA, 2024).
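The distinction between the first two paradigms can be made concrete with a minimal, purely illustrative sketch (the data and functions below are hypothetical examples, not drawn from OCHA's briefing): a supervised model predicts a known label from labeled examples, while an unsupervised one discovers structure in the same data without any labels.

```python
# Illustrative sketch of the two dataset-driven paradigms described above,
# reduced to one dimension for clarity. All data here is hypothetical.

def supervised_predict(labeled_data, x):
    """1-nearest-neighbour: predict the label of the closest labeled example."""
    nearest = min(labeled_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

def unsupervised_cluster(points, iterations=10):
    """Tiny k-means (k=2): discover two groups in unlabeled data."""
    a, b = min(points), max(points)          # initial cluster centres
    for _ in range(iterations):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return a, b

# Supervised: labeled examples (value, label) let the model predict outcomes.
labeled = [(1.0, "low"), (1.2, "low"), (8.9, "high"), (9.3, "high")]
print(supervised_predict(labeled, 1.1))            # -> low

# Unsupervised: the same values without labels; the algorithm finds structure.
print(unsupervised_cluster([1.0, 1.2, 8.9, 9.3]))  # two centres near 1.1 and 9.1
```

The point of the sketch is that in both cases the outcome is entirely determined by the data supplied, which is what makes the quality and provenance of that data politically significant, as the next paragraph argues.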
At bottom nothing more than machine programming, AI is fueled by large amounts of data which, as debated in class through various examples of racist social media algorithms, can contain certain biases. In this sense, Kate Crawford (2021) argues that artificial intelligence is neither artificial nor intelligent, as it relies on natural resources and cannot operate independently. This statement suggests that machine training based on datasets, patterns and rewards not only evolves within but also reflects existing political and social structures, thereby reinforcing “existing dominant interests” (Crawford, 2021). Furthermore, algorithms governed by defined rules inevitably carry political implications (Musiani, 2013).
On this basis, the use of AI by military forces, employed in command and control (decision-making operations), information management, logistics and training, raises significant concerns as well as opportunities (UNIDIR, 2023).
Indeed, as emphasized by Jensen (2020), AI could enhance military effectiveness by maximizing the use of military resources and training, simulating defense strategies and intensifying political and information warfare. However, debates have emerged on the ethics of AI and on the extent to which the decision to kill should be conceded entirely to a machine. As discussed in class and in the articles by Robins-Early (2024) and Blanchard (n.d.), little consideration is given to developing ethics in the use of technology for military purposes, which is often undertaken by companies that do not transparently share how the technologies they develop function.
The aim of this essay, given the military context outlined above, is to explore the planetary costs of AI. Its three sections will look respectively at the environmental impacts, exploring in particular the natural resources strained by AI warfare; the social costs, including ethics, mass surveillance and privacy, discrimination against minorities and consideration of the body; and finally, the democratic impact of the use of AI in conflict, mainly through information warfare and disinformation.
Environmental impacts
As part of national military strategies, emerging technologies have attracted heavy investment in recent years. According to Brenes & Hartung (2024), the United States of America (hereafter USA) has demonstrated a keen interest in modernizing its military arsenal with AI-driven materiel. US firms, together with Saudi Arabia, have invested between $6 and $100 billion in the tech sector for military purposes and are urging the US government to follow this trend with more funding and less rigorous monitoring procedures (Brenes & Hartung, 2024). Acquired war materiel is slowly changing in nature: voluminous and expensive weapons such as combat aircraft could be replaced by “systems that can be produced relatively cheaply and in large quantities, while also being able to be replaced in short order if large numbers are lost in battle” (Brenes & Hartung, 2024, p. 5). However, these weapons require large amounts of natural material resources, with a major impact on the environment. Indeed, according to Crawford (2021), the AI sector proves to be highly energy-hungry and relies heavily on the extractive industry.
To understand the origin of the natural resources necessary to sustain the AI sector, the author Kate Crawford (2021) visited the Silver Peak Lithium Mine in Nevada’s Clayton Valley in the USA. The land, marked by a long history of mining and colonization, is closely connected to Silicon Valley, and minerals and metals have been extracted there since the nineteenth century. She relates that lithium is particularly important in the production of any rechargeable device, on which AI depends. The extractive industry has left the valley devastated, with severe environmental damage as well as poor health among miners and local residents. Nevada, however, is not the only land to have suffered from the AI development boom, as mining for the tech sector has become global.
“The cloud is the backbone of the artificial intelligence industry, and it’s made of rocks and lithium brine and crude oil.” (Crawford, 2021, p.31)
In addition to extracting natural resources that took the earth billions of years to produce, she relates that these are transformed, processed and transported, adding further environmental costs to the process of building the devices AI requires. As a consequence, a growing number of minerals are becoming scarce, including germanium, which is used in drones. Dominated by China, the rare earth mineral market involves dissolving minerals in sulfuric and nitric acid to extract the 0.2% of their mass used in technology (Abraham in Crawford, 2021). The environmental impact of this process is disproportionate to its benefit.
In addition to the natural resources it uses, AI infrastructure needs a great deal of energy to function. As an example of AI’s carbon footprint, Crawford (2021) explains that training a single natural language processing (hereafter NLP) model can emit as much carbon as 125 round-trip flights from New York to Beijing. On the generative AI side, producing a text requires intensive computation in data centers, generating heat that must be removed with water and air conditioning. According to Verma et al. (2024), a 100-word text generated by GPT-4 uses more than 500 ml of water and roughly the electricity needed to light a room for an hour. This electricity relies mostly on coal, gas, nuclear or renewable energy, which contributes to carbon emissions (Crawford, 2021).
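The scale of these per-response figures becomes clearer once aggregated. A back-of-envelope sketch, using only the 500 ml figure cited above and a purely hypothetical usage scenario (one response per week per user, one million users), illustrates the arithmetic:

```python
# Back-of-envelope sketch of the figures cited above (Verma et al., 2024).
# The per-response figure comes from the text; the usage scenario below
# (one response a week, one million users) is purely hypothetical.

WATER_ML_PER_RESPONSE = 500  # > 500 ml per ~100-word GPT-4 response (cited above)

def water_litres(responses):
    """Total water footprint, in litres, for a given number of responses."""
    return responses * WATER_ML_PER_RESPONSE / 1000

# Hypothetical: one response a week for a year, for one million users.
print(water_litres(52 * 1_000_000))  # -> 26000000.0 litres
```

Even under this conservative scenario, the result runs to tens of millions of litres of water per year, which is the order of magnitude behind the essay's environmental concern.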
As a result, we can readily draw the link between the massive production of AI-driven arms that could be integrated into warfare in the coming years and its impact on the environment, given the quantity of material it could potentially require.
Social impacts
To gain an informational advantage, states engaged in conflict accumulate vast amounts of data, not only about their own populations but also about their enemies’. According to Jacobsen and Liebetrau (2023), since the 1990s states benefiting from superior information management have been able to assert their military superiority, and the use of AI has amplified this dynamic, providing states with the potential to increase their battlefield efficiency and capabilities. However, while AI offers significant operational advantages, it also introduces profound ethical and social challenges. These include mass surveillance, facial recognition and violations of privacy in the context of war. At the same time, AI can have positive applications, such as improving responses to humanitarian crises by enabling more efficient aid delivery. This section explores both the opportunities and risks associated with AI in the context of war.
One prominent intersection of war and technology is facial recognition technology (hereafter FRT), which has been used, for instance, during the Russo-Ukrainian conflict. FRT is an AI application designed to analyze facial features for identification purposes (Espindola, 2023). This process involves comparing an individual’s facial features against extensive databases, including information drawn from social media (Espindola, 2023).
Firstly, FRT can have a positive impact on humanitarian operations. Article 17 of the First Geneva Convention (1949) stresses the importance of identifying deceased persons, whether nationals or enemies, insofar as circumstances permit. It is therefore a humanitarian duty to identify the bodies of nationals and enemies in order to return them to their families (Espindola, 2023). In this respect, according to the same author, FRT offers a major advantage in speeding up the identification process, and its use is justified. Increased access to humanitarian data has a positive social impact, as it provides humanitarian professionals with an accurate, wide-ranging and complex analysis of the situation on the ground and an effective anticipation tool for decision-making (OCHA, 2024; Hirblinger, 2022).
However, FRT also raises significant privacy concerns. Espindola (2023) notes that the large-scale collection and use of data without the consent of its owners amounts to mass surveillance and a violation of informational privacy. Extracted data such as people’s faces, activities and gestures are often analyzed and used to train AI models for facial recognition (Crawford, 2021). Espindola argues that stripping personal information of its specific context is problematic, as it turns personal information into aggregated, collective data. Consequently, individuals lose control over their information, compromising their autonomy. A controversial example is the use of Clearview AI, which poses significant challenges for the right to privacy. Espindola (2023) argues that, in the case of Ukraine, such a threat to individual rights might be justified by the broader collective gains during war. However, this rationale raises concerns, as citizens, particularly in vulnerable contexts, may too readily accept the loss of their privacy rights for the sake of war strategies. Such compromises risk leading to the misuse of this data post-conflict, with information being repurposed in contexts unrelated to the original war during which it was collected (Espindola, 2023). In addition, there is a risk of sensitive personal information collected on marginalized communities in humanitarian crises being disclosed, which could increase their vulnerability (OCHA, 2024).
Beyond privacy concerns, FRT and AI in general raise not only the issue of mass surveillance but also, in certain cases, questions of bias and discrimination. Espindola (2023) argues that AI can perpetuate systemic biases and discriminatory treatment, particularly affecting minorities and political dissidents. The use of AI for military purposes poses ethical dilemmas, as training data often reflects existing human biases. These biases are incorporated into AI systems through machine learning, leading to algorithmic bias (Jensen, 2020). As discussed in class, racist and sexist patterns embedded in the training data influence the AI’s decisions, ultimately reproducing societal inequalities and reflecting the social challenges present in society. In this sense, AI has a societal impact because it reinforces existing inequalities of treatment, which the technology blindly reproduces.
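The mechanism by which training data transmits bias can be shown with a deliberately minimal sketch (the data and decision rule below are hypothetical illustrations, not drawn from the cited sources): a model fitted to historically skewed decisions reproduces the skew, even though its code contains no explicit rule about group membership.

```python
# Illustrative sketch with hypothetical data: a model trained on historically
# biased decisions reproduces that bias, although the algorithm itself never
# mentions group membership explicitly.

from collections import defaultdict

def train(history):
    """Learn the historical approval rate for each group."""
    counts = defaultdict(lambda: [0, 0])        # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def predict(model, group):
    """Approve whenever the learned rate for that group exceeds 50%."""
    return model[group] > 0.5

# Hypothetical past decisions: group A was approved far more often than group B.
history = ([("A", True)] * 9 + [("A", False)]
           + [("B", True)] * 3 + [("B", False)] * 7)
model = train(history)

# Two otherwise identical candidates receive different outcomes:
print(predict(model, "A"))   # -> True
print(predict(model, "B"))   # -> False
```

The sketch makes the essay's point concrete: the discrimination lives in the data, not in any line of the algorithm, which is why "neutral" machine learning can still reproduce societal inequalities.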
Furthermore, given these known biases, the integration of AI into military targeting systems, such as drones, further complicates these ethical concerns. Wilcox (2017) explains that AI-driven systems contribute to the categorization of individuals based on physical characteristics, effectively determining who gets to live and who gets to die. This categorization often aligns with racial and gendered biases, framing certain bodies as “killable” or “manageable” (Wilcox, 2017, p. 14). In this sense, AI does more than merely analyze physical features in a rational way; it renders judgments through generative effects and algorithmic decision-making “[…] that enables distinctions between more or less worthy forms of life.” (Wilcox, 2017, p. 15).
This raises critical questions about the societal implications of allowing machines to make life-and-death decisions. According to Brenes and Hartung (2024), advanced technologies have significantly reduced the number of human actors and steps involved in such lethal decision-making processes. Moreover, the decision-making processes within AI systems often remain opaque, commonly referred to as the “black box”, making it difficult to understand how exactly conclusions are reached (UNIDIR, 2023). This opacity presents challenges in determining which tasks, if any, should be entrusted solely to AI. On the one hand, according to UNIDIR (2023), AI’s reasoning depends on the quality of its data, making it vulnerable to errors, data contaminated by cyberattacks, or flawed associations. On the other hand, human decision-making involves emotions, unpredictability and contextual nuances, which AI may overlook (UNIDIR, 2023).
Furthermore, AI lacks an intrinsic ethical framework for distinguishing acceptable actions from unacceptable ones. Unlike humans, machines do not inherently adhere to social norms (Dwyer, 2023). However, international conflicts occur within specific political, historical and social contexts, which should be considered when making decisions. As a result, Dwyer (2023) emphasizes the importance of keeping the human in the decision-making loop to ensure that AI decisions are complemented by ethical and contextual considerations. In addition, Blanchard (n.d.) argues that ethical considerations are absent from the multilateral discussions taking place around the regulation of AI, particularly when it comes to LAWS. He points out that even though ethics is recognized as an essential element of the discussion, its role is not defined and is overshadowed by legal considerations under international humanitarian law (hereafter IHL).
Impacts on Democracy
We can observe that AI systems integrated into military strategies are no longer used exclusively in that sector but are slowly being incorporated into government functions (Crawford, 2021). As discussed above with FRT, mass surveillance and privacy concerns, closely linked to the concept of autonomy, give the state a powerful tool to control its population and conduct unjustified surveillance activities. This could lead to abuses of freedom of expression, such as the repression of political dissidents (Espindola, 2023). In this respect, Espindola (2023) points out that technology has the capacity to undermine democracy because “citizens no longer feel at ease demonstrating or speaking out against authorities for fear of reprisals enabled by government surveillance through FRT.” (p. 183).
Another significant impact on democracy stems from the rise of algorithmic warfare and the ways in which information is weaponized in modern conflicts. Payne (2005) notes that information, international media and public opinion are now integral components of warfare. Under normal circumstances, he argues, tensions often arise between the agendas of belligerents and the role of media outlets. According to the same author, media organizations are far from neutral actors in such scenarios; their objectives and biases frequently diverge from those of the conflicting sides. This interplay creates a complex and often contentious dynamic that shapes how information is disseminated and consumed.
A particularly influential vector of information in contemporary conflicts is social media. These platforms play a pivotal role in shaping public opinion and can be leveraged by parties to a conflict to amplify their message, influence perceptions, and even conduct political warfare by enabling the manipulation of narratives on a large scale (Jensen, 2020). For instance, Jensen highlights Russian interference in the 2016 US elections as an example of how technology can influence democratic voting. Russia used political data to disseminate targeted information, such as articles aligned with users’ political preferences. This practice encouraged the spread of polarized stories and propagandistic narratives, further widening the scope of disinformation. O’Neil (2016) further points out that only a relatively small portion of the population needs to be influenced for such tactics to alter the outcome of an election. Technology is thus a powerful tool that raises critical concerns about the integrity of democratic systems, as the boundaries between genuine public opinion and manipulated narratives blur.
Finally, Chandler (2015) offers an interesting view of the distribution of data within societies and the governance mechanisms it enables. He argues that Big Data offers a relatively innovative approach to international interventions, including in peace and conflict, which have often been criticized for their generalized frameworks and slow responsiveness. Chandler states that data available to local communities represents a tool of knowledge capable of empowering governance “from below” (Chandler, 2015), a significant departure from externally imposed solutions that often lack contextual relevance. In this sense, data, as an important component of AI, enables citizens to participate more meaningfully in governance processes, which reinforces democracy.
Conclusion
In conclusion, we have seen that AI is deeply linked to the political implications of the society from which its data stems. In the context of war, we have explored questions about its environmental, social and democratic impacts. As far as the environment is concerned, we have seen that AI requires large quantities of natural resources, including rare minerals, water and electricity. With important warfare materiel being replaced by technology designed to be expendable and easily replaced, environmental damage will increase. The extraction, processing and use of natural resources to sustain AI-driven technologies reveal the drawbacks of introducing technology into military strategies.
From a social point of view, AI raises ethical dilemmas above all, particularly with FRT and LAWS, as it has the potential to lead to mass surveillance, violations of privacy and discrimination. The biases naturally present in the data that feeds AI have been shown to accentuate already existing societal inequalities. In democratic terms, data and AI offer the possibility of strengthening local communities through a bottom-up type of governance. However, we have seen that AI is often used in political warfare to manipulate narratives and spread disinformation.
These findings demonstrate the need to regulate the use of AI in warfare and to accompany that regulation with ethical frameworks. Personally, I believe that incorporating the use of LAWS and other AI-driven systems into IHL would bring enormous benefits and would already contribute significantly to bringing ethics into the discussion, as laws primarily reflect the ethical concerns of the actors enacting them, in a certain historical and political context, at a given time.
Joëlle Tirza Fuchs is an exchange student studying International Relations at PUC-Rio.
This essay has been written as final assignment for the course “Artificial Intelligence and Armed Conflicts”.
The use of AI in this document
ChatGPT has been used in this document for rewording, vocabulary and synthesizing purposes. I declare that all content has been conceived and written by me, without AI generating ideas or arguments. It draws exclusively on sources found in the class bibliography and in my own literature research. Below is an example of how I used AI.
My sentence: “This reflection raises questions about the social impact of having a machine deciding on the death of someone.”
ChatGPT sentence: “This raises critical questions about the societal implications of allowing machines to make life-and-death decisions.”
Bibliography
Abraham, David S. The Elements of Power: Gadgets, Guns, and the Struggle for a Sustainable Future in the Rare Metal Age. New Haven: Yale University Press, 2017.
Blanchard, Alexander. “The Road Less Travelled: Ethics in the International Regulatory Debate on Autonomous Weapon Systems.” Humanitarian Law & Policy, International Committee of the Red Cross. https://blogs.icrc.org/law-and-policy/2024/04/25/the-road-less-travelled-ethics-in-the-international-regulatory-debate-on-autonomousweapon-systems/
Brenes, Michael; Hartung, William D. “A.I. Won’t Transform War. It’ll Only Make Venture Capitalists Richer.” The New Republic, June 3, 2024. https://newrepublic.com/article/182145/ai-weapons-make-venture-capitalists-palantir-richer
Chandler, David. “A World without Causation: Big Data and the Coming of Age of Posthumanism.” Millennium: Journal of International Studies, vol. 43, no. 3, June 2015, pp. 833–51. https://doi.org/10.1177/0305829815576817.
Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.
Dwyer, Andrew C. The unknowable conflict: Tracing AI, recognition, and the death of the (human) loop. In: Artificial intelligence and international conflict in cyberspace, Fabio Cristiano, Dennis Broeders, François Delerue, Frédérick Douzet, and Aude Géry (eds). Oxon: Routledge, 2023.
Espindola, Juan. “Facial Recognition in War Contexts: Mass Surveillance and Mass Atrocity.” Ethics & International Affairs 37.2 (2023): 177–192
Grand-Clément, Sarah, “Artificial Intelligence Beyond Weapons: Application and Impact of AI in the Military Domain”, UNIDIR, Geneva, 2023.
Jensen, Benjamin M., et al. “Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence.” International Studies Review, vol. 22, no. 3, Sept. 2020, pp. 526–50. https://doi.org/10.1093/isr/viz025.
Jeppe T. Jacobsen and Tobias Liebetrau. Artificial intelligence and military superiority: How the ‘cyber-AI offensive-defensive arms race’ affects the US vision of the fully integrated battlefield. In: Artificial intelligence and international
conflict in cyberspace, Fabio Cristiano, Dennis Broeders, François Delerue, Frédérick Douzet, and Aude Géry (eds). Oxon: Routledge, 2023.
Musiani, Francesca. “Governance by Algorithms.” Internet Policy Review, 2013. DataCite, https://doi.org/10.14763/2013.3.188.
OCHA Center for Humanitarian Data. Briefing Note on Artificial Intelligence and the Humanitarian Sector. United Nations Office for the Coordination of Humanitarian Affairs, Apr. 2024, p. 8.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. First edition, Crown, 2016.
Payne, Kenneth. “The Media as an Instrument of War.” The US Army War College Quarterly: Parameters 35, no. 1 (2005): 14. https://doi.org/10.55540/0031-1723.2243.
Robins-Early, Nick. “AI’s ‘Oppenheimer Moment’: Autonomous Weapons Enter the Battlefield.” The Guardian, 14 July 2024. https://www.theguardian.com/technology/article/2024/jul/14/ais-oppenheimer-moment-autonomous-weapons-enter-thebattlefield?CMP=share_btn_url
Verma, Pranshu, Shelly Tan, Nitasha Tiku, Caroline O’Donovan, Laura Wagner, Chris Velazco, Eva Dou, Ellen Nakashima, Joseph Menn, and Cate Cadell. “A Bottle of Water per Email: The Hidden Environmental Costs of Using AI Chatbots.” Washington Post, 18 September 2024. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/.
Wilcox, Lauren. “Embodying Algorithmic War: Gender, Race, and the Posthuman in Drone Warfare.” Security Dialogue, vol. 48, no. 1, Feb. 2017, pp. 11–28. https://doi.org/10.1177/0967010616657947.