The Hague International Model United Nations, Singapore 2019| XV Annual Session
Research Report | Page 1 of 13
Forum: General Assembly 1
Issue: The question of Artificial Intelligence [AI] in the context of
disarmament and international security
Student Officer: Chris Lim
Position: Deputy Chair
Introduction
As humanity progresses into the early 21st century, the advancement of artificial intelligence (AI)
has heralded a new era worldwide. Ranging from self-driving vehicles to systems that can predict
natural disasters, artificial intelligence is versatile and ubiquitous within global society, often playing an integral role in transforming industries and catalyzing the process of globalization. As a result, AI
technology has been able to expedite economic growth in many countries, and it has brought
revolutionary changes to the lives of civilians all over the world.
AI technology has repeatedly demonstrated how it can complement aspects of society, but
integrating artificial intelligence into security and disarmament has introduced a myriad of challenges
along with its benefits. This is mainly due to AI's sheer power, complexity, and unpredictability - three
features of AI that make its implementation notably risky. On a national level, AI can be integrated into a
variety of technologies to improve security; surveillance cameras can detect weapons or 'unusual
behaviour', computers can search for cybersecurity vulnerabilities, and government media outlets can
identify and remove disinformation.
With regards to the potential for implementation on an international level, AI is often referred to as the "third revolution in warfare," following the discovery of gunpowder and the invention of nuclear weapons. Given that AI is seen as a revolutionary tool for security and disarmament, there is a reinforced need for international regulations to ensure AI is not exploited in conflict. Unlike other means of force such as nuclear or biological weapons, the materials needed to develop AI are relatively cheap and accessible, making AI-based weapons significantly easier to produce. Due to this accessibility, the private sector is reportedly far ahead of governments in AI research and development. Hence, many argue that further integration of AI in the public sector is necessary to prevent a loss of control and to diminish the influence of private firms. Several nations, including the United States of America, France, the United Kingdom, China, Israel, the Republic of Korea, and Russia, have publicly stated plans to improve and deploy AI in their militaries, initiating an international artificial intelligence arms race.
Figure 1: The Modular Advanced Armed Robotic System (MAARS), developed by Qinetiq PLC. May 2017 (Qinetiq NA)
Definition of Key Terms
Artificial Intelligence (AI)
Artificial intelligence is a subfield of computer science that develops machines able to replicate
human cognition and intellect. A factor that clearly distinguishes AI from other technologies is the ability
to perform functions that are exclusive to intelligent beings, such as making decisions, perceiving visual
stimuli, and interpreting speech. In a military context, AI can be used for a multitude of tasks, including
surveillance, protection, and engaging targets.
Lethal autonomous weapons (LAWs)
A type of military robot that can independently search for and eliminate targets based on programmed constraints and pre-defined criteria. LAWs do not require human intervention or input to function. Autonomous offensive systems are LAWs used for offensive purposes; they generally have more autonomy and therefore attract more concern when it comes to legal and ethical issues. Autonomous defensive systems are LAWs used for defensive purposes.
Unmanned Aerial Vehicles (UAVs)
An aircraft piloted via remote control or onboard computers, with no human pilot or passengers on board. Although UAVs can be fully autonomous, they are often controlled by a human operator at ground level. Because of their ability to travel with great speed and efficiency, this technology has far-reaching potential in warfare, and it thus draws controversy over how it should be utilized. UAVs are more commonly known as "drones".
Cyber-warfare
The utilization of a range of computer technologies to disrupt the activities of a state or organization, typically by breaching and hacking authorization systems. This is often done as a deliberate attack on an institution or to gain strategic military advantages. With the rise of artificial intelligence, cyber-attacks are an emerging threat to private organizations and government agencies alike.
Background Information
The fundamental concept of artificial intelligence can be traced back to antiquity, with centuries-old myths of artificial beings possessing intellectual qualities unique to humans. However, the concept was not realized in practice until the invention of the first programmable computer in 1936. Although early computers were capable only of mechanical mathematical reasoning, they planted the seeds of modern artificial intelligence by triggering a rapid increase in research and development. These computers attracted a generation of scientists inspired by the idea of building a machine that could mimic human intelligence, which ultimately generated interest in the concept of machine learning.
In 1956, an assistant professor of mathematics at Dartmouth College named John McCarthy hosted an event known as the 'Dartmouth workshop', a convention dedicated to the concept of 'thinking machines' that is widely considered the foundation of modern-day artificial intelligence. Participants discussed an array of topics in computer technology, such as neural networks, abstraction and natural language processing, consequently leading to the establishment of artificial intelligence as a subfield of computer science. Many refer to this event as 'the birth of artificial intelligence', and it initiated the inexorable growth of AI.
The implementation of AI in international security
Nations soon realized the true potential of integrating artificial intelligence with military vehicles
and equipment. Using AI technology can bring many strategic advantages - for example, it can help
make quicker and more informed decisions, increase the speed and scale of actions, or lower casualties
and losses. This can ultimately reduce the repercussions of conflicts and promote international security among nations. However, the absence of human input and control, as well as AI's unprecedented potential, has led to concern over legal and ethical exploitation. In theory, AI could be used to intentionally or unintentionally violate the ethics of conflict - for instance, lethal autonomous weapons (LAWs) could be programmed and deployed to selectively target an ethnic group.
One of the earliest cases of artificial intelligence being used for combat purposes dates to 1956, when the USS Mississippi (BB-41) test-fired a computer-guided missile that could
automatically correct for variations in its altitude and speed. Between the late 1950s and the mid-1960s, concern over the Soviet Union's capabilities led to increased funding for institutions like the Massachusetts Institute of Technology (MIT) and for projects to gradually modernize the military. Eventually, these technologies were utilized in actual warfare, sometimes with success. During the Vietnam War in 1972, the US Air Force used a laser-guided weapon to destroy the Thanh Hoa Bridge, a strategic spot for the Vietcong.
Unfortunately, the danger of using artificial intelligence soon began to show. On the 3rd of July, 1988, the USS Vincennes, equipped with the Aegis air-defence system and deployed in the Persian Gulf, destroyed an Iranian commercial airliner, killing all 290 people aboard. The incident occurred because the system, operating in semi-automatic mode, falsely identified the commercial airliner as a 'hostile aircraft'. As expected, the event became an immense controversy, with the Iranian government deeming the incident a deliberate attack on Iran and its people. In July 1988, Iranian Foreign Minister Ali Akbar Velayati called for the condemnation of the United States, arguing that the attack "could not have been a mistake" and was a "criminal act", a "massacre," and an "atrocity." Aside from straining the US-Iran relationship for years to come, the tragic incident also served as a warning against using AI for military purposes. Developers knew that militarizing AI placed a huge responsibility on them; AI could jeopardize the safety of many people if not reliably programmed.
The artificial intelligence arms race
Analysts argue that the global arms race for militarized artificial intelligence began in the mid-2010s, dominated by three countries in particular - China, Russia and the United States of America. Other MEDCs and LEDCs have also experimented with implementing AI, but these three countries' actions will significantly influence international norms for LAWs. At the 2018 UN CCW meeting of the Group of Governmental Experts (GGE) on LAWs, Russia and the United States were among five countries that opposed a ban on developing these weapons, suggesting that the arms race for militarized AI technology will continue. At the same meeting, China stated that it supports a ban on LAWs and expressed concern over the artificial intelligence arms race.
Although a country's progress on AI development is mostly classified information, the international community can learn of it through conventions and events in which government representatives meet and discuss. For instance, in April 2019, Russian Security Council Secretary Nikolai Patrushev said that "a comprehensive regulatory framework" for "the specified [new] technologies" is necessary, which could be considered a contradiction of Russia's stance at the 2018 UN CCW meeting. The US, on the other hand, argues for further development of AI technology. For the US, issuing a ban on these weapons is premature because the technology is
relatively new in the context of disarmament and security. In addition, maintaining control through superior AI technology can prevent less developed military forces from intervening, which could otherwise increase violence and casualties in a conflict. If appropriately designed and regulated, AI technology can significantly reduce the consequences of warfare and ultimately strengthen international security and disarmament.
Modern-day legal and ethical issues
The growth of AI continued to accelerate, and other countries such as China, France and South Korea began to modernize their militaries as well. In September 2006, South Korea announced plans to install Samsung Techwin SGR-A1 sentry robots along the Demilitarized Zone with North Korea. The robots, armed with machine guns, are designed to track and engage targets autonomously, but human approval is required before firing. This is one of many examples of the increasing deployment of autonomous defensive system LAWs.
A new subcategory of LAWs, classified as autonomous offensive systems, has become increasingly common as well. In November 2002, the US deployed the first Unmanned Combat Aerial Vehicle (UCAV) in a strike 100 miles east of Sanaa, Yemen. The drone strike demonstrated the technology's effectiveness, killing six militants (including one American) amidst an ongoing conflict. Many refer to this occasion as the birth of 'killer drones', which constitute a large portion of autonomous offensive systems today. Progress on developing these LAWs has continued in several nations: Russia has begun to develop AI-guided missiles that can decide to switch targets mid-flight, Israel has developed an autonomous anti-radar weapon, and South Korea has developed an autonomous machine gun that can track and destroy targets at a range of four kilometers.
Figure 2: SGR-A1 Sentry Guard Robot, developed by Samsung and Korea University. September 2006 (Global Security)
Regarding legal and ethical concerns over these types of LAWs, the main issue has been whether lethal autonomous weapons can reliably discriminate between combatants and non-combatants in warfare. Several nations and third parties have expressed this concern during conferences and meetings on artificial intelligence for international security. Member states, including the US and China, have emphasized that modern-day military AI must ultimately retain some degree of human control, as the decision to kill an individual in warfare should be made by humans, not machines. A coalition of NGOs has created campaigns such as 'Stop Killer Robots', and AI researchers from the private sector have expressed their concern over lethal autonomous weapons in an open letter signed by over 100 people affiliated with the AI industry. Signatories include many leading figures in the field, such as Elon Musk, who has warned that "artificial intelligence is our biggest existential threat".
Major Countries and Organizations Involved
China
The State Council of China released the "New Generation Artificial Intelligence Development Plan" on the 20th of July, 2017, a policy that outlines the government's strategy to bolster China's AI industry and to make the country the leading global center for AI development and innovation by 2030. A coalition of educational institutes and the country's three leading tech companies - Baidu, Alibaba and Tencent - has received government support to further promote AI research. Despite China's hefty investments in modernizing its military with AI and growing its multi-billion-dollar AI industry, in April 2018 China explicitly endorsed the call to ban fully autonomous lethal weapons, joining 25 other countries calling for a LAWs ban at the UN Convention on Certain Conventional Weapons (CCW) meeting in Geneva, Switzerland. The Chinese government has expressed its concerns over the AI arms race, and these sentiments have been echoed by the private sector, with the chairman of Alibaba noting that the rise of artificial intelligence could possibly lead to a third world war.
United States of America
The United States of America, home to many of the world's largest tech companies such as Amazon and Google, has the largest AI industry worldwide. The US Department of Defense (DOD) publicly released its artificial intelligence strategy on the 12th of February, 2019. The strategy states that the changing global landscape requires the US to further develop AI for security purposes, but that "the technology will be deployed in respect to the nation's values". The DOD's chief information officer, Dana Deasy, has noted that AI is necessary to "increase the prosperity" and "national security" of the nation. Despite several instances where AI has proven unreliable, the DOD's artificial intelligence strategy does not address the 1988 Iran Air Flight incident or include any statement regarding reliability. In
addition, the United States of America was one of five countries that explicitly rejected the negotiation of a new international law on lethal autonomous weapons at the 2018 UN CCW meeting. The stated justification was the "potential humanitarian and military benefits of the technology behind LAWs", which would make a ban "premature". The US also emphasized that rigorous testing and development can ensure these weapons are used in accordance with international humanitarian law (IHL), and that they can ultimately strengthen the ethics of conflict rather than violate them.
Russia
Russia continues to develop autonomous weapons with embedded AI technology, such as unmanned undersea vehicles (UUVs) and hypersonic missiles. From 2017 onwards, Russia's stance on militarizing AI has been clear: control over its development must be maintained, and its usage and regulation should be discussed, but there should be no international limits on developing and deploying such weapons. Along with the United States of America, Israel, France and the United Kingdom, Russia explicitly opposed the creation of a new international law on lethal autonomous weapons at the 2018 UN CCW meeting. Despite this, in April 2019, Russian Security Council Secretary Nikolai Patrushev stated that it is necessary to "develop a comprehensive regulatory framework that would prevent the use of the specified [new] technologies for undermining national and international security". Although Russia is still developing a wide range of weapons with new AI technology, establishing international boundaries also appears to be in the nation's interest.
United Kingdom
Although the UK government claims to "not possess fully autonomous weapons" and to have "no intention of developing them," the United Kingdom has repeatedly declined proposals to set boundaries on their research and use since 2015. According to a report, the Ministry of Defence (MoD) has funded several artificial intelligence programmes developing autonomous weapons, especially military UAVs. The MoD has suggested that by 2030, unmanned aircraft will be able to independently locate and engage targets with "appropriate proportionality and discrimination" between civilians and combatants. An MoD spokesperson has also stated that lethal autonomous weapons will be "under human control as an absolute guarantee of oversight, authority and accountability".
Timeline of Events
Date Description of event
1936 The first programmable computer is invented - arguably the starting point of artificial intelligence.
February 1956 The USS Mississippi (BB-41) successfully tests a computer-guided missile that can correct for variations in altitude and speed.
July 1988 The USS Vincennes, stationed in the Persian Gulf, destroys an Iranian commercial airliner after falsely identifying it as a threat; all 290 people aboard are killed.
January 1994 The US government begins funding the development of a UAV that can transmit video footage in real time via satellite link. By 2001, it has been upgraded to carry missiles - killer drones are invented.
November 2002 The first Unmanned Combat Aerial Vehicle (UCAV) is deployed in Yemen during the US war on terrorism.
September 2006 The Republic of Korea plans to install sentry robots in the demilitarized zone (DMZ) bordering the Democratic People's Republic of Korea (DPRK). The sentries can track targets, but need human authorization to fire.
June 2017 The first annual 'AI for Good Global Summit' is hosted in Geneva, Switzerland by the International Telecommunication Union (ITU) with representatives from several nations.
November 2017 The UN Convention on Certain Conventional Weapons (UN CCW) holds its first meeting with the Group of Governmental Experts (GGE) to discuss questions related to the emergence of LAWs.
April 2018 The UN CCW's GGE meets for the second time to build on the points discussed in 2017, with a focus on autonomous weapons. 26 countries endorse a ban on LAWs, including China, Austria and Colombia. Five countries (France, the UK, the US, Russia, Israel) explicitly reject a ban on LAWs.
Relevant UN Treaties and Events
● International Humanitarian Law (IHL), 2005
● Study on Armed Unmanned Aerial Vehicles, 12 October 2015
● AI for Good Global Summit hosted in Geneva, 7 June 2017
● Role of science and technology in the context of international security and disarmament,
4 December 2017 (A/RES/72/28)
● Impact of rapid technological change on the achievement of the Sustainable Development
Goals, 22 December 2017 (A/RES/72/242)
Previous Attempts to solve the Issue
In a statement to the Group of Governmental Experts (GGE) on LAWs, UN Secretary-General António Guterres said that "machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law". International conferences and meetings among member states have been paramount for expressing different opinions and suggesting ways to establish international standards and regulations. As of 2019, although some resolutions address new technologies in international security and disarmament, there is not yet a comprehensive treaty or resolution that directly establishes international regulations for lethal autonomous weapons and artificial intelligence.
One of the many conferences held internationally is the 'AI for Good Global Summit', held annually in Geneva, Switzerland. Since 2017, the summit has been co-hosted by the International Telecommunication Union (ITU), various UN sister agencies and other third parties, with the goal of sparking a dialogue between nations about AI. Representatives of AI in business, government and civil society from different countries gather to pitch projects, discuss ways to advance the Sustainable Development Goals (SDGs) and propose potential solutions to ongoing issues.
Figure 3: The third annual AI for Good Global Summit hosted in Geneva, Switzerland. May 2019. (UN News)
In November 2017, the UN Convention on Certain Conventional Weapons (CCW) held its first GGE meeting in Geneva, with AI researchers from countries such as Canada, Belgium and Australia participating. Due to growing public pressure, the group decided to meet again from April 9 to 13, 2018, with the aim of concluding the discussions from the prior meeting. This time, a total of 82 countries participated, comparing different definitions of LAWs, considering the human aspect of control and projecting the future rate of AI's growth. Despite some fruitful discussions,
many advocates have criticized the UN CCW meetings as relatively slow compared to the rapid improvement of artificial intelligence. Furthermore, national sovereignty has kept these meetings legally non-binding, only allowing member states to propose recommendations that other nations may choose to adopt.
Possible Solutions
It is important to note that finding the optimal solution for counteracting the misuse of artificial intelligence is a complex problem; as mentioned before, the growth of AI is rapid and unpredictable, so it is crucial to anticipate future developments, consider all perspectives, and avoid being too dismissive of ideas.
One way to approach the issue would be to strive for a restriction on lethal autonomous weapons, although if the current status quo is maintained, a severe restriction on LAWs would likely be unfeasible. Before negotiating restrictions, reaching an internationally agreed-upon definition of a LAW would be a good starting point, as there is not yet a universally accepted set of criteria for them. Once nations agree on the definition, member states would need to decide how much autonomy, and how much human control, is tolerable in conflict. This decision should mainly be influenced by two factors: adherence to international humanitarian law (IHL) and the reliability that AI technology demonstrates in the future. Careful consideration of both ethics and the autonomy of LAWs can help member states reach a compromise and minimize the consequences of conflicts.
Another way to approach the issue would be to minimize limitations. This approach can be justified with the claim that since AI technology is relatively new, imposing limitations at such an early stage would be premature and counterproductive. Incentivizing further research without restrictions could clarify the obscure aspects of AI and ultimately help researchers identify vulnerabilities and find countermeasures to prevent mistakes. If this approach succeeds, the repercussions of international conflict could be immensely reduced, and governments could diminish the influence of private firms as well. Restrictions could perhaps be endorsed after intensive research in the future, but once lethal autonomous weapons become well-established military technologies, it would be difficult to reach a consensus on limitations. By that time, AI technology would likely be so embedded in international security and disarmament that it would have become the new standard worldwide.
Bibliography
Abadicio, Millicent. “Artificial Intelligence in the Chinese Military – Current Initiatives.” Emerj, 23 May
2019, emerj.com/ai-sector-overviews/artificial-intelligence-china-military/.
Abadicio, Millicent. “Artificial Intelligence in the US Army – Current Initiatives.” Emerj, 27 May 2019,
emerj.com/ai-sector-overviews/artificial-intelligence-in-the-us-army/.
“AI Policy - China.” Future of Life Institute, 2019, futureoflife.org/ai-policy-china/.
Araya, Daniel. “Artificial Intelligence And The End Of Government.” Forbes, Forbes Magazine, 15
Jan. 2019, www.forbes.com/sites/danielaraya/2019/01/04/artificial-intelligence-and-the-end-of-
government.
“'Artificial Intelligence Is Our Biggest Existential Threat' Warns Elon Musk.” Yeni Şafak, 24 Nov. 2017,
www.yenisafak.com/en/life/artificial-intelligence-is-our-biggest-existential-threat-warns-elon-musk-
2819114.
“Artificial Intelligence Summit Focuses on Fighting Hunger, Climate Crisis and Transition to 'Smart
Sustainable Cities' | UN News.” UN News, United Nations, 28 May 2019,
https://news.un.org/en/story/2019/05/1039311.
“Autonomous Weapons That Kill Must Be Banned, Insists UN Chief | UN News.” United Nations,
news.un.org/en/story/2019/03/1035381.
Bartlett, Matt. “The AI Arms Race In 2019 - Towards Data Science.” Medium, Towards Data Science,
28 Jan. 2019, towardsdatascience.com/the-ai-arms-race-in-2019-fdca07a086a7.
Bendett, Samuel. “Did Russia Just Concede a Need to Regulate Military AI?” Defense One, 25 Apr.
2019, www.defenseone.com/ideas/2019/04/russian-military-finally-calling-ethics-artificial-
intelligence/156553/.
“CBRN National Action Plans: Rising to the Challenges of International Security and the Emergence
of Artificial Intelligence .” United Nations Interregional Crime and Justice Research Institute, 7 Oct.
2015, www.unicri.it/news/article/CBRN_Artificial_Intelligence.
Clifford, Catherine. “Hundreds of A.I. Experts Echo Elon Musk, Stephen Hawking in Call for a Ban on
Killer Robots.” CNBC, 8 Nov. 2017, cnbc.com/2017/11/08/ai-experts-join-elon-musk-stephen-
hawking-call-for-killer-robot-ban.html.
Cronk, Terri Moon. “DOD Unveils Its Artificial Intelligence Strategy.” U.S. DEPARTMENT OF
DEFENSE, 12 Feb. 2019, dod.defense.gov/News/Article/Article/1755942/dod-unveils-its-artificial-
intelligence-strategy/.
Davey, Tucker. “Lethal Autonomous Weapons: An Update from the United Nations.” Future of Life
Institute, 5 June 2018, futureoflife.org/2018/04/30/lethal-autonomous-weapons-an-update-from-
the-united-nations/.
Davis, et al. “Armed and Dangerous? UAVs and U.S. Security.” RAND Corporation, 7 Apr. 2014,
www.rand.org/pubs/research_reports/RR449.html.
Doward, Jamie. “Britain Funds Research into Drones That Decide Who They Kill, Says Report.” The
Guardian, Guardian News and Media, 10 Nov. 2018,
www.theguardian.com/world/2018/nov/10/autonomous-drones-that-decide-who-they-kill-britain-
funds-research.
McCormick, Ty. “Lethal Autonomy: A Short History.” Foreign Policy, 24 Jan. 2014,
foreignpolicy.com/2014/01/24/lethal-autonomy-a-short-history/.
“Recaps of the UN CCW Meetings April 9 – 13.” Ban Lethal Autonomous Weapons, 23 Apr. 2018,
autonomousweapons.org/recaps-of-the-un-ccw-meetings-april-9-13/.
Saidel, Jamie. “Russia's Terrifying New 'Superweapon' Revealed.” News.com.au, 14 July 2019,
www.news.com.au/technology/innovation/military/russias-terrifying-new-superweapon-
revealed/news-story/b815329f979851288e52a1c4082feb1b.
Wallach, Wendell. “Toward a Ban on Lethal Autonomous Weapons: Surmounting the Obstacles.”
Communications of the ACM, 1 May 2017, cacm.acm.org/magazines/2017/5/216318-toward-a-
ban-on-lethal-autonomous-weapons/abstract.
Appendices
I. https://www.youtube.com/watch?v=XAgXwUwQoPA - a brief video that explains the potential
threats from the growth of artificial intelligence for conflict.
II. https://www.icrc.org/en/doc/assets/files/other/icrc_002_0467.pdf - the International Humanitarian
Law (IHL) and Other Rules relating to the Conduct of Hostilities.
III. https://unoda-web.s3-accelerate.amazonaws.com/wp-
content/uploads/assets/publications/more/drones-study/drones-study.pdf - the ‘Study on Armed
Unmanned Aerial Vehicles’.