What's the Deal with Killer Robots?
Updated: Jun 28
Despite growing concern among States and further developments in AI weapon technology, there remains no recognised framework regulating emergent autonomous weapons to ensure compliance with pre-existing norms of public international law (‘PIL’).[i] The international community’s lack of an accepted definition of what constitutes an autonomous weapon is indicative of the infancy of such a framework.[ii]
The International Committee of the Red Cross (‘ICRC’) has defined autonomous weapons as weapons that are ‘independently capable of selecting and attacking targets, featuring autonomy in acquiring information and tracking, alongside target identification’.[iii] This article aims to consolidate the current legal framework applicable to fully autonomous weapon systems (‘FAWS’), highlight the international movement toward regulation, and call for the expedited creation of an internationally recognised framework to ensure States are held accountable for their development and use of FAWS.
The mess which currently regulates FAWS
Given the generally permissive nature of PIL,[iv] the development and use of FAWS is not inherently contrary to PIL.[v] The use of weapons by States is not, however, an unlimited freedom:[vi] compliance with established frameworks such as international human rights law (‘IHRL’) and international humanitarian law (‘IHL’) remains a consideration in any implementation of FAWS by States.
FAWS must adhere to the cardinal principles of IHL, including distinction,[vii] proportionality,[viii] avoidance of unnecessary suffering,[ix] the principles of weapons testing,[x] and humanity and the dictates of public conscience.[xi] The ICRC has acknowledged that, given their potentially greater capacity to comply with the law, FAWS may serve to minimise humanitarian consequences.[xii] As machines do not act out of a need for self-preservation, they are able to delay their use of force far longer than humans, providing advantages in compliance.[xiii]
FAWS pose greater compliance difficulties in a domestic law enforcement context, where States are bound by the stricter requirements of IHRL.[xiv] IHRL requires that States not arbitrarily deprive individuals of rights, such as the right to life.[xv] Whether a deprivation is arbitrary generally turns on proportionality, appropriateness, and necessity.[xvi] Even assuming these requirements are met, FAWS call into question whether decisions on the use of force made entirely by artificial intelligence (‘AI’) are inherently wrong, and therefore arbitrary.[xvii]
Some scholars and organisations suggest the law should not prohibit the use of FAWS of superior efficacy.[xviii] These scholars opine that the mere lack of human control at the operational stage of FAWS should not be seen to violate IHL.[xix] Further, they argue, control requirements can be fulfilled by ensuring reliable and predictable compliance through testing.[xx]
Others suggest human qualities such as compassion and mercy cannot be programmed,[xxi] preventing FAWS from comprehending the nuances of human behaviour required to comply with PIL.[xxii] Further, FAWS are incapable of understanding the value of human life and the significance of its loss.[xxiii] These incapacities, alongside the potential for bias and inaccuracy,[xxiv] should prohibit the delegation of the use of force to AI.[xxv] In relation to accountability, liability for breaches of PIL generally falls on those who issue an unlawful command.[xxvi] States party to the Convention on Certain Conventional Weapons have urged that responsibility be retained in the use of autonomous weapons, as such weapons can breach PIL long after their deployment.[xxvii] Given that AI systems constantly change their functions, responsibility is especially important.[xxviii]
Similar to nuclear weapons, the use of AI by FAWS requires greater regulation than conventional arms, as their destructive power extends beyond space and time.[xxix] However, the International Court of Justice has so far made no commentary or ruling on autonomous weapons. The most reliable authority in this respect is the Nuclear Weapons advisory opinion, which briefly mentions considerations for the development of new weapons, means and methods of warfare.
Consensus exists only in that all weapons should have some form of human control and measures for accountability; however, there is no agreement between States as to what level of control or which measures should be required.[xxx]
Progress toward regulation
States including Israel, Russia, China, South Korea and the United States are currently developing and using systems capable of autonomously deploying lethal force.[xxxi] Given this movement toward automation of warfare through unmanned alternatives, proper regulation is vital to ensure PIL is adhered to.[xxxii]
Despite calls for a ban on FAWS by the UN, Human Rights Watch and 28 States,[xxxiii] the refusal of a number of major States to recognise such a measure, or even to negotiate a treaty, illustrates its unrealistic nature.[xxxiv] The Organisation for Economic Co-operation and Development’s Principles on Artificial Intelligence, adopted by 36 States, outline the need to ensure the capacity for human determination and a stable policy environment promoting human rights and accountability in the use of AI.[xxxv] These requirements are consistent with the recommendations of the European Commission’s High-Level Expert Group on Artificial Intelligence,[xxxvi] Google, Microsoft, and IBM.[xxxvii] France and Germany have also encouraged the adoption of a political declaration urging that human control over autonomous weapons be maintained at all times.[xxxviii]
The broad lack of consensus between States on the application of PIL to FAWS necessitates the creation of an internationally consistent framework.
Compliance by States cannot be derived from the consolidation of existing frameworks, as invocations of universal principles are insufficient to regulate State action.[xxxix] The international community must call for the development of a recognised framework to hold States accountable for their development and use of FAWS.[xl] For such a framework to be effective, it must recognise and maintain a balance between inconsistent, antagonistic moralities and overlapping consensus on the disapproval of certain practices.[xli]
That being said, the creation of such a framework risks weakening the existing laws applicable to targeting practices.[xlii] Existing principles need to be supplemented by a new framework.[xliii] Said framework must define the degree of human control,[xliv] testing, and review that the international community considers necessary for the use of these new methods of warfare.[xlv] The pivotal question is whether, under any circumstance, we can allow robots to take human life.
Written by Dan Canta
Chief Publications Officer
[i] Neil Davison, ‘A legal perspective: Autonomous weapon system under international humanitarian law’ (2018) International Committee of the Red Cross 9 (‘Davison’); Alexandra Brzozowski, No progress in UN talks on regulating lethal autonomous weapons (News Article, 22 November 2019) <https://www.euractiv.com/section/global-europe/news/no-progress-in-un-talks-on-regulating-lethal-autonomous-weapons/>. [ii] International Committee of the Red Cross (ICRC), Report on ICRC Expert Meeting on ‘Autonomous weapon systems: technical, military, legal and humanitarian aspects’ (Summary Report, 9 May 2014) 1 (‘ICRC Report 2014’). [iii] Ibid 1. [iv] SS ‘Lotus’ (France v Turkey) (Judgment) PCIJ (Ser A) No 10, 18-9; Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion) ICJ Rep 226, 238 (‘Nuclear Weapons’); Responsibility of States for Internationally Wrongful Acts, GA Res 56/83, UN Doc A/RES/56/83 (28 January 2002, adopted 12 December 2001) art 2. [v] Kenneth Anderson, Daniel Reisner and Matthew Waxman, ‘Adapting the Law of Armed Conflict to Autonomous Weapon Systems’ (2014) 90 International Law Studies 386, 410; Marco Sassóli, ‘Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified’ (2014) 90 International Law Studies 308, 310 (‘AWS Issues’). [vi] Nuclear Weapons (n iv).
[vii] Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1978) arts 1, 41, 48, 50, 51, 52, 57, 58 (‘AP-I’); Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts, 8 June 1977, 1125 UNTS 609 (entered into force 7 December 1978) CA 3(1), art 13 (‘AP‑II’); Nuclear Weapons (n iv) 257; ICRC, Customary International Humanitarian Law Volume 1: Rules (Cambridge University Press, 2005) 8, 14 (‘Customary IHL’); Michael Schmitt, Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (Cambridge University Press, 2nd ed, 2017) r 115 (‘TM 2.0’). [viii] Ibid arts 51(5)(b), 57; TM 2.0 (n vii) r 113; Customary IHL (n vii) 46-7; Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory (Advisory Opinion) ICJ Rep 136 (‘Wall AO’). [ix] Ibid art 35(2); TM 2.0 (n vii) r 104; Customary IHL (n vii) 237-50; Nuclear Weapons (n iv); Customary IHL (n vii) 237-40. [x] TM 2.0 (n vii) r 110; AP-I (n vii) art 36; ICRC, Report on ICRC Expert Meeting on Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons (16 March 2016) 9. [xi] AP-II (n vii) preamble. [xii] ICRC, Artificial intelligence and machine learning in armed conflict: A human-centred approach (Report, June 2019) 2 (‘ICRC Report 2019’); ICRC Report 2014 (n ii) 1. [xiii] AWS Issues (n v) 310; ICRC Report 2014 (n ii) 14. [xiv] Christof Heyns, Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report, UN Doc A/HRC/23/47 (9 April 2013) (‘Heyns’). [xv] Ibid; Wall AO (n viii); Human Rights Committee, General Comment No.
36: on article 6 of the International Covenant on Civil and Political Rights, on the right to life, 124th sess, UN Doc CCPR/C/GC/36 (30 October 2018) [63-4]; James Crawford, Brownlie’s Principles of Public International Law (Oxford University Press, 8th ed, 2012) 642; see International Covenant on Civil and Political Rights, opened for signature 16 December 1966, 999 UNTS 171 (entered into force 23 March 1976); see also International Covenant on Economic, Social and Cultural Rights, opened for signature 16 December 1966, 993 UNTS 3 (entered into force 3 January 1976). [xvi] Elettronica Sicula (United States v Italy) (Judgment) ICJ Rep 15, 76; Asylum (Colombia v Peru) (Judgment) ICJ Rep 266, 284; ICRC, Report on ICRC Expert Meeting on the Use of Force in AC: Interplay between the conduct of hostilities and law enforcement paradigms (2013) 8. [xvii] Heyns (n xiv). [xviii] Ibid; Michael Schmitt and Jeffrey Thurnher, ‘“Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict’ (2013) 4 Harvard National Security Journal 231, 247 (‘Out of the Loop’); AWS Issues (n v) 320; William Boothby, Weapons and the Law of Armed Conflict (Oxford University Press, 2nd ed, 2016) 233 (‘Boothby’). [xix] United Nations (UN), Draft Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, UN Doc CCW/GGE.1/2019/CRP.1/Rev.2 (21 August 2019) 5 (‘UN Draft Report’); Davison (n i) 14; Out of the Loop (n xviii) 277. [xx] Ibid 6; AWS Issues (n v) 314-7, 322; Davison (n i) 7; Boothby (n xviii) 345-7; see also ICRC, Report on ICRC Fifth Expert Meeting on the Notion of Direct Participation in Hostilities (Summary Report, 9 May 2008) 37, 42.
[xxi] Human Rights Watch (HRW), Losing Humanity: The Case Against Killer Robots (Report, 19 November 2012) 4 (‘Losing Humanity’); Jeffrey Thurnher, ‘No One at the Controls: Legal Implications of Fully Autonomous Targeting’ (2012) 67 Joint Forces Quarterly 77, 81-3. [xxii] AWS Issues (n v) 312, 327; HRW, Mind the Gap: The Lack of Accountability for Killer Robots (Report, 9 April 2015) 8 (‘Mind the Gap’); Losing Humanity (n xxi) 4. [xxiii] Mind the Gap (n xxii) 9. [xxiv] ICRC Report 2019 (n xii) 5. [xxv] UN Draft Report (n xix) 4-5, 13. [xxvi] AP-I (n vii) art 87(1). [xxvii] UN, Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (Report, 23 October 2018) 4-6. [xxviii] ICRC Report 2019 (n xii) 1. [xxix] Nuclear Weapons (n iv). [xxx] Merel Ekelhof, Autonomous weapons: Operationalizing meaningful human control (Blog Post, 15 August 2018) <https://blogs.icrc.org/law-and-policy/2018/08/15/autonomous-weapons-operationalizing-meaningful-human-control/>. [xxxi] Heyns (n xiv); Khari Johnson, Andrew Yang warns against ‘slaughterbots’ and urges global ban on autonomous weaponry (News Article, 31 January 2020) <https://venturebeat.com/2020/01/31/andrew-yang-warns-against-slaughterbots-and-urges-global-ban-on-autonomous-weaponry/>; Dave Makichuk, Is China exporting killer robots to Mideast? (News Article, 8 November 2019) <https://asiatimes.com/2019/11/is-china-exporting-killer-robots-to-mideast/>; Michael T Klare, Autonomous Weapons and the Laws of War (Blog Post, March 2019) <https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war>. [xxxii] Dave Makichuk, Is China exporting killer robots to Mideast? (News Article, 8 November 2019) <https://asiatimes.com/2019/11/is-china-exporting-killer-robots-to-mideast/>; ICRC Report 2019 (n xii) 2.
[xxxiii] UN News, Autonomous weapons that kill must be banned, insists UN Chief (News Article, 25 March 2019) <https://news.un.org/en/story/2019/03/1035381>; Reaching Critical Will, Critical Issues: Fully Autonomous Weapons (Fact Sheet, 2020) <http://www.reachingcriticalwill.org/resources/fact-sheets/critical-issues/7972-fully-autonomous-weapons>; HRW, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots (Report, August 2018) 45. [xxxiv] Damien Gayle, UK, US and Russia among those opposing killer robot ban (News Article, 30 March 2019) <https://www.theguardian.com/science/2019/mar/29/uk-us-russia-opposing-killer-robot-ban-un-ai>; Hayley Evans and Natalie Salmanowitz, Lethal Autonomous Weapon Systems: Recent Developments (Blog Post, 7 March 2019) <https://www.lawfareblog.com/lethal-autonomous-weapons-systems-recent-developments>. [xxxv] Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence (22 May 2019). [xxxvi] European Commission, High-Level Expert Group on Artificial Intelligence Report on Ethics Guidelines for Trustworthy AI (Report, 8 April 2019) 15–16. [xxxvii] Sundar Pichai, AI at Google: Our principles (Press Release, 7 June 2018) <https://www.blog.google/technology/ai/ai-principles>; Microsoft Corporation, Responsible AI (Webpage, 2019) <https://www.microsoft.com/en-us/ai/our-approach-to-ai>; IBM, IBM’s Principles for Trust and Transparency (Blog Post, 30 May 2018) <https://www.ibm.com/blogs/policy/trust-principles>. [xxxviii] Michael T Klare, Autonomous Weapons and the Laws of War (Blog Post, March 2019) <https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war>. [xxxix] Brad R Roth, Sovereign Equality and Moral Disagreement (Oxford University Press, 3 November 2011) 271 (‘Roth’); ICRC Report 2014 (n ii) 14.
[xl] ICRC, Autonomous weapons: States must agree on what human control means in practice (20 November 2018) <https://www.icrc.org/en/document/autonomous-weapons-states-must-agree-what-human-control-means-practice>. [xli] Roth (n xxxix) 229. [xlii] Merel Ekelhof, Autonomous weapons: Operationalizing meaningful human control (Blog Post, 15 August 2018) <https://blogs.icrc.org/law-and-policy/2018/08/15/autonomous-weapons-operationalizing-meaningful-human-control/>. [xliii] ICRC Report 2019 (n xii) 12. [xliv] ICRC Report 2014 (n ii) 15. [xlv] Ibid 16.