Wrestling killer robots: Is ethical AI possible?

The Institute of Electrical and Electronics Engineers (IEEE), an organisation that calls itself the ‘world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity’, has embarked on a long and difficult voyage of ethical discovery. They have released their latest draft (a 263-page monster of a tome) on ethics and AI.

Parts of the document are so disturbing that I decided not to read it before bedtime: my dreams were filled with killer robots too clever to ever explain the reasons for their actions to humans, and with minute swarms of autonomous drones carrying chemical weapons and deploying them at the will of the machine.

The document, Ethically aligned design: A vision for prioritising human well-being with autonomous and intelligent systems (Version 2), is one of the most compelling and harrowing reads on AI today (IEEE prefer the term autonomous and intelligent systems [A/IS]).

EAD cover

I was so disturbed by this vision of a future, in which AI has become so clever that it can no longer be fully controlled or comprehended by the very humans who created it, that I felt compelled to make a submission to IEEE on the document. The submission is presented below.

…………………………………………………………………………………………………………………………………………

Submission 7 May 2018: Ethically aligned design: A vision for prioritising human well-being with autonomous and intelligent systems (A/IS), Version 2, IEEE.

  1. There is a lack of definitional clarity around the concepts of ethics and ethical decision-making; inadequate background on the philosophical and cultural traditions/models on which the idea of ‘ethics’ is based; and a false, Western-biased assumption about the knowledge ‘input’ required for A/IS ethical decision-making.

There are many philosophical and cultural ethical traditions (for example, in the West there are theories about virtue ethics, consequentialist ethics, applied ethics, etc.), and each will consider different aspects of an ethical conundrum (a different line of reasoning) to determine what is right or wrong and defend the determination. Sometimes different lines of reasoning will come to the same conclusion about right and wrong, but sometimes they will not.

Often the document reads as if society is conceived in mechanistic or functionalist ways. This ‘macro’ conception of society is based on an assumption that there is a set of shared values, norms and identities, each of which plays a role in making sure the ‘social body’ functions correctly. Functionalist conceptions of the social world are outdated and discredited, having been replaced in social and philosophical thought with more nuanced ‘conflict theories’ of how values and norms are continually contested. Thus, identifying norms and values for ethical decision-making is a fraught process.

The document switches between normative ethics (e.g. the Executive summary (p. 8), with its talk of ‘embedding values’ and ‘norms’ into A/IS) and an acknowledgment that the ethical traditions/logics of different cultures (this can refer to national cultures, trans-national cultural groups, or subcultures/minority groups within a society) may differ from those of the perceived mainstream or majority, e.g. in the section Systems across cultures (pp. 164-167). This latter issue is broader than the influence of affective computing, which has the potential to erase cultural difference (an act of immense actual and symbolic violence). This lack of clarity about what IEEE mean by ethics and ethical reasoning, and about the tradition those meanings derive from, creates confusion in the document and will lead to a lack of clarity in the design of A/IS. It appears that IEEE favour an applied ethics in the human rights tradition, evident in arguably the most universal model of ethics, the medical ethics principles framework, as indicated by some of the language used in the report and the ideas it alludes to (e.g. ‘values’, ‘norms’, ‘freedom’, ‘justice’, ‘beneficence’, ‘autonomy’, ‘respect’, etc.). If this is the case, then the historical roots of this ethics tradition and its principles need to be clearly set out in the document (see http://web.stanford.edu/class/siw198q/websites/reprotech/New%20Ways%20of%20Making%20Babies/EthicVoc.htm for a brief summary of the main principles).

The document correctly acknowledges that cultural and minority groups may adhere to their own ethical traditions. For example, many First Nations people have specific customs that rely on consultative/collectivist decision-making practices and traditional law that guides the sharing of knowledge. There is an implicit (Western) assumption in the document that all knowledge (including beliefs and values) can be discovered, known and publicly shared by/with all people, and that this knowledge can be initially or continually ‘input’ into A/IS for ethical machine learning and decision-making. For many cultures, this assumption does not hold, as this example concerning Indigenous culture, from the Australian Human Rights Commission, demonstrates:

‘The rights to Indigenous traditional knowledge are generally owned collectively by the Indigenous community (or language group, or tribal group), as distinct from the individual. It may be a section of the community or, in certain circumstances, a particular person sanctioned by the community that is able to speak for or make decisions in relation to a particular instance of traditional knowledge. It is more often unwritten and handed down orally from generation to generation, and it is transmitted and preserved in that way. Some of the knowledge is of a highly sacred and secret nature and therefore extremely sensitive and culturally significant and not readily publicly available, even to members of the particular group.’ (https://www.humanrights.gov.au/sites/default/files/content/social_justice/nt_report/ntreport08/pdf/chap7.pdf)

An initial or continual ‘discovery’ and ‘input’ model for deep learning A/IS is not appropriate in cultures in which knowledge is collectively owned, regulated by enduring custom and tradition, and, in many cases, sacred and secret to the community or to particular sub-groups within the community. Such cases add a layer of complexity to using principles from even widely used human rights/medical ethics frameworks (for example, see https://aiatsis.gov.au/research/ethical-research/guidelines-ethical-research-australian-indigenous-studies or http://www.pre.ethics.gc.ca/eng/policy-politique/initiatives/tcps2-eptc2/chapter9-chapitre9/).

 

  2. Now is the time to produce developmentally appropriate, curriculum-aligned strategies and materials for educating and empowering children and young people about A/IS.

In addition to the ethical education of software engineers, computer scientists and other technologists (p. 144), attention must be given to the education of children and young people on A/IS, inclusive of but beyond privacy issues. IEEE should consider, as a matter of urgency, working with education specialists to develop developmentally appropriate curriculum material for digital literacy which can assist teachers in educating children and young people about: what A/IS is; its features and functions; the way A/IS is woven into everyday interactions with technology, including the IoT; ethical issues and A/IS; the potential of A/IS for good; and (for older children) the legal and regulatory frameworks, including those for privacy, that enable informed interaction with A/IS. To educate children is to begin to educate the community, as children bring their knowledge of A/IS into homes, real and virtual places of recreation and communication, and a range of public forums.

  3. More serious consideration of the consequences of unethical conduct is required: The case for professional registration

Professions that have a significant impact on people (teaching, medicine, psychology, social work, law) have long included a substantial component of ethical training in their degrees. Furthermore, these professions generally operate on a registration basis which is regulated both by national legal frameworks and by the professions themselves. Given the serious and substantial impact of the work of technologists on today’s society, especially computer scientists and software engineers who develop and unleash new technology, it is timely to consider reorganising the profession at both national and international levels so that there is a powerful means to educate on, and regulate, the ethical behaviour of its members. Unethical behaviour by teachers, doctors or lawyers can result in their being ‘struck off’ by the professional association and, where there are concomitant legal frameworks, barred from practising the profession itself. It is time for serious accountability. Technologists should be governed and made accountable through a professional registration process and legal frameworks: without these, there will be weak consequences for unethical conduct regarding A/IS and other technologies.

  4. Stop developing autonomous weapons systems (AWS), as no good will come from them: This is the ethical position to take.


Screenshot of p. 121 of the document with my scribble.

There is no greater crime than taking a human life, and humans should be held responsible for a death that they cause: this is a fundamental legal and ethical principle. This section of the document clearly outlines the case for NOT having AWS and provides a series of well-explained, evidence-based Issues regarding the devastating effects that are likely with AWS (Issues 4-11, pp. 120-130). These include the argument that humans may ultimately be unable to control AWS or to understand the logic of the machine’s decision-making, which logically makes the machine responsible for human death and destruction. To develop such machines and systems is therefore a profound abdication of ethical responsibility on the part of technologists. It goes against the vision of this document and the long-standing moral and legal principle of holding humans accountable for the death and destruction that they cause. The section on AWS begins with an assumption that these systems should necessarily exist and be subject to national and international law as the primary mode of regulation (Issues 1-3, pp. 115-120). However, this assumption is convincingly challenged by Issues 4-11, which outline the case for not forging ahead with this technology (pp. 120-130). The section should instead begin with Issues 4-11, after which there is no case for developing AWS. Furthermore, it is naive at best and disingenuous at worst to suggest that ‘codes of conduct’ and ‘reflective practice’ will be key practices in minimising harm from AWS. Codes of conduct and reflective practice are individualised responses to what is essentially a ‘make or break’ issue for your profession, which should hold a strong, evidence-based and highly ethical position on this issue: no good will come from AWS, and therefore they should not be allowed to be produced.

 

Feature picture: aesthetics of crisis, metropolis, https://flic.kr/p/gq2wFt  (cropped).

 
