5 positions + 1 suspicion on AI and education

Prologue: This is not an introduction to AI or AI in education. You can find these here and here, respectively. Rather, this is a constellation of positions on AI from an educator’s perspective. I pull together disparate aspects of current debates about AI and education in order to articulate some directions for the field.

Lately I’ve been sitting in rooms with a lot of thoughtful people from industry, different disciplines in academia, and policy backgrounds to think through artificial intelligence (AI) and education. I say ‘AI and education’ to capture something of the evolving relationship, friction and fission that the word ‘and’ entails at this point in time as educators start to grapple with a technology which offers:

  • the automation of certain (as yet not clearly specified) aspects of teaching;
  • potential tools for predictive, adaptive ‘personalised’ learning (as yet not well developed or tested for efficacy);
  • and the power to drive ‘big data’ administrative and learning systems and apps in ways that are purportedly about helping people learn while driving efficiency (although the way AI is often invisibly infused into proprietary computing systems and apps makes it ‘opaque’, or difficult to tell where and how it is working and to what effect).

To be clear, at a global level, the very human field of AI and education has been designated by ethicists as a ‘high stakes domain’ that requires urgent, ongoing scrutiny and a coordinated response to ensure that AI is used for the benefit of students, teachers, communities and society more broadly (Campolo et al., 2017).

In light of the recent conversations I’ve been having and the thinking I’ve done, I offer a set of positions and one suspicion that I’ve developed, in the hope that this discussion may prove useful to others in thinking through the myriad issues AI raises for teachers in schools and tertiary education institutions.

Position 1 – Against fatalism

There is sometimes an almost fatalistic stance that depicts the machine age as one unitary force that will inevitably cause some intended, and a lot more unintended, good and harm. This is especially true where conversations about AI bias and its discriminatory outcomes are concerned. The conversation can go something along the lines of: some AI harm is unavoidable, or bias and its effects may only be identifiable in retrospect (after harm has occurred), because ‘bias is everywhere’, ‘all humans are biased anyway’ and ‘bias is a natural fact of life’.

It is true that humans have biases and that these affect how we interact, our life opportunities and the opportunities of others, and the design decisions we make with all sorts of products. However, let’s be very clear: bias is neither natural nor inevitable. Biases are learnt over time, change across cultures and eras, and certain types of bias flourish under particular conditions. Most importantly, biases can be unlearnt. Just as humans learn bias, so machines ‘learn’ bias when they are given labelled data sets or scraped data that have not been carefully checked by humans for bias.
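To make that mechanism concrete, here is a minimal, purely illustrative sketch (synthetic data, scikit-learn, entirely hypothetical variable names; it describes no real system) of how a model trained on historically biased decisions simply reproduces that bias when it makes new ‘decisions’:

```python
# Purely illustrative sketch with synthetic data; no real product or data set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two groups of learners with identical ability distributions.
group = rng.integers(0, 2, n)
ability = rng.normal(0.0, 1.0, n)

# Historical 'admit' labels reflect a biased past: group 1 was held to a higher bar.
past_admit = (ability > np.where(group == 1, 0.5, -0.5)).astype(int)

X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, past_admit)

# Same ability, different group: the model has 'learnt' the historical bias.
same_ability = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_ability)[:, 1])  # group 0 scores noticeably higher
```

The point is not the code itself: no malice is required, because the bias arrives quietly with the unexamined labels.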

Humans are responsible for this: for the purposes for which algorithms are created, for oversight of the types of data that machines use to ‘learn’ with, for the ‘decisions’ that automated systems make, and for the outcomes of those ‘decisions’. Humans also create the cultures of bias that influence all of this. For example, a lack of diversity in the worldviews and backgrounds of the technology workforce reflects a self-reinforcing homophily that can make a person, an organisation or a profession blind to biases and to the need for these to be actively unlearnt.

Looking your bias square in the eye, asking questions about where it came from and its effects on yourself and others, can be challenging and even frightening. Homophily is related to groupthink, and this can have very damaging consequences for actual people. Confronting bias within oneself and the groups one associates with is vital if potential harm is to be identified and addressed. It is simply not good enough to let the students we teach be harmed by biased AI systems and then try to address the harm after the fact.

Education is not an enterprise built on an experiment–fail/harm–remediate dynamic. There is a profound responsibility on those who develop AI systems to prevent bias, even if doing so is technically and existentially difficult.

Moreover, talk among technologists of embedding social norms and values into AI systems to make them ethical actively ignores the complications and beauty of diversity. It is based on an old-fashioned and largely discredited form of functionalism: the view that a social system has components (structures, institutions, value and kinship systems, etc.) that work together to maintain the order of the whole, a bit like organs in a body. However, it is far more likely that the norms and values of one group are not the norms and values of another, and that norms and values are continually contested and change over time; it often takes a social or cultural movement to amplify this and to press for social change. So who gets to decide on the norms and values of AI systems? Is this even possible? What would it look like, and how can any machine system account for conflict, change and difference in a respectful manner?

Position 2 – For the democratisation of knowledge and action

Following on from the above point, educators have been on a long, often hotly-contested, learning journey in trying to understand the value of socio-cultural diversity and difference and its very real implications for learning, pedagogy and curriculum. The question of whose values get embedded into AI systems that interact with teachers and students requires robust, ongoing democratic dialogue within the teaching profession, with our students and in the communities we serve. Let’s face it, there may not be a definitive set of values and norms that we can agree should operate within AI systems. We may not even agree on the preferred uses for AI in education.

However, I am sure we can agree to take a deliberate stance that AI bias is not inevitable (it is human-facilitated) and that ‘after the fact’ remediation of harm is not an acceptable starting position.

Education is about purposefully enabling human flourishing and no teacher would deliberately implement a practice or use a product that would impede this. This is why teachers, their students and communities need to be actively involved in the field of AI and education by:

  • undertaking lifelong learning on it;
  • engaging in institutional and public debate about it;
  • having an authentic role in designing and evaluating it so that harm is prevented and good learning outcomes assured;
  • creating baked-in governance mechanisms for calling a halt to systems or applications associated with real or suspected harm.

AI may very well re-make some fundamental aspects of education as we know it (see my post AI did my Homework on one aspect of machine learning and its implications for education). It will be up to educators to step up and for industry and government to engage with the teaching profession in good faith.

Maini and Sabri (2017) (nicking an insight from the writer Arthur C. Clarke) state:

‘Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.’ (p. 3)

Educators need to make sure that this vision does not come to pass, for themselves and their students. Democratising all aspects of AI and education is a step towards ensuring that citizens can make a world where AI works for them.

Position 3 – Procedural ethics is nothing without ethics-in-practice

Ethics is complicated territory. There are any number of ethical traditions, and applying them can yield different solutions and responses to the same quandary (e.g. a greater-good argument versus a virtue-ethics perspective on doing the right thing in a beneficent way). Educators face constant ethical dilemmas, and they understand that ethics is both procedural (regulated by institutional protocols, policy documents, guidelines and laws) and a real-life practice inseparable from teaching humans.

This ethics-in-practice involves cultivating a deep understanding of ethical principles so that they might be applied in situ when decision-making arises and, significantly, so that these decisions can be clearly articulated and justified to anyone who asks. Ethics-in-practice is often undertaken in social mess: in other words, it is a process of logic, explainability and responsibility that sits at the heart of very complex individual, socio-cultural and institutional contexts.

Those who design, make and market AI systems could learn a lot from the way educators navigate messy situations by simultaneously applying procedural ethics and a commitment to ethics-in-practice in order to come up with intelligible and responsible decisions.

As educators we must be wary of catch-phrases used in the area of AI ethics, such as transparency or contestability. For example, computer scientists will argue that it is not always possible or advisable to have transparency in machine learning systems, because of the proprietary nature of industry algorithms or because we should simply trust the humanly indecipherable processes of deep machine learning (so-called ‘black box’ AI). Some suggest we should just trust the machine’s analysis and ‘decisions’ even when these are not transparent to the very people, the computer scientists, who created the systems. Others say that calls for transparency would be disastrous for scientific progress. I have also heard of a push for algorithms designed to analyse and explain the algorithmic ‘decision-making’ of deep machine learning to humans – that is, one machine will translate the ‘decision-making’ of another machine, and we must place a double trust in this as a good process for black box AI.
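To give a concrete (and deliberately simplified) picture of what ‘a machine explaining another machine’ can look like, here is a hedged sketch of one common technique, a global surrogate model, using scikit-learn and entirely synthetic data; it is not drawn from any real educational product:

```python
# Hedged sketch with synthetic data: a simple 'surrogate' model trained to
# imitate a black-box model, which is one way a machine is used to 'explain'
# another machine. The explanation is an approximation with its own blind spots.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The 'black box': accurate, but hard for a human to inspect directly.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The 'explainer': a shallow tree trained to mimic the black box's decisions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A human-readable approximation of the black box's behaviour, plus a measure
# of how faithfully the surrogate actually imitates it.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
```

The ‘double trust’ above maps directly onto that last line: we are asked to trust both the black box and the surrogate’s imperfect imitation of it.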

It isn’t good enough to say that the principle of contestability of AI systems is key to ethical practice because often people don’t even know AI is infused into the systems they are using. You can’t contest what you don’t know is there and you can’t identify potential or actual harm if you are not aware of how algorithms make decisions in intelligent tutoring or learning management systems, for instance.

I think there is a whole world of ‘rabbit-hole’ discourse that we should be questioning. This ‘rabbit-hole’ discourse is a form of dialogue on AI and ethics that plunges everyone down a pit in which the original questions about transparency or contestability or accountability are deflected rather than genuinely engaged with. Such convoluted discourse does not build my trust in the ability of science to sort out these ethical quandaries. It reminds me of when sensible Alice in Wonderland engaged in the following dialogue with the formidable yet very confusing Duchess:

“I quite agree with you,” said the Duchess, “and the moral of that is—’Be what you would seem to be ‘—or, if you’d like it put more simply—’Never imagine yourself not to be otherwise than what it might appear to others that what you were or might have been was not otherwise than what you had been would have appeared to them to be otherwise.”

“I think I should understand that better,” Alice said very politely, “if I had it written down: but I can’t quite follow it as you say it.”

[Illustration: Alice and the Duchess, by John Tenniel]

Ask yourself as a teacher: is it adequate to respond to a question from a student or a parent about why a learner using an intelligent tutoring system is being sent down a particular curriculum path that differs from their peers’ by saying, ‘Because the machine said so and these machines know what they are doing’, or ‘Because the machine has magically personalised the learning’?

Universities may have in-house expertise or the resources to buy in experts to ask technical, pedagogical and ethical questions of AI systems; however, school principals may not have this option available. Access to independent advice can enable robust procedural ethics and ethics-in-practice and be a foil to ‘regulatory capture’, which is where government and those in governance positions (such as principals) become dependent on potentially conflicted commercial interests for advice.

These are just some of the ethical quandaries of AI for educators and ones in which educators and computer scientists alike need to engage together in earnest.

Position 4 – Understand that pressure points are pivot points

As the above examples illustrate, there are myriad pressure points in the current area of AI and education. By pressure points I mean issues where the interests of different sectors intersect to create uncomfortable, sometimes conflicting positions. These pressure points are bigger than all of us or any one sector or industry. They tap into major issues such as: human rights in the era of ‘big data’; the ‘datafication’ of humans and of the child and what this means for our sense of being human; digital inclusion and digital divides; how accountability might work when AI systems know no borders; and the troubled relationship between human agency and trust in automated technology, to name a few.

These pressure points can also be the result of a clash of cultures: between, for example, the basic commitment to transparency and explainability that is at the root of education and, at best, a misunderstanding of its importance in some computer science communities. Honestly and clearly identifying such pressure points, and laying them bare in an ongoing manner, will hopefully create (democratic) action to pivot towards solutions or resolutions of issues in AI and education, and may perhaps even result in innovation of AI for educational good.

Position 5 – There is always time to ask the right questions and act wisely

Maybe it’s just me, but there seems to be a growing feeling that we are under pressure to rush towards solutions in AI and education, or to rush AI-infused systems into schools, universities and classrooms, without a careful, inclusive approach that taps into the phronesis or practical wisdom of educators at scale. We should be bringing together the phronesis of educators with other types of knowledge, such as the ‘techne’ or artful knowledge of developers and the ‘episteme’ or scientific knowledge of computer scientists (and researchers in other areas such as neuroscience or the learning sciences).

Yes, I know we have an urge to be innovative and leading edge, and to hold the mantle of ‘early adopter’. There is nothing wrong with this as long as we keep our eye on the long game. We must enter the field informed by a foundational knowledge of AI and of the evidence base for using it for learning, have access to independent expert advice, and be prepared to engage in open debate with industry, across disciplines and with the communities we serve.

AI and education will not be a field where policy can be easily settled or enjoy a substantial shelf life, or where decisions around the design, implementation or governance of AI systems are arrived at (easily) through consensus.

We owe it to ourselves as educators to embark on lifelong learning about this technology and to develop awareness and foundational knowledge of it and its global debates, so that we can do our job with an eyes-wide-open disposition.

We should take the time to build the networks, forums and professional learning approaches that will allow us to have a strong voice in this area. We are more than customers or consumers or administrators or generators of data points for/of AI systems.

We owe it to our students and their communities to be informed citizen-professionals willing to work to democratise the AI and education space.

One suspicion – AI and education is a wicked problem

After laying all this out, and having worked on very tricky social issues for a couple of decades, I have to be honest and say that I suspect that AI and education is a wicked problem.

In their classic article, Rittel and Webber (1973) argue that those in the social professions (and I include computer scientists furiously working on AI ethics in this category) incorrectly believe that they can solve social problems in the same way scientists or engineers resolve technical problems. They suggest that this mindset is not helpful, as wicked problems have a set of complicated characteristics that relate to three areas: goal formulation; problem definition; and response to social context.

Firstly, Rittel and Webber explain that goal setting with wicked problems can be an ‘obstinate task’ because the issue of who sets the goal and whom it affects is often up for dispute. Ask yourself: who is currently setting the educational goals in AI systems, and how can those who will be directly affected have a say?

Rittel and Webber also argue that many problems encountered by the social professions are wicked because they cannot be compared with the tame or ‘benign’ problems commonly encountered by scientists or engineers. Tame problems are clearly definable, are demarcated or separable from other problems, and have solutions that are ‘findable’ even if they are technically difficult. In contrast, wicked problems can be defined in multiple ways and are often prone to political intervention and divisive public debate. Here, the term ‘wicked’ describes complex, often intractable issues; it does not denote a moral judgement. This point goes directly to the misunderstanding among computer scientists about identifying collective social norms and values for AI systems: norms and values are not findable objects but messy, moving phenomena prone to conflict, debate and change. Are computer scientists sometimes confusing wicked problems with tame ones? I think so.

Thirdly, finding definitive solutions to wicked problems is not always possible because stakeholders can have radically different frames for understanding the problem and the solutions. This premise rests on a rethinking of social context as a boisterous ‘plurality of publics.’ Social context is viewed as diverse stakeholders seeking to pursue varied and sometimes conflicting goals where one solution will never fit all. Nurturing and harnessing the ‘plurality of publics’ associated with AI and education will be key to resolving issues, even if these resolutions are not as durable as we might like. Accepting this will be key to building trust in the technology and using it in schools, colleges and universities for human flourishing.

Postscript: I will revisit this post in a year’s time and see if I’ve changed any of my positions or if I need to review my suspicion. At any rate, I am ready for the intellectually joyous ride that comes from just thinking about this difficult stuff. Bring it on!
