Controversy over Cambridge Analytica prompts some hard questions about surrendering our decision making to algorithms
Every time a new technology is introduced into society, advocates are quick to point out the ways in which it will improve our lives. A car saves travel time; a mobile phone means you’re never out of contact; the printing press allowed more books to be produced in far less time. It’s easy to forget, though, that technology doesn’t just make existing behaviour more efficient; in many cases, technological advances also change the spectrum of behaviour that is possible. The advent of the car gave rise to suburbia; mobile phones have fundamentally altered our ideas of communication; and the printing press allowed propaganda to be distributed en masse, in part sparking the Reformation in the 16th century and the centuries of war that followed.
The feedback loop between human behaviour and technology has always existed – the philosopher Marshall McLuhan captured the concept neatly when he stated that “the medium is the message” – but we’ve reached an interesting point in history as artificial intelligence begins to be used in earnest. Previously, technology aided us by making existing tasks simpler, or by extending our range of actions; now, for the first time, technology is capable of helping us make decisions – or of replacing us entirely in the decision-making process. If we’re not enormously careful, we’ll overlook the potential for AI and computer-aided decision making to fundamentally alter our behaviour.
For me, there is a specific cognitive dissonance on display amongst many technological evangelists. While some espouse different viewpoints, the key figures behind Facebook, Google and other leading brands generally express a perspective emphasising openness – embracing a wide variety of ideas and peoples, defending freedom of speech, and accepting the whole spectrum of human behaviour (up to the point where it becomes hate speech). At the same time, however, these companies are building decision-making algorithms for a huge variety of applications, immanent within which is a defined ‘best outcome’. Some set of individuals has decided what the ‘solution’ to each algorithm is – a specific answer that seems correct to those individuals; but in replacing humans as the decision makers, a one-size-fits-all algorithm may miss the full range of human solutions.
We can see examples even in the infancy of decision-making algorithms. Take education: using algorithms to assess the performance of teachers may be biased depending on the metrics used to judge performance, but more fundamentally it requires someone to decide what constitutes ‘good’ education. Is it simply more efficient teaching? Or something less tangible? Elsewhere we see algorithms deciding whether jobs or loans should be allocated, based on credit ratings – a metric that is notoriously flawed and biased against those from disadvantaged backgrounds. Who decided these choices were fair?
These kinds of examples require the programmer to make a specific ethical or subjective judgement, whether explicitly or unconsciously. The potential for bias is always there; Friedman & Nissenbaum, in a paper from 1996, wrote:
“Bias can enter a [computer] system either through the explicit and conscious efforts of individuals or institutions, or implicitly and unconsciously, even in spite of the best of intentions”
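To make that distinction concrete, here is a minimal, entirely hypothetical loan-screening sketch; the threshold value, the postcode set and the field names are all invented for illustration:

```python
# Hypothetical loan-screening sketch: both rules below are value judgements
# made by a programmer, not objective facts about applicants.

HIGH_RISK_POSTCODES = {"EX1", "EX2"}  # illustrative placeholder values

def approve_loan(applicant: dict) -> bool:
    # Explicit, conscious judgement: the author decided that 650 is the
    # line between 'creditworthy' and not.
    if applicant["credit_score"] < 650:
        return False
    # Implicit, unconscious bias: postcode acts as a proxy for neighbourhood
    # wealth, so this rule can disadvantage applicants from poorer areas
    # even though the author never mentioned income or background.
    if applicant["postcode"] in HIGH_RISK_POSTCODES:
        return False
    return True

print(approve_loan({"credit_score": 700, "postcode": "SW1"}))  # True
print(approve_loan({"credit_score": 700, "postcode": "EX1"}))  # False
```

Two applicants with identical credit scores receive different answers purely because of where they live – a bias the author may never have consciously intended.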
It gets worse when these algorithms are applied in wider and more diverse social settings. What is right in one country may not be right in another; even the UN recognises that some countries disagree with its specific definitions of human rights, particularly those who feel patronised by a universal ‘western’ cultural and moral framework. Placing ethically subjective decision making in the hands of algorithm authors risks imposing a single viewpoint on diverse populations.
Maybe you don’t consider this a problem, especially if you would make the same decisions the programmers make. Diligent algorithm authors might seek out a large and diverse sample of real people to build a set of answers to the problem their code is trying to resolve. We could even call this democratic algorithm writing, where a whole populace defines the ‘right’ solution to a problem. But in building such an algorithm, those ‘right’ answers become ingrained and enshrined within society, and behaviours start to be built around them. If anything has changed as much in the last several decades as technology, it’s social mores: from the adoption of equal rights in law, to attempts to reduce gender bias, to the recognition of indigenous peoples, generally accepted social perspectives have fundamentally changed. Even if you believe there is a universal set of moral truths, it’s hard to argue that behaviour itself has been unchanging. What if algorithms stall these changes, or reinforce existing inequalities and make it harder for future generations to correct them?
A related problem can be framed by talking about self-driving cars. In the event a pedestrian steps out in front of a car, a human driver might act in a variety of ways, but self-driving cars will be programmed to act in a specific fashion – some programmer will have to decide if the life of the pedestrian is more or less valuable than that of the occupant of the car. Are we prepared to sacrifice some of the control we want in our lives for some increase in efficiency?
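A toy sketch makes the point starkly: the moral choice is reduced to a single constant, and whichever value a hypothetical engineer picks is then applied identically across an entire fleet of vehicles.

```python
# Hypothetical collision-avoidance policy. The constant below is a moral
# judgement baked in by a programmer; every car running this code will
# make the same choice, every time.
PRIORITISE_PEDESTRIAN = True  # one engineer's answer to an ethical question

def evasive_action(pedestrian_ahead: bool) -> str:
    """Return the manoeuvre the car takes in an emergency."""
    if pedestrian_ahead and PRIORITISE_PEDESTRIAN:
        return "swerve"  # accepts greater risk to the occupant
    return "brake"       # accepts greater risk to the pedestrian

print(evasive_action(pedestrian_ahead=True))   # swerve
print(evasive_action(pedestrian_ahead=False))  # brake
```

Where human drivers would respond in a variety of ways, this code guarantees one uniform outcome – which is exactly the trade of control for consistency the question describes.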
This, to me, is the core problem posed by decision-making algorithms: we sacrifice control over our own lives. If we consider ourselves to benefit from living in a democracy, should we not be outraged – or at least concerned – that we are handing over control of a growing portion of our lives to technocrats whose ethical stances may differ from our own? Judging by the response to the Cambridge Analytica story, we’re clearly outraged that some people are using algorithms to influence our actual democracy – so shouldn’t we be similarly sceptical of other uses of decision-making algorithms?
Is it realistic to think we’ll turn around and stop the march of AI research? I doubt it. The relentless drive for efficiency, propelled in part by the free market, makes this kind of technological change all but inevitable. I do think, however, it is prudent to offer some key suggestions in these early stages:
– Make algorithms locally variable: rather than defining one global solution, authors should allow solutions to vary between regions and communities.
– Similarly, make the code responsive to social changes over time; don’t enshrine one set of ‘correct’ behaviours.
– Above all, make the solutions democratic: imposing a solution on a populace for any decision-making algorithm without consultation is tantamount to tyranny-in-miniature, and we should resist it in all its forms.
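The first two suggestions can be sketched in code. This is a rough illustration only – the region names, the `credit_cutoff` parameter and the version labels are all invented – but it shows policy as data that varies by locale and can be revised, rather than a constant hard-coded into the algorithm:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """A locally decided, revisable set of decision parameters."""
    credit_cutoff: int  # illustrative parameter, not a real-world value
    version: str        # records which consultation produced this policy

# Locally variable: each region holds its own policy rather than
# one global rule applying everywhere.
policies = {
    "region_a": Policy(credit_cutoff=650, version="2023-consultation"),
    "region_b": Policy(credit_cutoff=600, version="2024-referendum"),
}

def decide(region: str, credit_score: int) -> bool:
    # Responsive to change: updating a region's Policy record changes
    # behaviour without rewriting the algorithm itself.
    return credit_score >= policies[region].credit_cutoff

print(decide("region_a", 640))  # False
print(decide("region_b", 640))  # True
```

The third suggestion lives outside the code: who gets to write each `Policy` record, and how often it is revisited, is the democratic question.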