On the way to work this morning, I stuck my headphones in and went looking for some new music. I’m a Spotify convert – given that, since the advent of iTunes, you don’t really own the music you buy digitally anyway, I like the variety you get with unlimited streaming. As is often the case, I was in a rush and fed up with my standard rotation of saved songs, so I selected the curated “Discover” playlist, filled with music an algorithm has determined I’ll like based on my prior choices. The algorithm is often very effective – more than a few of the recommended songs were ones I’d go back to. But it got me thinking how little say I’d had in my own music choices. In this case, I’m using recommendations as a crutch to offset my own musical ignorance and laziness, but it started a train of thought about how much of our decision making we give up to algorithms – and more generally to tech companies – and how that will affect society in the future.
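For the curious, here’s a minimal sketch of the general technique behind playlists like this – collaborative filtering. This is my own toy illustration of the idea, not Spotify’s actual system, and every name and number in it is invented:

```python
# A toy collaborative filter: recommend songs liked by users with
# listening histories similar to mine. Purely illustrative data.
import numpy as np

# Rows are users, columns are songs; 1 = saved/liked.
listens = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
])
me = 0  # index of the user we're recommending for

# Cosine similarity between my listening history and everyone else's.
norms = np.linalg.norm(listens, axis=1)
similarity = listens @ listens[me] / (norms * norms[me])
similarity[me] = 0  # don't match with myself

# Score unseen songs by how much similar users liked them.
scores = similarity @ listens
scores[listens[me] == 1] = -np.inf  # exclude songs I already know
print("Recommended song index:", int(np.argmax(scores)))
```

The point to notice is that my “choice” here is entirely a weighted echo of other people’s past behaviour.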
The more we use tech in our day-to-day lives, the more important this kind of discussion becomes. On average, people in the West spend two or more hours a day on their phones, which means that the way we spend roughly a tenth of our waking lives is strongly guided by choices made by tech companies. Given that those choices are made with little to no democratic input, this eats at the notion that we have complete freedom within our daily lives. My personal take is that several crucial aspects of that loss of freedom make this a worrying trend.
There has been extensive recent coverage in the media of the so-called ‘techlash’ – a backlash against tech companies’ habit of designing new ways to solve current problems without considering the impact going forward. Facebook, Amazon, Google and Apple are often caught up in these discussions, and if you’ve been following this you might be thinking of an aspect of the ‘techlash’ that’s relevant to the issue of freedom – the monopolistic nature of these companies. Margrethe Vestager, the European Commissioner for Competition and the EU’s most prominent antitrust enforcer, was recently interviewed about her take on this. She echoed other voices in worrying that the giant scale and market dominance of each of these tech firms leaves the consumer little option in how they might search the internet or interact with friends on social media. In other words, the freedom to choose another way to act online is absent.
The algorithms themselves are designed to take away elements of freedom too. Much as consumer consultants and focus groups have reorganized our shopping experience to maximize the chance that we’ll buy goods in physical stores, advertising algorithms are set up to do the same online, but at vastly greater scale. Using huge swathes of data, machine learning methods can comb through patterns of behaviour online to assess the most efficient way to part each of us from our money. It’s certainly not a one-size-fits-all approach, either; since each individual has their own online footprint, algorithms can assign a perfectly matched set of targeted advertisements to every single person. Being guided to do something without realizing it is a form of unconscious coercion, and I’d argue it represents another step in our loss of freedom online.
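To make that concrete, here’s a toy sketch of how such a targeting model might work – a simple classifier that scores a user’s likelihood of responding to an ad from their behavioural footprint. The features, data and threshold below are all invented for illustration; real systems are vastly more elaborate:

```python
# Toy behavioural ad targeting: fit a model on past browsing features,
# then score a new user's likelihood of clicking a targeted ad.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic footprints: [pages_visited, late_night_sessions,
# price_comparisons, past_purchases] for 1,000 hypothetical users.
X = rng.random((1000, 4))
# Invented historical outcome: did the user click the ad?
y = (0.8 * X[:, 2] + 0.5 * X[:, 3] + 0.2 * rng.random(1000) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new user and decide whether to show them the ad.
new_user = np.array([[0.7, 0.2, 0.9, 0.6]])
click_probability = model.predict_proba(new_user)[0, 1]
print(f"Predicted click probability: {click_probability:.2f}")
```

Even a crude model like this only needs to nudge the odds slightly in aggregate to be enormously profitable at scale.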
The Instagram influencer is simply a more effective and better-targeted version of placing the sweets at the supermarket checkout – and instead of seeing it maybe once a day, you see it dozens of times.
You might scoff at this – perhaps you’re the kind of person who blocks ads and refuses to read anything put in front of you. How clever can these algorithms possibly be? In the past, I’ve thought something similar, but now I’m not so sure. And it’s not because the algorithms are cleverer than we think – it’s that humans are just highly predictable.
To preface this, a little discussion of personality trait analysis. There is a cottage industry of categorizing personality traits so that we can ‘better understand ourselves’. You may have heard of the Myers-Briggs test, or the ‘Big Five’ personality traits. Rating people on a scale from introverted to extroverted, or on their openness, conscientiousness and other traits, is a method some practitioners use to provide more nuanced personal advice to individuals. These tests have often been deployed in business consulting to find the right fit for team members in different environments. This kind of analysis has many critics, and I’m certainly not going to argue for the validity of the methodology. All you need to know is that I took the ‘Big Five’ personality test as part of a machine learning experiment organized by Cambridge University. The results of the test were then compared with what an experimental algorithm predicted my results would be, based solely on my Twitter interactions (people I follow, what I’ve written, what I’ve liked).
The results were almost identical. Maybe this isn’t a surprise to some people – but to me, as someone who tries to keep a relatively uncontroversial and limited profile online, this predictive ability was almost uncanny, especially given the seeming independence of my personality traits from my online presence.
The machine learning method simply compared my online presence with those of people whose personality profiles are known, and found a very good match. Armed with that information, advertisers can push news and advertising into my space online that has been tailored to fit people like me more effectively. Even if that’s only marginally effective, it’s still infringing on my freedom of choice.
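Here’s a rough sketch of that matching idea as I understand it – the actual Cambridge study’s method will differ in detail, and everything below is a synthetic stand-in. Represent each user’s Twitter activity as a feature vector, then predict Big Five scores from the most similar users whose test results are known:

```python
# Predict a new user's Big Five scores as a weighted average of the
# scores of the most similar users in feature space (k-nearest
# neighbours). All data here is randomly generated for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# 500 hypothetical users: features derived from follows/likes/posts.
user_features = rng.random((500, 20))
# Their known Big Five scores, each trait on a 0-1 scale.
big_five_scores = rng.random((500, 5))

model = KNeighborsRegressor(n_neighbors=10, weights="distance")
model.fit(user_features, big_five_scores)

me = rng.random((1, 20))  # my own (synthetic) Twitter footprint
predicted = model.predict(me)[0]
traits = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]
for trait, score in zip(traits, predicted):
    print(f"{trait}: {score:.2f}")
```

No insight into my inner life is required – the method only needs enough people like me in its training data.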
Perhaps we’ve never had freedom of the sort I’m describing. Perhaps as new modes of living have developed and more of our time has been freed from manual labour, those with means have jumped in to exploit that time and tell us how to use it. Whether through advertising or religion, there has perhaps always been some pressure, overt or covert, pushing us to use our time in a given way.
Moreover, it seems humans are more predictable than we’re often prepared to admit. At the same time, many of us would take some comfort in the notion that there are parts of our personalities and thought processes that a machine would struggle to understand and predict. This is particularly true of our rich internal lives and our internal monologue.
There’s something I find unnerving about following that train of thought, however. I imagine most of us would like to feel understood by our friends and family – those closest to us. But how can anyone try to understand us? By assessing our outward behaviour and our actions. We have to accumulate data and make inferences about those around us based solely on what we observe. For now, the way we absorb that data through our senses is far more diverse and integrated than any machine is capable of at present, but it seems like only a matter of time before machine learning catches up.
Depending on your theory of consciousness, you may be convinced that we can know ourselves in a meaningful way – that we can comprehend the causal relationships within our own behaviour, rather than relying on statistical relationships as a computer would need to. However, even if there is a separation between human consciousness and what a machine could theoretically be capable of – and that is by no means certain – it seems that the relationships we have with others could still be approximated by a machine learning method of sufficient complexity. Relationships with others are a separate concept from internal consciousness. While this doesn’t explicitly relate to the loss of freedom of choice, I personally am disquieted by the notion that our relationships with others could be replicated by a machine, and I worry about the implications for our society at large if this turns out to be the case.