Are We Underestimating the Social Impact of AI?

Every generation probably believes the era in which it lives is in some way special and unique. Often it's true: each generation undergoes events that permanently alter the way society functions. Sometimes the cause is war, or natural disaster, or a revelatory scientific advance. Technological advancement has served as such an impetus too; consider the advent of gunpowder or the printing press. How widely a technology will affect society is rarely considered before it is introduced; the arrival of mobile phones and the internet, for example, fundamentally altered the way we consume information and communicate – changes so far-reaching that they would have been hard to predict beforehand.

To suggest technological change is a double-edged sword is hardly controversial. The impact of advances in tech on society at large is sometimes predicted before they arrive, but just as often goes unanticipated. It's a topic I've touched on before on this blog – I feel strongly that jumping on new inventions without even considering the ramifications for our behaviour and customs is deeply misguided. Technological change, though, is a juggernaut, and standing in its way is hardly likely to prevent its occurrence. Perhaps trying to look forward and draw attention to what we guess the impacts of future technological change might be is all we can do – at least it's better than looking back with hindsight one hundred years from now.

It's notable, then, that many writers have already begun to consider what the dawn of artificial intelligence will mean for society. The end of the world is a popular suggestion amongst naysayers, though tech evangelists would argue such outcomes are unlikely. Whatever happens, most would agree that AI will change social structures quite significantly. From my perspective, some of the most interesting implications are explored in the HBO series Westworld, which features ambiguously conscious robots serving in an advanced theme park, where they suffer at the hands of human guests. The question of what constitutes ethical behaviour toward such a robot is a good one, but I'm more interested in the ambiguity of that consciousness, and what it means for us as humans.

Let's illustrate this with a classic thought experiment from the philosopher of mind John Searle, known as the Chinese Room. Imagine a room, he says, into which messages are passed in Chinese, and within which a computer processes those characters and produces a reply – also in Chinese. Searle then swaps the computer for a human armed with the same instructions (i.e. each set of input characters maps to a given response). Given sufficient time, the human's output would be identical to the computer's – despite the human having no knowledge of what the Chinese characters actually mean. In other words, they don't understand the conversation; they just follow the program. Thus, Searle argues, a machine running a program only manipulates symbols without ever understanding them, and so can never be truly 'conscious' in the way a human is perceived to be.
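To make the mechanics concrete, here is a minimal sketch of the 'room' as a program. The rulebook and its Chinese phrases are invented placeholders of my own, not anything from Searle; the point is only that applying such rules requires no grasp of what the symbols mean.

```python
# A toy 'room': a rulebook mapping each input string to a fixed reply.
# The phrases here are invented placeholders for illustration only.
RULEBOOK = {
    "你好吗？": "我很好。",            # "How are you?" -> "I am well."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def room(message: str) -> str:
    """Return a reply by pure symbol lookup - no translation, no understanding."""
    return RULEBOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

# Whether this lookup is performed by a CPU or by a patient human holding
# the same rulebook, the output is identical.
print(room("你好吗？"))  # -> 我很好。
```

Nothing in that lookup depends on whoever executes it understanding Chinese – which is exactly Searle's point.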

To unpack this a little further, we can use an example from current data science. You may have encountered the term 'machine learning' recently, but the 'learning' is not what many people might initially imagine. A machine 'learns' which response is appropriate for a given input stimulus when it is 'trained' on existing data; in other words, the machine extracts the statistical relationship between past inputs and responses, and uses that to predict the appropriate response to a new input. The computer grasps no causal, deterministic relationship – the association is only statistical.
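As a rough sketch of what that kind of 'training' looks like in practice – the toy data and the choice of scikit-learn here are my own illustrative assumptions, not anything specific from the post:

```python
# A toy illustration of statistical 'learning' with scikit-learn.
from sklearn.linear_model import LogisticRegression

# Past input stimuli and the responses observed alongside them.
X_train = [[0.1], [0.4], [0.6], [0.9]]  # inputs
y_train = [0, 0, 1, 1]                  # responses

model = LogisticRegression()
model.fit(X_train, y_train)  # 'training': fit the statistical relationship

# Predicting the response to a new input: no causal model is consulted,
# only the pattern extracted from the four examples above.
print(model.predict([[0.8]]))  # -> [1]
```

The model will happily emit a prediction for any input; at no point does it represent why the inputs and responses go together.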

"So far, so good", you might be thinking; if a machine can never achieve consciousness, then we're not going to have nearly as many problems. Perhaps you can rest easy with that knowledge – I cannot. While we can say conclusively that the computers in the examples above don't truly 'understand' the causal relationship between input and output (since we programmed them), we can't say the same about other conscious beings. We can only inhabit our own mind, and so can never be 'inside the room' for another consciousness. In other words, how would we ever tell apart a human and a computer perfectly simulating a human? The implication that we could be simulated without any understanding being present unsettles me; it undermines the idealised view of consciousness we hold so dear.

To illustrate what I mean, it's useful to look at one key critique of the Chinese Room argument. While some dispute Searle's view of the computer, another school of thought instead challenges his view of human consciousness. Humans, on this account, are in fact exactly like machines: we only think we understand causality, when in reality our responses to stimuli are statistical and grounded in prior experience, just as a machine's are. We are staggeringly more complex in the range of stimuli and data from which we can learn, but the language of cause-and-effect we use has, on this view, only arisen as a way to make sense of the world. This strictly materialist position has led some philosophers to conclude that although we believe ourselves conscious, this is an illusion; our internal monologue and sense of self could, in principle, be simulated by a machine.

This touches on a problem known as 'the hard problem of consciousness' – the fact that no one has yet explained why we experience any kind of stimulus in the world at all; in other words, why we are 'conscious'. I wonder whether the advent of machines able to simulate human behaviour will encourage disbelief in consciousness as previously construed – it would certainly not support our view of ourselves as special, conscious organisms. This is not a comfortable philosophical position to hold. The idea that our cherished humanity is just a bio-physical process is deeply nihilistic for most, and pours cold water on meaningful values closely held by almost everyone.

Perhaps most people will be comfortable dismissing this, reasoning that since we cannot see into the minds of others, the existence of consciousness can never definitively be refuted. But let me pose a hypothetical question. Imagine an AI of sufficient complexity was created that could simulate the responses of a given human. At the same time, a copy of that human's brain is grown in a lab, and its responses to stimuli are recorded. What would it say about human consciousness if, given a specific input, the behaviour of the machine and of the new brain were identical to that of the original human?

While I posed this as a hypothetical, it's not unrealistic to expect that we might create one or both of these 'new brains' in the medium to long term. Lab-grown brain tissue is already complex enough to raise tricky ethical questions for neuroscientists, while AI can already do a good job of simulating responses; I recently compared my actual answers to Myers-Briggs questions with those estimated by an algorithm using only data from my social media, and the degree of similarity is alarming.

The outcome we should perhaps hope for as a species is that no matter how complex we make machines, or how accurately we replicate a brain, a clear distinction remains between our human consciousness and the simulated kind. That would preserve belief in something beyond our biology. From my (perhaps pessimistic) perspective, this doesn't seem likely. Since we can't look into the mind of another human, we'll never be able to demonstrate that they understand something at a causal level where a machine simulating their responses does not. Even a fragment of doubt about our humanity is dangerous: after Galileo and others showed that the Earth was not the centre of the universe, the backlash from religious figures was immense. The shift to a more secular society wrought great change, and could hardly be called a peaceful transition. If AI forces us to question our own conscious experience, the effect on society could be just as profound.
