Why aren’t we leaving space in modern discourse to be wrong?
It’s probably almost impossible to be a social media user in 2018 and not come across heated exchanges on a fairly regular basis. Even as we become more insulated within the bubbles of our own accepted ideas, some issues remain so contentious that strong words are routinely exchanged. Brexit, Trump, climate change, #metoo: just a few examples where there is sufficient depth of feeling to argue with people either within (or, more often, outside) your social circle.
Reading these exchanges is almost always a sure-fire way to raise your blood pressure and prompt loud closing of laptops in frustration. Most often, neither side backs down, and ad hominem attacks increase in intensity as multiple parties scream into a void without listening. Both sides believe in their own ‘correctness’; whether value judgement or otherwise, it’s rare to see people back down from their tightly held personal convictions. It’s not debating, it’s arguing for the sake of noise. We can attribute some of this to the depth of emotion attached to certain issues, but even discourse in the written-word format of many social media platforms – which should encourage a more considered response – deteriorates all too quickly.
I get as annoyed by such exchanges as anyone. The potential for deterioration of mental and physical health just by engaging with current affairs was all too starkly described in the New Yorker recently, as the author saw clinical changes in her blood pressure and stress levels as a result of her involvement with the #metoo movement. As a scientist, though, there is a specific aspect of all of these arguments that aggravates me (and probably drives up my blood pressure), beyond the specifics of the discussion: the inability of each participant to state how they could be wrong.
There probably aren’t many people who thoroughly enjoy being wrong. Self-esteem is a powerful motivating force, and it has been suggested that being shown up as wrong in public can provoke a literal ‘fight or flight’ reflex, so defensive reactions are par for the course when someone is shown to be incorrect. Debate breaks down as the neurochemical response reduces logical and analytical thinking. How can discourse proceed under these conditions?
As I moved from school into academic science, the value in being right became less and less clear. Particularly as a PhD student, one of the key lessons I had to learn was to be much more open about saying ‘I don’t know’, or ‘I made a mistake’; foolishly chasing blind alleys or guessing wildly wouldn’t help me. If the point of advancing science is to uncover new knowledge, then this must by definition be something you didn’t know before – so as a scientist, being in the dark about something (or even being wrong) is fundamental to the work.
There’s an even more crucial point here. A more nuanced view of scientific progress should acknowledge that we can never definitively prove any of our theories; they can only be definitively proven false. This is the classic ‘proof by induction is not possible’ argument made famous by Karl Popper, which I have written about previously, and a distilled example is as follows: I theorise that all swans are white, based upon my prior observations (and preconceptions). This theory adequately describes past events, but there is always a possibility that a black swan exists – and thus the theory would not hold. This resembles the statement commonly attributed to Socrates: “I know one thing: that I know nothing”. Nothing is truly known for sure, beyond one’s own perceptions.
Popper argues that we should thus consider a theory ‘scientific’ if it is falsifiable – in other words, if the theory can be disproven by new evidence. This is what separates science from pseudoscience. Although not accepted by all philosophers of science, Popper’s view has gained significant traction among many scientific practitioners. In practice, it’s rare to see this expressed so plainly, but many scientists would argue their work only provides the best available explanation for the evidence so far – not the final answer to a given problem.
So what does this have to do with bickering over social media? I would argue that scientists make it easier on their egos and self-esteem if they recognise that their own work is inherently fallible, and that fallibility is in fact almost what defines progress. They could further embrace this if they laid out the conditions under which their theory would be proven wrong – and the evidence they would like to see. For example, my theory that pigs cannot fly would be definitively proven wrong by clear evidence of pigs flying. My point is, if participants in arguments held in the public forum of social media were both to ensure that their points were clearly falsifiable and to state the evidence that would falsify them, they might have an easier time admitting their own wrongness – and thus perhaps bring more civility and rationality to public discourse.
There are, of course, caveats. Suggesting people should explain what evidence would definitively prove them wrong leaves room for abuse; demanding absurd or impossible evidence would hardly be productive for discourse. Moreover, standards of acceptable evidence will vary for different individuals. In my absurd hypothetical example above, what would constitute clear evidence of pigs flying? A written account, a video or still image, or observations we make with our own eyes? Nearly all species of evidence can be doctored, or fraudulently produced. Even more fundamentally, not every topic of debate relies upon scientific reasoning; opinion is mightily important, and much of what is discussed comes down to differing value sets.
On the other hand, specifying the sources of evidence required might, as a side effect, cause people to examine their trustworthiness more carefully. It may be a naive hope, but perhaps if comment authors and pundits had to define what could prove them wrong, they would think more carefully about their arguments. The temptation to dress opinion up as falsifiable theory might also be curbed. And would debating really be worse if participants at least acknowledged the mere possibility that they could be wrong? Allowing room to make mistakes might even prevent calamitous social media posts returning to haunt their authors as new evidence comes to light in the future, as has been so clearly seen recently.
It would be imprudent to write in favour of defining the evidence to disprove your point without doing the same myself, so here goes: I am arguing that offering the evidence you’d want to falsify your argument should lead to more constructive debate, and clearer, more rational arguments. Some aspects of this are harder to quantify than others, but if my argument holds, I would look for some of the following aspects to change if the ‘falsifiable argument’ is deployed more widely:
– Decline in negative mental health outcomes from social media use;
– Reduced partisanship amongst citizens;
– Decline in ad hominem and insulting behaviour online.
Conversely, either no change or an opposite shift in such factors would negate my own point.
Facebook and other social media sites have begun to experiment with ‘fact-checking’ articles; I would argue that they should extend this to assessing whether the arguments made are falsifiable. And to those who spend time arguing online, I’d pose the following question: are you trying to win a debate, or advance the field of human knowledge? What benefit do you gain by being right?