Blog

Science, philosophy, and depression

Growing up, I was endlessly fascinated with understanding how the world worked. Discovering the neat, interlocking processes by which the landscape and ecosystems around us are governed filled me with joy, and it should come as no surprise that I followed these interests to become a scientist. Science, for me, became fundamentally about understanding the truth of the physical world: using evidence to confirm our theories of nature, and uncovering facts that could replace our guesswork and idealised notions as a way to comprehend the world around us. It seems to me that for many scientists, this passion for truth is a key reason for their choice of career; few scientists would say they’re in it for the money, after all.

In my schooling, it was rare to conflate science with philosophy to any great degree. Facts could remain facts, and objectivity was permitted. But as someone interested in the nature of things, I have over time been drawn to philosophy to help make sense of exactly what objectivity means, and what we can really mean by a fact. Perhaps some of this philosophical inclination comes from a scientist’s urge to ‘question all assumptions’. Why not ground my scientific understanding on the most secure base I could find? The fewer axiomatic assumptions I needed to accept to build a coherent world-view, I reasoned, the closer I could be to truly objective knowledge.

For those of a similar mindset, scientific philosophy holds little solace. The possibility of ‘a provable theory’ disappears rather quickly; Karl Popper famously used the ‘black swan’ example to show that inductive proof falls apart. In short, let’s say I have a theory that ‘all swans are white’; every swan I have ever seen supports it, but there is always the possibility of a ‘black swan’ I am yet to encounter, and no number of white swans can ever prove the theory, while a single black one disproves it. Dig further back into philosophy and one can find a litany of thinkers who wrestled with the question of whether knowledge is genuinely possible at all. Hume, Kant, Hegel and others since have argued over whether we as subjective observers can ever really perceive the objective truth of the world around us. Perhaps I lack the faculty to truly understand the counter-arguments, but I find it hard to see beyond the sceptical position offered by Hume: genuine objective knowledge of the world as it truly is remains forever outside our reach.

This philosophical perspective does not coexist easily with my self-image as a scientist seeking the underlying truth of the world. The realisation of this internal conflict did not happen all at once, but over the course of my PhD it dragged at my psyche. When one always has the nagging doubt that what one is working on is merely observational, and not necessarily true, motivation becomes problematic. Having struggled to accept compromise in any internal viewpoints from a young age, I found it even more difficult to accommodate this kind of scepticism alongside more everyday notions: politics, religion, social mores all rest on axiomatic propositions, with roots in history and human psychology. I had been reading Foucault and Barthes at the time, and the nihilist, postmodern interpretation of the world they presented certainly fell in line with my thinking. How could one believe in anything at a deep level, if it required accepting an unprovable axiom at its core? This is my interpretation of Nietzsche’s void; I had sought to be sceptical of the world in order to understand it, but scepticism had left me unable to view anything sincerely.

As someone with an inclination to dwell on a train of thought, I struggled with this for some time. For a while it led to what I now recognise as depression, although I refused to label it as such at the time. Too many mornings began with the question ‘what is it all for?’. Gradually realising that I was bereft of a higher meaning to my existence, and aware that my own pathway to this viewpoint was so acutely pseudo-intellectual as to alienate anyone I spoke to about it, I found it difficult to address or seek help.

Fortunately for anyone who spent any time with me, and particularly my family, who had to listen to my attempted philosophical meanderings, I found some measure of a solution. While sat on a train staring out of the window into the rain during a long winter in Berlin, I asked the question that we should all ask ourselves now and again:

“So what?”

“So what if you can’t access the objective truth? You are, by nature, a subjective human living in a subjective world – subjective knowledge has to suffice.” Bertrand Russell, whose writing I discovered shortly after, put it much more eruditely: “Scepticism, while logically impeccable, is psychologically impossible, and there is an element of frivolous insincerity in any philosophy which pretends to accept it.”
The psychological aspect is perhaps the most important point here. No matter how sceptical one is of social constructs, human biology necessitates certain inputs; we would perish pretty quickly if we decided to skip food and sleep because there “isn’t an objective need to eat”. Diogenes the Cynic, the ancient Greek philosopher who according to apocryphal tales lived in a barrel to point out the folly of human conventions, probably didn’t fulfil a great many of the psychological needs we commonly discuss today, such as those in Maslow’s famous “Hierarchy of Needs”. In my case, trying to achieve a deeper perspective was pushing me towards depression – and offering no help with actually living as a functioning adult.

I knew that for me, to be satisfied, life needed to have a purpose and direction, and now I was at liberty to choose that direction for myself. I knew that I had to make the conscious decision to attribute meaning to whichever direction I chose, despite my underlying scepticism that objective meaning is possible. Albert Camus, in The Myth of Sisyphus, suggested that in the face of this absurdity, the choice we must make is to live in spite of it. With this realisation came some level of freedom; the choice of meaning was mine alone, and not defined by any other authority. It’s not a complete answer, and it never can be – at a philosophical level, one is still obliged to lie to oneself – but it has to suffice.

It’s easier to wake up in the morning with a specific goal and objective, even if I know that it only matters to me on a subjective level. Perhaps if I had never spent time delving into philosophy and overanalysing the world in which we live I might never have asked such nihilistic questions; hindsight is overrated, though. This is not a story that has a satisfactory ending, as I’m still working to answer these questions for myself. More than anything, I’m writing to share my experience. It seems to me that numerous aspects of society are moving toward this kind of sceptical thinking; the decline in organised religion and the rise of postmodern cultural tropes are examples, although some may disagree. Some, however, may be thinking along the same lines as me, and I hope they might find some common ground here.

 


Climate Change and Lost Potential

How much of the impact of anthropogenic climate change will go unnoticed?

When you hear the phrase ‘disastrous impact of climate change’, what comes to mind? Pictures of dying polar bears and bleached coral reefs? CGI images of major cities half submerged under rising seas? Perhaps for some it’s “bleeding-heart liberals exaggerating the problem at the expense of economic growth”…? I’d wager that if you are strongly concerned one way or another, you have some specific images in your mind about the outcomes that our human footprint will have in the future. What if, in focusing on these dramatic and catastrophic concerns, environmental advocates are missing the more insidious potential impact of climate change? This is perhaps my biggest concern for the next hundred years: that some of the worst impacts will simply go unnoticed, with long-term changes masked by the pace of social and economic change. Or worse, that not enough people will care.

To elaborate on this point, let’s look at an illustrative example. In 2012, the DARA organisation produced an analysis of the impact of climate change and carbon-related pollution on both human mortality and economies on a country-by-country basis, including projected estimates for the year 2030. Their results were the basis of a number of articles at the time arguing that ‘climate change kills more people than terrorism’. Let’s use India, a large, developing country that the report suggests is highly susceptible to climatic effects, as a test case for the potential impact of climate change.

In the data, we learn that at present, 200,000 people die every year in India from climate-related causes, and a further 900,000 from carbon pollution (mostly smoke from cooking). By 2030, these numbers are anticipated to rise to 350,000 and 1,050,000 respectively (an overall increase of 300,000 per year). But what happens when we compare these numbers with the projected changes in the overall population and life expectancy of India?

The United Nations provides historical population estimates as well as projections of future change. The Indian population in 2010 was 1.231 billion people, and this is projected to grow to 1.513 billion in 2030. The UN also estimates a ‘crude’ death rate of 7.91 per 1,000 people per year in 2030 (vs 7.53 in 2010). These numbers are a little meaningless without context, but let’s combine the populations with the estimated death rates: we find that in 2010 around 9.3 million people died in India (from all causes), and in 2030 this number will be close to 12 million.

What’s my point? Well, if we compare the fraction of all deaths due to carbon or climate in 2010 (around 11.8%) to the expected fraction in 2030 (c. 11.6%), we see there’s not much change.
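For anyone who wants to check the arithmetic, here is a minimal sketch in Python using only the DARA and UN figures quoted above; the small differences from the rounded percentages in the text come down to rounding of the totals.

```python
# Back-of-the-envelope check of the figures quoted above, using only the
# DARA mortality estimates and UN population / crude death rate projections.

def share_of_deaths(climate_deaths, carbon_deaths, population, crude_rate_per_1000):
    """Climate plus carbon-pollution deaths as a fraction of all deaths."""
    total_deaths = population * crude_rate_per_1000 / 1000.0
    return (climate_deaths + carbon_deaths) / total_deaths

# 2010: 200,000 climate-related and 900,000 carbon-pollution deaths;
# population 1.231 billion, crude death rate 7.53 per 1,000.
share_2010 = share_of_deaths(200_000, 900_000, 1.231e9, 7.53)

# 2030: 350,000 and 1,050,000 respectively;
# population 1.513 billion, crude death rate 7.91 per 1,000.
share_2030 = share_of_deaths(350_000, 1_050_000, 1.513e9, 7.91)

print(f"2010: {share_2010:.1%} of all deaths")  # ~11.9%
print(f"2030: {share_2030:.1%} of all deaths")  # ~11.7%
```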

If the proportions remain constant, social institutions have no impetus to change in response, at least in terms of the impact on mortality. It might be argued that 2030 is too short a window to really notice the big climate changes. The World Health Organisation can help push these estimates a bit further: it expects 500,000 additional deaths due to climate change every year in 2050. If we again use the UN population estimates, though, even this vast number of climate-related deaths ends up being only around 0.5% of the estimated global deaths per year in 2050.

I would not be surprised if commentators took issue with how I’m interpreting these numbers. I should stress from the outset that I’m not intimating that 500,000 additional deaths is irrelevant or insignificant; these deaths could be prevented, and the cumulative cost is horrifying. My point is that the slow pace of climate change could allow these changes to sneak by relatively unnoticed. At least initially, the climate death toll will not look like the Black Death, with drastic declines in population; based on current projections, the population will continue to grow, at least through the first half of this century. And if health outcomes do begin to decline, the range of factors that could cause this change will confound efforts to tie it conclusively to climate change – just look at the fall in life expectancy in the US as a result of opioid-related deaths for an example of how short-term effects can easily swamp the slower shifts from climate change.

And this, essentially, is what concerns me: the drawn-out nature of the crisis will lead to a loss of potential human life and well-being that is swamped by the near-exponential growth of our society and economy.

The economy itself is another system that policy-makers have historically used to gauge the overall progress of nations, and one which we can also use to illustrate the possibility that climate change gets lost in the data.

The economic cost of climate change is a topic of great debate for economists and scientists alike. On top of the healthcare costs associated with higher mortality, the greater likelihood of extreme weather events, droughts, famine and further indirect effects could be extremely expensive. The Stern Review in 2006 estimated the cost of a business-as-usual regime at more than 5% of global GDP annually, and in the most catastrophic scenarios this could be higher than 20%.

Will we be richer or poorer in absolute terms in the future? Does GDP growth mean that these percentage costs will be absorbed by the increase in the overall economy? Some scientists from Stanford have suggested that in 5-43% of countries (primarily in the developing world), the economy may be worth less than it is today.

However, it’s notable that both the accountancy firm PwC in its projections to 2050 and the OECD in its estimates to 2060 suggest growth is the only trend we should expect in global GDP over the coming decades. Some might argue that these reports have underestimated the potential economic risk, but these organisations are well established and trusted by policy-makers in a broad range of settings, so there are at least some serious suggestions that any major economic impact of climate change will come later than 2050, or that it will be absorbed by economic growth.
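To make ‘absorbed by economic growth’ concrete, here is a small illustrative sketch. The 2.5% annual growth rate is my own assumption for the example, not a figure from the PwC or OECD reports; the 5% and 20% costs are the Stern Review scenarios mentioned above.

```python
# Illustrative only: how steady compound growth can mask a large percentage cost.
# The 2.5% growth rate is an assumed round number, not taken from PwC or the OECD.

def gdp_index(years, annual_growth):
    """GDP relative to today after compounding growth over a number of years."""
    return (1 + annual_growth) ** years

years_to_2050 = 2050 - 2018
baseline = gdp_index(years_to_2050, 0.025)   # roughly 2.2x today's GDP

for climate_cost in (0.05, 0.20):            # Stern Review scenarios
    damaged = baseline * (1 - climate_cost)
    print(f"Cost of {climate_cost:.0%} of GDP: the 2050 economy is still "
          f"{damaged:.1f}x today's size (vs {baseline:.1f}x without the cost)")
```

The point of the toy numbers is simply that a permanent cost of even a fifth of GDP is smaller than a couple of decades of compound growth – exactly the kind of masking I am worried about.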

Moreover, since the negative effects of climate change will disproportionately impact developing countries in the global south, the economic impacts may be even more strongly masked in the short term. Some developing countries will benefit from an effect known to economists as ‘convergence’, whereby their economic growth is faster than that of developed states, in part due to the increased availability of existing technologies. Their capacity to absorb negative economic effects of climate change while still growing may thus reduce potential civil unrest and limit the political will to act on environmental issues. Where this convergence is not at work (as has notably been the case in many Sub-Saharan African nations), the lack of global power and media interest may similarly limit the attention that climate change in such settings generates.

Even as the United States experienced the most expensive year of natural disasters in its history, the economy continued to grow; in fact, some government officials took the position that Hurricanes Harvey and Irma would not have any long-lasting effect on the economy. The record-breaking storms this year actually highlight two things: first, that even under a dramatically increased number of climate-related disasters, large developed economies may be able to absorb much of the damage; second, and much more concerning, that places receiving only marginal media attention may suffer far more significantly. The impact of Hurricane Maria meant that Puerto Rico is one of only four economies forecast by the Economist to shrink in 2018 (the others are Venezuela, North Korea, and Equatorial Guinea). It’s all too easy to envisage a future where developed countries turn a blind eye to increasing climate-related disasters in the global south as long as their own economic growth is maintained.

None of this is to say that truly disastrous effects will not occur. As pointed out above, climate change is already leading to thousands if not millions of extra deaths each year, and this is only likely to increase. The point is that there are conceivable scenarios where these slow-moving changes get masked by the ‘march of human progress’, until it becomes too late. The changes in the physical environment that can result from climate change aren’t all linear, and there is the possibility of ‘surprise’ events that we won’t be prepared for, like a rapid decline of agriculture due to desertification, or the collapse of the West Antarctic Ice Sheet. Such critical events would be impossible to absorb into economic growth. What if ‘progress’ allows us to ignore climate change for long enough that we reach these tipping points?

Even if these critical thresholds are never crossed, at some point in the future, let’s say 2100, we will be able to compare the projections for population growth we made in 2010 with the actual outcomes. Will we see the lost potential?

We’ll see it, more than likely, if we look beyond GDP and life expectancy. The coral reefs will probably have entirely disappeared; mass extinctions in many environments are likely; glaciers will have receded across the globe; and the inequality of impacts from climate change might well still mean rampant poverty in some developing countries. In this disparity between the dispassionate numbers and the very real negative outcomes for human well-being there lies the nugget of a solution.

If instead of GDP and bulk health outcomes we choose different metrics by which to judge our success (or at least to judge the impact of climate change), we may see the effects appear more clearly. Ecological richness, measures of inequality (at multiple scales) and the more intangible measurements of well-being (and even happiness) might better capture these changes, and I would argue strongly that policy should increasingly be guided by these metrics, rather than the prevailing inclination to focus on GDP. Even economists point out the inadequacy of the measure: let’s aim to do better.

Information and the Commons

Throughout recorded history, there have been many occasions when states and communities have exceeded the boundaries of the resources available to them, often with dramatic consequences. Exhaustion of the food supply and the resulting famine, for example, has at times been one of the great drivers of social change, and in some cases of civilisational collapse. At no time in history, however, has there been such pressure on resources as at present. The increasing population and demand for goods and services has put pressure on nearly all of our fundamental resources, from soil to forests to fresh water.

At the heart of this problem is the fact that, for much of the world, these basic resources are not owned by any individual or corporate entity; they are what economists call common-pool resources or ‘public goods’. Preserving such goods comes at a cost to any individual, whether a person or a state, while others can benefit from that preservation without paying anything. A rational individual therefore has little incentive to pay to preserve such resources.

A clear statement of the problem is laid out in a paper by the ecologist Garrett Hardin, published in Science in 1968, titled “The Tragedy of the Commons”. Hardin describes a simple scenario in which farmers graze their cattle upon a shared pasture. The pasture can only support a certain number of cows, but an individual herder knows that by adding more cows than their ‘fair share’, they can obtain a greater marginal benefit, while the penalty of overuse is shared amongst all of the farmers (so the cheating farmer gains more than they lose, even as the others suffer). The rational choice for every farmer is therefore to increase their own herd “without limit” – and thus all will suffer from total overgrazing and the possible collapse of the pasture ecosystem.

This is a well-known problem, and while some have pointed out that aspects of the theoretical model don’t always fit with reality, it has become an important complication for policy-makers to deal with at many levels. Politically, it often falls to a government to police these shared resources and punish those taking more than their fair share, so that the rational choice becomes to take only that share. Unsurprisingly, a government wielding this kind of power is often associated with a left-wing position, and can clash with prevailing neoliberal economic paradigms; as a result, such laws are often highly controversial.

Are there other ways to prevent resource exhaustion without government intervention? A key part of this question, which I feel is increasingly relevant today, is the availability of information to the decision-makers within the process. Over the long term, the farmer definitely wants to avoid the pasture being stripped bare, but in the scenario described above, there isn’t sufficient information to gauge how taking more than a fair share will influence the outcome. Let’s break this down.

– The farmer knows the value of adding one more cow to the herd; more broadly we could describe this as ‘the cheater’s benefit’. In cases where resources are stretched, this is likely to be the best-known factor – at least for the individual. If instead of cows we think of fresh water, each individual company or person could put some value on the benefit they’d get from an extra amount.

– How many farmers are involved? This is also likely well known, but it is an important value, as it is needed to estimate what a fair share is.

– How many cows can the pasture sustainably feed? This is called the ‘carrying capacity’, and here we’re beginning to find pieces of the puzzle that are not so well known. Perhaps for a pasture this could be clear, but what if instead we’re talking about a mineral like iron ore? How much is available, and how many applications can it support? Or what about fresh water? What amount of water can we use that doesn’t deplete the stock we have? These questions are certainly fraught, and research to find the answers has contributed greatly to the sustainability goals laid out by the UN.

– What is the penalty for overuse? This is shared amongst all the herders, but each individual must build it into the calculation of how much they stand to lose. This factor might increase non-linearly with overuse, and could worsen with time. The worst outcome is total depletion of the resource, but over what timescale does this arise? A total collapse of the pasture would mean no grazing at all for anyone’s cows – and, over a long enough horizon, the predicted losses become effectively unbounded.

– How do each of these factors affect one another, and how do they change over time? The long term cost-benefit analysis of the choice to take an unfair amount necessarily has to incorporate these changes.

Naturally, scientific research could help us fill in these gaps and allow each stakeholder to make a better-informed cost-benefit analysis, particularly over long timescales. Since we’re talking about the availability of information, it’s worth noting that making such research open access would be a huge help here.

Let’s say that the farmer now has the information I’ve described above. What choice should they make about how many cows to add to the field? A rational decision may still encourage some cheating, depending on how long a timeline they are interested in. Over a long timescale, any overuse of the resource sends the benefit to zero (and losses become theoretically infinite), but if they’re only interested in a short-term buck, then some cheating might still be rational.
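To make the timescale point concrete, here is a toy version of that cost-benefit sum in Python. Every number in it (herd size, benefit per cow, degradation rate) is invented purely for illustration, and it deliberately considers only one farmer’s extra cows; what the other farmers do is the subject of the next paragraph.

```python
# Toy cost-benefit sum for a single farmer, with entirely made-up numbers.
# An extra cow yields a private benefit every year, but pushes the herd past the
# carrying capacity, so the pasture's (shared) productivity degrades over time.

FAIR_SHARE = 5          # cows per farmer that the pasture can sustain
BENEFIT_PER_COW = 1.0   # yearly benefit from one fully fed cow
DEGRADATION = 0.03      # productivity lost per year for each cow over capacity

def cumulative_payoff(extra_cows, years):
    """Total benefit to one farmer over a given time horizon."""
    productivity = 1.0
    payoff = 0.0
    for _ in range(years):
        payoff += (FAIR_SHARE + extra_cows) * BENEFIT_PER_COW * productivity
        productivity = max(0.0, productivity - DEGRADATION * extra_cows)
    return payoff

for years in (3, 10, 40):
    gain = cumulative_payoff(1, years) - cumulative_payoff(0, years)
    verdict = "cheating pays" if gain > 0 else "fair play wins"
    print(f"{years:2d}-year horizon: net gain from one extra cow = {gain:+.1f} ({verdict})")
```

With these made-up numbers, the extra cow pays off over a three- or ten-year horizon, but over forty years the degraded pasture costs far more than the extra cow ever earned – exactly the short-term-buck trade-off described above.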

There’s a big component of the calculation the farmer is still missing, however: no herder knows what the others are going to do (in economic terms, the information is still ‘imperfect’). What if you knew whether every other farmer was planning on cheating, or is already cheating? How would this change your calculation?

From a simplistic viewpoint, one could argue that “because my neighbour is cheating, I should cheat too, to avoid falling behind them”. But how does this play out from a game-theory perspective? At some point, the penalty for cheating becomes intolerable for some or all participants, and fair play becomes the better option. If you know that everyone else is cheating, and you know that cheating yourself will only make you worse off (let’s say you know one more cow will mean that the pasture is stripped bare within a year), then fair play is the only good option left.

This scenario is perhaps too stylised to be helpful at this point; a real example will better illustrate the issues at stake. A stable sea level can be considered a ‘public good’ of this sort: nobody owns the oceans and ice caps, but the carbon emissions of all actors affect the rate at which the seas rise. Each individual country produces a certain quantity of emissions that to some degree affects that rate. Landlocked countries may not see the short-term penalty of this, and as such their rational action (at least in the context of sea-level change) may be to continue to emit high levels of carbon.

For a low-lying coastal state, the maths is very different, of course – a reduced-emissions regime is the only rational choice it can make, regardless of what others do, to prevent total inundation by rising seas. To make these decisions today, such countries rely on knowledge of what other states are planning or currently attempting; in the case of carbon emissions, these data are available and relatively reliable, but for other at-risk resources such information may be lacking, and overuse may proliferate as a result.

Laying out questions of sustainability in such stark, economic terms may be old news to some researchers, and may seem to some advocates to miss the human side of these environmental issues entirely. The point, for me, though, is to highlight the importance of better information within these individual systems (drawing somewhat on my interest in informational asymmetry and open access research) as a potential motivator to decrease resource overuse. This doesn’t require state intervention (other than perhaps to fund the research itself) and doesn’t necessarily call for a change in moral codes; it’s simply that, in the context of game theory, moving closer to perfect information can affect the rational choice made by a self-interested actor.

Improving the availability of information is a widely prevailing trend at present, too. Open access research is becoming more and more significant in academia, while Google has made vast quantities of data free to the public (and, more importantly, easily accessible). Moreover, some might suggest that blockchain-type technology could offer a trustworthy way to account for emissions and fair usage between multiple parties using a shared resource (i.e. a shared ledger of who is using a fair amount and who isn’t – that key final bit of information). In combination, these trends could help address the broader issue of the usage of common-pool resources. Using insights from the psychology of delayed gratification, we might also look for ways to weight our cost-benefit analyses more effectively in favour of future generations; doing so might give us a fighting chance of attaining sustainability goals that may otherwise be out of reach.

Can we measure happiness objectively?

If you’re looking to gauge your success in life, it’s no longer enough to compare yourself with the neighbours. “Keeping up with the Joneses” is no longer relevant in an era of big data and vast stores of information. Individuals can quantify their place in society at large via a range of metrics, particularly economic ones. On an individual or national scale, income or the size of the economy is a (fairly) easy number to quantify, and there isn’t much subjectivity about how much money something is worth. Income tax statements and national budgetary documents are produced with reassuring regularity and are simple to conceptualise, and, for all of its faults, GDP does offer insight into changes in the economy.

Financial measurements don’t define everything, of course. Reuters polled individuals in 23 countries in 2010, and found that only 4 in 10 see money as the chief measure of a person’s success. This jibes with generally accepted wisdom; most people would argue that there’s more to life than wealth or possessions. Moreover, while financial measurements are easier to quantify, any suggestion that they’re an ‘objective’ definition of success is false. Wealth is an entirely arbitrary measure of success; it just happens to be much easier to quantify, and thus to build economic models around maximising it in a society. As a result, policy-makers are treading on thinner ice when building social models to maximise more ethereal quantities like ‘well-being’ or ‘happiness’.

Happiness as a measure of success is certainly not a universally accepted paradigm. But as has been noted in a number of places, for younger generations who are, for example, priced out of the housing market and saddled with student debt, happiness might be a measure of success they can reasonably aim at when financial success is well out of reach. One might think that ‘Millennials’ would look to the sub-field of ‘happiness economics’ to help build a better society. But is it any more objective, or even distinct from wealth?

A leading source of data about happiness is the annual ‘World Happiness Report’ (http://worldhappiness.report/). I took a look at the results, as well as the inputs, to better understand why the trends in the data exist. In general, it suggests that wealth and happiness tend to be fairly well correlated, both within nations and when rich and poor countries are contrasted. “Great!”, you might think, “we can rely on the easy-to-quantify economic metrics to maximise happiness”. But why is this the case? The results come from a multi-national survey, where the input question is phrased in a very specific way. The website ‘Our World In Data’ has a succinct summary:

The underlying source of the happiness scores in the World Happiness Report is the Gallup World Poll – a set of nationally representative surveys undertaken in more than 160 countries in over 140 languages. The main life evaluation question asked in the poll is: “Please imagine a ladder, with steps numbered from 0 at the bottom to 10 at the top. The top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you. On which step of the ladder would you say you personally feel you stand at this time?” (Also known as the “Cantril Ladder”.)

So inherent in this measure of happiness is a question about where you think you stand in comparison with other people in society. I would argue that this is going to bias the results enormously. How can an individual hope to conceptualise the happiness of another without looking at their material possessions? Even if, hypothetically, they had a full picture of the exterior lives of everyone with whom they are asked to compare themselves, they would still be unable to see the actual degree of interior happiness. So this comparison is based on a subjective interpretation of observed proxies for happiness.

Is it any surprise, then, that richer countries are ‘happier’ by this metric? If the dominant paradigm of happiness has any materialist component, then this factor will dominate how an individual perceives their position on the ladder. How else could they place themselves on it? Fundamentally, this kind of analysis requires inequality – for the results to be meaningful, the happiness of one individual has to be ‘different’ from that of another.

To put this another way, if economic factors are a significant determinant of happiness, then the corollary is that people have historically become happier and happier as economic growth has progressed. Let’s compare an individual at the dawn of the industrial revolution with our modern lives. Since then, many aspects of life have changed: life expectancy has increased, education has become near-universal in many countries, and welfare systems have been introduced (these would be termed measures of ‘well-being’). But does any of this mean that we are bound to be happier than our historical counterparts? Presumably, if confronted with the same question (how happy are you on a ladder of 0 to 10), they wouldn’t be able to perceive the possible future lives we now live, and so wouldn’t naturally place themselves lower down the ladder.

What I’m getting at here is that the ladder analogy not only requires inequality to work, but is also necessarily subjective. As a scientist, I’m always interested in more objective, bias-free ways to measure the universe, so is there a more scientific approach? The neurobiology of emotion is highly complex, but we have made some headway over the past decades in understanding the chemical precursors of happiness. Neurochemicals such as dopamine, serotonin, endorphins, and oxytocin are sometimes described as ‘happiness chemicals’. To my knowledge, no large-scale, multi-national study has been carried out to assess how the neurochemistry associated with happiness varies across individuals living under differing standards of living. Given we still don’t fully know how these chemicals interact to produce ‘happiness’, such a large-scale survey would still be limited in its implications, but it would at least help address some key questions.

Of prime importance is the question of whether human biochemistry permits one individual to be objectively happier than another. This has troubled thinkers for hundreds of years, and has given rise to the amusingly named concept of the ‘hedonic treadmill’. Proponents of the treadmill argue that over the course of their lives, individuals tend to maintain a constant level of happiness, with fluctuations around that mean level as a result of specific life events. A biochemical investigation could test this hypothesis, as well as establish whether the mean level (or ‘hedonic set point’) differs between individuals: are there fundamental differences in neurochemistry that permit some people to experience more sustained happiness than others?

The hedonic treadmill idea is a potentially powerful argument against striving for progress at all costs. If every single individual experiences ups and downs, but the proportion of their lives they spend ‘happy’ (in comparison with their own set point) is essentially the same, then why should we change where we are as a society? We could, instead, pick other metrics by which to define our success in life.

However, if this isn’t the case, and some individuals can attain a sustained life of chemical happiness, then we could rebuild social models to maximise the potential happiness in a society at a given moment, and look for ‘Pareto improvements’ (win-win changes). We could ascertain the actual conditions that best correlate with happiness, without needing to ask individuals to compare themselves with others; we could genuinely test whether financial wealth has any bearing on happiness.

These kinds of questions are really in their nascent stage, since we simply don’t have a solid understanding of the neurochemistry of happiness. Moreover, given the inherent subjectiveness of any measure of ‘success’, they may be moot in the long run; happiness, after all, is just another quantity, and success is only defined as happiness if we choose to define it as such. I would be surprised, however, if these questions remain unexplored by scientists and policy-makers in the decades to come, as neuroscience and big data move forward. Think of the implications for sustainable development if material possessions are (or are not) shown to be a key determinant of chemical happiness.

Replace the PhD

It has been well documented in recent years that the number of PhDs awarded is increasing at a rapid rate, much faster than the number of senior academic faculty jobs. The progression from graduate student to post-doctoral researcher to tenured professor, so long established, is now an uphill battle through a bottleneck; the proportion of PhD graduates who can realistically aim for a faculty position is startlingly small, with ratios in some STEM fields of five or more PhDs for every professor.

Having an overabundance of PhDs within academic circles is not inherently a problem, depending on your perspective. Scientific research in most cases requires extensive legwork, whether laboratory analysis, field sampling, paper writing, or computer coding; PhDs provide a skilled and cheap labour force to get this work done. More students means more work can be done, allowing for more complex and involved research questions to be posed; but what does this bottom-heavy pyramid mean for the students themselves?

The quid pro quo of this low-wage labour for a graduate student is that a PhD degree should theoretically set them up for an academic career; the assumption being that they should be able to work as independent researchers at the end of their degree. But with these academic jobs at a premium in all but a few fields, there has been some recent shift to emphasise the broad skill-set a PhD develops as well; writing papers and presenting at conferences improves communication skills, while research is supposed to hone problem solving ability too. A PhD, we’re told, sets up a student well to work outside of academia too.

This point, however, ignores one of the most fundamental aspects of the modern economy: the division of labour. Large industrial economies have broken job roles down into ever smaller parts, allowing an ever wider range of increasingly niche products to be produced. Every role becomes more specialised, as individuals fill smaller and smaller parts of the overall economy.

The broad ‘skills’ of a graduating PhD student may thus provide them with a range of options outside of academia, but the expertise they have might well be entirely inapplicable to industry jobs. Some PhDs study topics that give them expertise comparable to an industry peer, but for others (including myself), a ‘jack of several trades’ skill set can’t necessarily compete. Could a PhD graduate compete with a peer who had spent an equivalent amount of time working in the industry in question? It seems unlikely.

The result, I’d suggest, is that PhDs leaving academia without having honed a specific skill would be forced to enter the workforce at entry level, behind classmates who started years before. Some may disagree, but I would point to the figures suggesting that a PhD does not provide a significant boost in lifetime earnings over a Master’s degree, and to the difficulty PhDs have in finding jobs, as support for the notion of underemployed doctorates.

There’s already a basic solution to this problem in the paragraph above – we should encourage PhD students to pick up and specialise in a given skill if they think they may leave academia. But I would argue for a more significant change in the structure of academic work (particularly scientific study): we should heed the example of the wider economy and build division of labour into our institutional models. In other words, instead of expecting a number of PhD students to take on a range of roles throughout their studies, we should consider employing a number of different specialists, each an expert in their specific role.

There are already examples of this: I was fortunate to work in a research group where laboratory work was significantly simplified by hard working lab technicians, and every student and professor knows the value of friendly admin assistants. Which other roles that PhD students normally take on could we conceive of instead being handed over to experts?

Paper writing and editing take up an inordinate amount of time for almost all scientists; it’s vitally important to communicate formal discoveries in a clear, concise fashion. Specialists could be brought in to work on the drafting and copy editing; we could consider a kind of journalism position for this kind of work. Similarly, conference presentations could be given by specialist presenters; many scientists I’ve spoken to feel that the ability to effectively communicate scientific findings at conferences is the sign of a great researcher, but what about those reclusive geniuses who are making radical conceptual advances but are constitutionally incapable of standing in front of a crowd? Why shouldn’t these researchers’ findings see the light of day in a presentation worthy of their achievement, given by an expert presenter?

Specific roles for student mentors, lecturers, or even professionals ensuring that good scientific conduct is followed (ethical or procedural) could also be considered. All of these individuals would be working toward a common goal, and it would exploit the efficiencies of economies of scale and division of labour. Arguably, with increasingly large and complex projects, such as the CERN particle accelerator or any number of space probes, the concept of the individual scholar is increasingly meaningless; science is a team effort, and we should treat it as such. There’s clearly an appetite for it, judging from the results of a poll recently run by Nature magazine.

Academia still needs senior, tenured professors, whether directing the team, joining the dots, analysing data for answers, or asking the right questions. Those people filling these roles need to understand the whole of the research project; they need an understanding of the “ins and outs” of each role to appreciate how they might affect the overall results. Perhaps a PhD is still a necessary precursor to fill these roles, but to me it doesn’t seem like an absolute requirement.

To those who might argue that such division of labour would introduce unpleasant hierarchies in science, I would counter with two points; first, that industry has achieved it successfully for years, and second that as a PhD student I had more respect and gratitude for the lab technicians and fieldwork specialists with whom I worked than almost any other collaborator.

Such a system would allow for a wider range of individuals to find their niche in STEM careers, while preventing an overabundance of PhD students. Young people interested in academic research could still find a role and hone a specific skill, giving them an edge for a future career outside of academia. Why not stop pretending that a PhD offers a guaranteed career in academia and revise how we treat the increasingly wide pyramid of academic labour?

Value in uselessness

What makes a place valuable? The concept of land value has varied throughout human history, depending on the needs of the civilisations living near a given place. Economic factors have encouraged humans to compete for farmland and other resource-rich regions since the earliest peoples inhabited the Fertile Crescent, but we sometimes forget that cultural values are also tied up in how we value land. Sites of religious significance, or those associated with myth and legend, have inherent value to certain people. The advent of neoclassical economics and colonialism has led to a tendency to value land numerically (and thus in terms of its resources), but for aboriginal peoples around the world the spiritual and social connection to the land is not well quantified by such metrics.

Today, national parks are designated to capture some of the cultural or scientific value of a landscape – at least where the potential for resource exploitation is not so tempting that a given state allows extraction instead.

But what about areas that have no value attached to them by humans, either economic or cultural? There are many areas that, as a result of a lack of infrastructure or impossible logistics, have limited or no economic value – think of the Tibetan plateau, the Empty Quarter of Saudi Arabia, or even the Antarctic desert. It’s fair to say that if the market value were pushed high enough and resources were found in such areas, they could still be exploited, but many such locations are simply lacking in any resource of interest.

Moreover, there are some locations where this lack of any economic value is combined with negligible historical or cultural value: ‘useless’ places where humans have never been able to gain a foothold or establish a civilisation. There are not many such places, since humans have broadly found their way everywhere, but a Google search for ‘least accessible mountains’ comes up with some desperately isolated peaks in the high parts of Tibet, or in Antarctica. I was fortunate to recently visit a place nearly as forgotten and empty as these, but situated in the heart of the United States: the Henry Mountains.

[Photo: Looking east over Canyonlands National Park, from the south end of the Henrys]

The mountains are situated in the southern part of Utah, surrounded by the more famous national parks of Zion, Arches, Canyonlands and Bryce. So much of southern Utah is protected as part of national parks or reserves that it is perhaps surprising to find these mountains are not designated as part of any protected area. At the same time, there is essentially no development of the area for economic purposes; meagre desert grasses support marginal grazing and a single paved road runs through them, but otherwise they are bereft of civilisation. The geologist GK Gilbert, who was the first white scientist to fully describe the range for the USGS in 1877, wrote that ‘No one but a geologist will ever profitably seek out the Henry Mountains’, concluding that the economic value of the land was negligible.

And thus the five peaks of the range have remained more or less untouched. On a recent visit, we hiked for five hours in the southern part of the range, and not a single car drove past, nor did we encounter another person. This was during peak tourist season in Utah, but the lack of infrastructure and the presence of more prestigious wilderness areas nearby seemingly preclude any tourism here. The Henry Mountains lack the singular landforms expressed in the aforementioned national parks, which disqualifies them from national park designation, and, as Gilbert described, their economic value is near nil. Prehistoric peoples lived in the regions nearby, but no significant archaeological remains have yet been found on the mountains themselves. The Navajo people refer to the range as Dził Bizhiʼ Ádiní, literally “mountain whose name is missing” [1].

So it seems these mountains are nearly totally useless by the metrics with which we usually judge land value. Even I wouldn’t have been likely to visit if Gilbert hadn’t made them sound so tempting to geologists. But standing atop Mt Ellsworth, in the southern part of the range, not only were the vistas of the surrounding wild lands as stunning as any you could find, but the sense of isolation really drove home the uniqueness of the experience.

And that’s what I find most interesting. The drive to find ever more interesting and diverse experiences is one of the defining features of the modern era, and it has driven people to travel all over the world to seek out far-flung locations. Our experiences define our lives, so we’re told, and I personally consider the diversity of experience to be an important factor in how I judge success in my own life. The national parks of Utah are rightly fêted for the wonder they instil in a visitor, but with the number of park visitors skyrocketing [2], one might be hard-pressed to experience them alone. Perhaps it’s selfish to seek out experiences that are unique to oneself; but if we were to treat experience as a commodity, then scarcity would no doubt increase its value.

Given that most of us, as tourists and visitors, don’t own the land we stand upon, but merely the experience of standing there, perhaps we should consider the value inherent in useless places. Assigning value to places we might otherwise deem worthless, on the basis of our own subjective experience, is perhaps an ironic inversion of postmodern thinking, but in an era that seems increasingly defined by relativism (‘alternative facts’) it was a unique joy for me to ignore any objective truth about a place and revel in the isolation and wilderness, which offered my own subjective paradise.

[Photo: The author atop Mt Ellsworth – photo credit Laurence Pryer]

[1] Linford, L. Navajo Places: History, Legend, Landscape. University of Utah Press, 2000.

[2] https://www.nytimes.com/2017/09/27/us/national-parks-overcrowding.html?action=click&module=Featured&pgtype=Homepage

Desert and Geomorphology in Utah

America is a country whose surface I’ve only scratched, particularly in terms of its extensive network of national parks. Last week I set out to remedy some of that, travelling to southern Utah with a friend. With five national parks and numerous national monuments and preserves, the proportion of land protected in some way in Utah is amongst the highest of any US state. As a visitor, then, it makes for a compact(ish) trip to visit a number of these sightseeing meccas, and while they’re all ostensibly within the same arid, near-desert environment, the forms and shapes of the different landscapes are highly diverse, driven by differing erosional processes.

The chief purpose of our trip wasn’t just to visit the national parks, though. Hidden amidst them is a small mountain range – the Henry Mountains. 140 years ago a USGS scientist, GK Gilbert, produced a report on the geology of this range, the last mountain range in the lower 48 states to be mapped. The report itself is not what one would expect from a modern scientific report; it’s qualitative rather than quantitative, and in many places nearly poetic in style. It also contains a concise description of the field I studied for my PhD – geomorphology. Many, if not most, of the ideas that we are still working on today in the field are pretty well explained by Gilbert in his report, so the trip was in some respects an attempt to understand a little of the man who defined my field by visiting the mountains he studied.

The trip was a productive source of a number of ideas for writing, which I hope to work on over the coming weeks. As a brief summary, below are a number of images from this stunning part of the world.

[Photo: Bryce Canyon]
[Photo: Grand Staircase / Escalante National Monument]
[Photo: Mt Hillers, from the south of the Henry Mountains]
[Photo: Delicate Arch, Arches National Park]
[Photo: The night sky over Canyonlands National Park]
[Photo: The road through Canyonlands]

 

 

Are research papers outmoded?

Rankings, metrics, scores: numerical methods are so widely used to judge and analyse different systems that we often forget there are alternatives. A number is objective, and plugs nicely into algorithms to allow better assessments of a whole range of human interactions – and science is no different. Whether by h-index, m-index, citation count, or even ResearchGate score, scientists are often gauged on their performance through these numerical metrics. The relative merits and disadvantages of these scores have been widely described, but one relatively under-discussed aspect is the very basis on which they’re calculated – which, for the most part, is the research papers the scientists have themselves written.

The professor and philosopher Marshall McLuhan famously argued that ‘the medium is the message’ – that the medium by which information is communicated affects the information itself. For example, the introduction of the telegraph didn’t just allow people to send messages over long distances – it allowed news to be reported immediately across continents, irrevocably changing the type of news that was reported; in other words, it changed the society in which it was used. Technology is rarely neutral in its social effect. A simpler example: the electric lightbulb allowed people to see and work at night, giving society the chance to change working hours.

Scientific journals and the papers therein have existed in some form or other since the mid-17th century. Many aspects have changed, including the important advent of peer review in the early 19th century, but in their simplest terms they remain the same: written text, published, and therefore unchanged after going into print. Rarely do we ask what this medium means for the research that is published and the broader communication of science. In particular, if scientists are judged on what they’ve authored, what are the implications of this medium for our metrics?

A published paper is by its nature static, unchanging, and part of the historical record. Contrast a paper with a lecture or conference presentation: a public talk is seen once by an audience, and unless it is recorded, it cannot be judged again or referred back to. A paper, on the other hand, can be referred to and cited from the point it is published. These aspects are essential to modern science; we must have a record of prior work in order to justify the assumptions within novel studies.

However, once published, a research paper is left, unaltered. This stands in contrast to science as a whole; no theory should go unquestioned, and new hypotheses should redress the issues with prior studies. Is it fair, then, to judge a scientist upon older papers that may have been disproved – even by the researcher themselves? Given the tendency for older papers to be superseded, how are we to factor this in when assessing a researcher’s oeuvre? If a journalist pens a series of articles on an event that is still ongoing, would it be fair to assess them on pieces published before all the facts emerged? The parallel to science is clear, with the important caveat that scientific research is always evolving.

The tradition of published research predates Karl Popper, the philosopher of science, and I would argue that some aspects of the medium are contradictory to the way he argued science should be conducted. Popper argued in the first half of the 20th century that for a statement to be scientific, it must be falsifiable. Providing definitive proof of a statement is not logically possible, due to the problem of induction, and as such science should offer only falsifiable hypotheses that represent the best current understanding of a problem.

Other thinkers have added to this notion. I particularly note Imre Lakatos’ contribution. In his paper Falsification and the Methodology of Scientific Research Programs he suggests that

“Intellectual honesty does not consist in trying to entrench, or establish one’s position by proving (or ‘probabilifying’) it – intellectual honesty consists rather in specifying precisely the conditions under which one is willing to give up one’s position.”

If one is judged by the research one has published – and in particular the number of citations that work receives – there is little incentive to state the precise conditions under which you’d be prepared to admit you’re wrong. In fact, it encourages the opposite behaviour, since a more entrenched idea will likely stick around longer and accumulate more citations. The essence of science (at least since logical positivism was largely discredited over the last 100 years) is to embrace being wrong in the search for a deeper understanding of the world at large, but this is certainly not mirrored in our publication model.

So how can we address this? Evolving scientific understanding could benefit from evolving accounts of science, moderated and curated by researchers. The internet provides us with a platform for continually updating our understanding, and in a fundamentally collaborative way. Wikipedia is a clear example of just such a platform, where the state of the art can be continually adjusted and revised. A hypothetical ‘unified compendium of knowledge’ could operate and evolve much like the code base of a software project: changing in response to new discoveries, but with archived versions showing the evolution of ideas.

“But wait,” I hear many scientists interject, “what about peer review? How can we trust content that isn’t reviewed?” In response, I would again turn to the philosophers of science. Why should a one-time peer review guarantee the long-term validity of a study? Contrary evidence could arise at any later point (and this indeed is the problem of induction), and I would argue that instead of a one-time review, we should consider all work critically at all points, whether before or after peer-review. This attitude would naturally lend itself to a perpetually updated repository of knowledge.

One can imagine, of course, that this kind of project could rapidly stagnate; if researchers disagree, they could demonstrate the false nature of each other’s ideas without significantly contributing to the knowledge base. Here, Lakatos can offer some guidance. He suggests we should only consider a theory falsified if an alternative theory is provided that can both explain the existing observations and “predict novel facts” – i.e., one that improves on the prior theory.

This centralised model would (in my eyes at least) increase collaborative work, and since new theories would have to explain the existing observations, there is an in-built mechanism to encourage testing of the reproducibility of findings. Continual review and improvement would also be inherent. Open access to data and methods would be essential to this kind of model, and authors and contributors would need to be trained to state the conditions under which their findings would be falsified.

Metrics to gauge the contribution of individuals to this kind of project need not depend on how quickly a given field evolves, but they would likely look significantly different from those we currently work with. However, the amount of data such a system would generate would provide ample opportunity to assess researchers in a fundamentally new way.

It remains to be seen whether such a model would even be feasible. Science changes slowly, and the objectives that funding agencies look for are not necessarily aligned with such a model; universities may not appreciate researchers sharing insight with competing institutions, much as commercial entities guard against corporate espionage. But if we genuinely value the advancement of science over local politicking, these kinds of concerns should not prevent a shift.

Open access & industry-funded research

Increasing public access to scientific research has become an important target in many democracies over recent years. Both researchers funded by government or taxpayer-sourced money and the taxpayers themselves have advocated for more results to be published where members of the public can access and use them free of charge. In the UK, for example, publications from research funded by the Natural Environment Research Council (NERC) must be ‘open access’ – freely available to the public (1). Elsewhere, the German Helmholtz Centres have recently announced they will cancel their subscriptions to journals from the publisher Elsevier, in part due to the lack of open access options (2).

These efforts are having a clear effect on academic science; open access options are increasingly part of how researchers decide where to publish their findings. However, even if every scientist working at a university with government funding published all of their papers in open access journals, this wouldn’t give the public access to all of the research and development taking place; in fact, it wouldn’t even cover a majority of it.

It should come as no surprise at all that private business invests significantly in science, but I was personally shocked when I found out quite how much of it is privately funded. According to the OECD, an average of 60% of funding in the developed world comes from for-profit business (3), although this varies widely between countries (as low as 30% in Greece, but greater than 80% in Israel).

Does this privately funded research get published? I found it difficult to find decent statistics on the proportions, so I took a small sample myself. I looked at the 87 papers published in the open access journal PLOS ONE on June 30th 2017, and checked the statement of conflicting interests for each to get a sense of which articles might have been funded by private institutions. In general, this is where authors are obliged to list any conflicting financial interests, which broadly includes funding from commercial sources.

Of those 87 articles, only 8 declared any conflicting interests (the data are available in a Google document (4)). Naturally, it may well be that the journal and sample I used were not representative, but the result matches my experience working as an editor at Nature Geoscience: published science is dominated by research that isn’t funded by private institutions, even though they provide the bulk of the financing.
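
As a rough sanity check on that figure, a back-of-the-envelope calculation (a sketch only, assuming these 87 papers behave like a simple random sample of published work, which they almost certainly don’t) puts the proportion with declared commercial interests at around 9%, with a wide margin of error:

```python
import math

n, k = 87, 8                      # articles sampled, articles declaring conflicting interests
p = k / n                         # observed proportion
se = math.sqrt(p * (1 - p) / n)   # standard error under a simple binomial model
lo, hi = p - 1.96 * se, p + 1.96 * se   # rough 95% confidence interval

print(f"{p:.1%} declared interests (rough 95% CI: {lo:.1%} to {hi:.1%})")
# -> 9.2% declared interests (rough 95% CI: 3.1% to 15.3%)
```

Even the upper end of that crude interval is nowhere near the share of research that industry funds.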

“Of course,” an intelligent reader would say, “the incentives are different in industry.” True enough; publications can often seem like a goal in themselves for researchers, but commercial enterprise has other objectives, not least turning a profit. Sending your rivals a project for peer review would be a disastrous way of handing your secrets to competitors; holding onto data preserves the edge that a business works hard to create. Moreover, review and publishing take time, further eating into tight margins.

The possibility that more than half of scientific endeavour is never published is disheartening for those who believe that science, as a human construct, is a collaborative system. Are there any arguments that could persuade corporate interests to release their information into the public sphere?

It seems clear that only the most philanthropic of corporations would want to make their newest findings freely available, and we shouldn’t expect this to change. However, old research or redundant data that no longer impacts the corporate bottom line could still benefit other researchers looking for alternative ideas or datasets. Consider, by analogy, a pharmaceutical company with leftover stock of drugs that are past their patent expiry and thus of lower commercial value. These drugs could be donated to countries unable to afford brand new, state-of-the-art treatments, which could certainly aid the company’s image in the public consciousness. In such a scheme, data and results would not need to be formally written up, but even under a ‘buyer-beware’ system, useful information might be gleaned.

Image-conscious companies would naturally only make up a fraction of all industry R&D. A purely profit-motivated organisation would need other incentives, and here government could step in. Government subsidy is an important part of many industries, and we might envision a quid pro quo: in exchange for subsidy assistance, some proportion of the R&D conducted by such firms would be made publicly available.

Only a few days before this post (at the end of August 2017), the UK government announced that £140 million would be provided in subsidy to encourage collaboration between academia and industry in the life sciences (5). If results from this collaboration are not subject to the same open access requirements as other UK government-funded research, it would seem extremely hypocritical.

The benefits of opening industry data up to the public are not limited to scientific research. If governments are interested in holding corporate entities accountable for their actions, R&D output would be a useful place to look. The example making the rounds in the science-environment media at the moment is that Exxon Mobil researchers were well aware of the risks of climate change, but executives didn’t communicate the potential threat (6). We know now, too, that tobacco companies engaged in similar behaviour decades earlier.

In a similar fashion, increasing pressure is now being placed upon pharmaceutical companies to publish the results of clinical trials (e.g. the AllTrials campaign (7)). Naturally, pharmaceutical companies have lobbied against these changes, and implementing changes to the way industry shares research findings on a broader scale would be just as difficult, if not impossible. Government oversight and corporate accountability are not strongly compatible with the current laissez-faire economic models. However, the scientists working at the bench aren’t so different between academia and industry; in both settings researchers benefit from access to prior work, and so perhaps it is incumbent upon the researchers themselves to push for this kind of data sharing.

References
(1) http://www.nerc.ac.uk/research/funded/outputs/

(2) https://www.helmholtz.de/en/current_topics/press_releases/artikel/artikeldetail/helmholtz_zentren_kuendigen_die_vertraege_mit_elsevier/

(3) http://www.oecd-ilibrary.org/docserver/download/9215031ec027.pdf?expires=1504219877&id=id&accname=guest&checksum=0ED3BF8C84C0698A673E96723F21100A

(4) https://docs.google.com/spreadsheets/d/1SFXJIdvuw3wE4Rp0-YMpjk7jNxMj2EC1xnwY9IUaTO8/edit?usp=sharing

(5) http://www.bbc.com/news/science-environment-41101892

(6) http://iopscience.iop.org/article/10.1088/1748-9326/aa815f

(7) http://www.alltrials.net/

Information asymmetry in science & publishing

Suppose you want to buy a used car, but your knowledge of car maintenance is limited and you need a car quickly. The dealership you visit has a range of cars, some better than others. Since you find it difficult to tell the good cars from the bad, you might be inclined to lower your offer on any of them, to avoid overpaying for a bad car (a ‘lemon’) by mistake. If the dealer knows you can’t tell the difference and so won’t pay enough for a higher-quality car, they then have a stronger motivation to try to shift one of the lower-quality motors. The result is that the better cars remain unsold. This effect stems solely from the difference in information about the product between buyer and seller.

This thought experiment is based on George Akerlof’s famous 1970 paper, ‘The Market for Lemons’ (1), in which he explored the concept and effects of asymmetric information in economics. In transactions where one party has more information than the other, adverse effects can occur; economists refer to this as an imperfect market, and often both buyer and seller lose out.
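
A toy simulation makes the unravelling mechanism concrete. The numbers below are illustrative assumptions, not taken from the paper: car quality is drawn uniformly between 0 and 1, a seller will only part with a car for at least its quality, and a buyer values any car at 1.5 times its quality but can only observe the average quality of what remains on offer.

```python
import random

random.seed(0)
qualities = [random.random() for _ in range(10_000)]  # car quality, uniform on [0, 1]
BUYER_PREMIUM = 1.5   # a buyer values a car at 1.5x its (unobservable) quality

# Buyers offer what the average car still on the market is worth to them;
# sellers then withdraw any car worth more to them than that offer.
on_market = qualities
for round_number in range(10):
    offer = BUYER_PREMIUM * sum(on_market) / len(on_market)
    on_market = [q for q in on_market if q <= offer]
    print(f"round {round_number}: offer = {offer:.3f}, cars still for sale = {len(on_market)}")
    if not on_market:   # the market has collapsed entirely
        break
```

Because the offer only ever reflects average quality, the best cars are withdrawn first, dragging the average – and hence the next offer – down further; within a few rounds only the worst cars remain, even though every single trade would have benefited both parties.
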
This idea has gained traction among economists, who have linked problems in (for example) social mobility or Obamacare to imperfect information. In the sciences, however, we rarely think in such stark, transactional terms. This may be to our detriment, since in the process of publishing science we are likely to encounter several points where different parties hold varying levels of information about a given study, which could lead to poorer communication of facts and data.

The audience for a scientific study has a range of information at their fingertips – author affiliations and potential conflicting financial interests, for example – that enables a judgement to be made about the content of the work. This kind of meta-information provides a useful link between author and reader that can help build trust in the work at hand, but there are other ‘meta’ aspects of research that are trickier to communicate: why have the authors chosen to write up this specific set of data, rather than any other findings? What, if anything, changed during the review process? These are points at which an asymmetry in the information available about a scientific study could limit the trust a reader can place in the findings.

Competition could encourage the omission of details where doing so is of financial benefit to an individual or a corporation. If a reader cannot tell whether a study represents just the best results, then their trust in a research project will be limited; research shows that in many cases drug trials go unpublished or unfinished (2), so how seriously should we take those that are published? Without full information about unpublished studies, the value of published studies becomes questionable across the whole field in question.

Who knew what, and when?

In a general sense, we can think about the asymmetry of knowledge between different parties involved in publishing; what do authors know that the readers (and to a different extent editors) don’t?

We can assume that the author tends to have a greater grasp of the information involved in a study than the reader; they make judgements about which data to include, and which aspects of their research should be written up in full. Few scientists could claim they’d published every train of thought that had led them to where they currently are in their careers. In most cases, the selection of data comes down to which results are most interesting, which offer the best chance of success in a high-tier journal, or, in the simplest case, which data pertain to the hypothesis in question (why mention data you don’t believe bear on the question you’re asking?).

Even if the choices of experimental design and of which data to include in published papers are generally made in good faith, they can be difficult to explain to readers. The controversy surrounding the hacked emails of the Climatic Research Unit at UEA highlights how the disparity between formal and informal communication in science can be misconstrued, in that case at great cost to public trust in science (3). Where readers have a sense that a backstory may have been omitted, the value they place on a given study may decline, regardless of the actual history of the article.

A culture that prioritises the publication of interesting research in higher-tier journals leaves less room for academics to give weight to the work that lies between these headline topics. Perhaps we should give scientists credit for keeping a public research diary of sorts, which could serve as an open archive of the direction in which they are working. This may be a harder sell where competition between research groups is a driving factor, but the flip side is that it could actually foster a more cooperative research environment.

An even slower publication process

During submission, review and publication of papers, there are a number of points at which an asymmetry of information can arise. The editor naturally asks for expert opinion on the quality of an article through peer review – much as an antiques salesperson would seek a valuation of a supposedly priceless heirloom to avoid fraud. In this way, the editor seeks to increase their information about the article and thus value it more appropriately; but where the referee isn’t given sufficient evidence to make these judgements, the editor can be left blind. Here we see the value in providing all data to allow a complete review.

However, there are other, more opaque parts of the publication process that could limit what each party knows. An editor must make a subjective judgement about whether a submission is suitable for their audience; if readers or authors are unaware of the rationale for these decisions, it may colour their impression of the articles that are eventually published.

Of course, publicising such details runs counter to the business models of many journals, and it would inevitably make an already slow process slower. Should we advocate for a fully open publication process, at the expense of an even longer turnaround time for research papers?

Expediency or openness?

Where information is not evident to readers, it tends to be the result of processes designed to keep the wheels of scientific advancement turning quickly; requiring a reader to absorb all of the meta-information about a study (its history, outliers, rationale, even the train of thought behind it) would markedly increase the time needed to understand a research field.

Should we then be weighing expediency against trust in science? In the present research environment, with questions being asked about the trust placed in the scientific endeavour, this is a valid question. It may even be that such a slowdown would not materialise; with fewer repeated trips down blind avenues of study, and the potential for greater communication and cooperation, advancement could still occur swiftly, accompanied by a greater sense of trust from the readers and governments that fund our studies.

References
(1) http://www.econ.yale.edu/~dirkb/teach/pdf/akerlof/themarketforlemons.pdf

(2) http://pediatrics.aappublications.org/content/early/2016/08/02/peds.2016-0223?sso=1&sso_redirect_count=1&nfstatus=401&nftoken=00000000-0000-0000-0000-000000000000&nfstatusdescription=ERROR%3a+No+local+token

(3) http://climatecommunication.yale.edu/publications/climategate-public-opinion-and-the-loss-of-trust/