Are research papers outmoded?

Rankings, metrics, scores: numerical methods are so widely used at the moment to judge and analyse different systems that we often forget there are alternatives. A number appears objective, and plugs nicely into algorithms to allow assessments of a whole range of human interactions – and science is no different. Whether the h-index, m-index, citation score, or even your ResearchGate score, scientists are often gauged on their performance by these numerical metrics. The relative merits and disadvantages of these scores have been widely described, but one relatively under-discussed aspect is the very basis of how they’re calculated – which, for the most part, is the research papers the scientists have themselves written.
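To make concrete how mechanical these metrics are, here is a minimal sketch of the h-index calculation (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers:
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4
```

Note that nothing in this calculation knows whether a paper has since been superseded or disproved – a point that matters for the argument below.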

The professor and philosopher Marshall McLuhan famously argued that ‘the medium is the message’ – that the medium by which information is communicated affects the information itself. For example, the introduction of the telegraph didn’t just allow people to send messages over long distances – it allowed news to be reported immediately across continents, irrevocably changing the type of news that was reported and, with it, the society in which it was used. In other words, technology is rarely neutral in its social effect. A simpler example: the electric lightbulb allowed people to see and work at night – giving society the chance to change working hours.

Scientific journals and the papers therein have existed in some form or other since the mid-17th century. Many aspects have changed, including the important advent of peer review in the early 19th century, but in their simplest terms they remain the same: written text, published, and therefore unchanged after they go into print. Rarely do we ask what this medium means for the research that is published and the broader communication of science. In particular, if scientists are judged on what they’ve authored, what are the implications of this medium for our metrics?

A published paper is by its nature static, unchanging, and part of the historical record. Contrast a paper with a lecture or conference presentation, for example; a public talk is seen once by an audience, and unless it is recorded, it cannot be judged again or referred back to. A paper, on the other hand, can be referred to and cited from the point it is published. These aspects are essential to modern science; we must have a record of prior work in order to justify the assumptions within novel studies.

However, once published, a research paper is left unaltered. This stands in contrast to science as a whole; no theory should go unquestioned, and new hypotheses should redress the issues with prior studies. Is it fair, then, to judge a scientist on older papers that may have been disproved – even by the researcher themselves? Given the tendency for older papers to be superseded, how are we to factor this in when assessing a researcher’s oeuvre? If a journalist pens a series of articles on an event that is still ongoing, would it be fair to assess them on pieces published before all the facts emerged? The parallel to science is clear, with the important caveat that scientific research is always evolving.

The tradition of published research predates Karl Popper, the philosopher of science, and I would argue that some aspects of the medium are contradictory to the way he argued science should be conducted. Popper argued in the first half of the 20th century that for a statement to be scientific, it must be falsifiable. Providing definitive proof of a statement is not logically possible, due to the problem of induction, and as such science should offer only falsifiable hypotheses that represent the best current understanding of a problem.

Other thinkers have added to this notion; I would particularly note Imre Lakatos’ contribution. In his paper ‘Falsification and the Methodology of Scientific Research Programmes’ he suggests that

“Intellectual honesty does not consist in trying to entrench, or establish one’s position by proving (or ‘probabilifying’) it – intellectual honesty consists rather in specifying precisely the conditions under which one is willing to give up one’s position.”

If one is judged by the research one has published – and in particular the number of citations that work receives – there is little incentive to state the precise conditions under which one would be prepared to admit being wrong. In fact, it encourages the opposite behaviour, since a more entrenched idea will likely stick around longer and accumulate more citations. The essence of science (at least since logical positivism was largely discredited over the last 100 years) is to embrace being wrong in the search for a deeper understanding of the world at large, but this is certainly not mirrored in our publication model.

So how can we address this? Evolving scientific understanding could benefit from evolving accounts of science, moderated and curated by researchers. The internet provides us with a platform for continually updating our understanding, and in a fundamentally collaborative way. Wikipedia is a clear example of just such a platform, where the state of the art can be continually adjusted and revised. A hypothetical ‘unified compendium of knowledge’ could operate and evolve much like the code base of a tech system: changing in response to new discoveries, but with archived versions showing the evolution of ideas.

“But wait,” I hear many scientists interject, “what about peer review? How can we trust content that isn’t reviewed?” In response, I would again turn to the philosophers of science. Why should a one-time peer review guarantee the long-term validity of a study? Contrary evidence could arise at any later point (and this indeed is the problem of induction), and I would argue that instead of a one-time review, we should consider all work critically at all points, whether before or after peer-review. This attitude would naturally lend itself to a perpetually updated repository of knowledge.

One can imagine, of course, that this kind of project could rapidly stagnate; if researchers disagree, they could demonstrate the false nature of each others’ ideas without significantly contributing to the knowledge base. Here, Lakatos can offer some guidance. He suggests we should only consider a theory falsified if an alternative theory is provided that can both explain the existing observations and “predict novel facts” – i.e., improve on the prior theory.

This centralised model would (in my eyes at least) increase collaborative work, and since new theories would have to explain the existing observations, there is an in-built mechanism to encourage testing of the reproducibility of findings. Continual review and improvement would also be inherent. Open access to data and methods would be necessary for this kind of model, and it would be necessary to train authors and contributors to state the conditions under which their findings would be falsified.

Metrics to gauge the contribution of individuals to this kind of project would not be dependent on how fast a given field evolves, but they would likely look significantly different from those we currently work with. However, the amount of data that would be generated would provide ample opportunity to assess researchers in a fundamentally different way.

It remains to be seen whether such a model would even be achievable. Science changes slowly, and the objectives that funding agencies look for are not necessarily aligned with such openness; universities may not appreciate researchers sharing insight with competing institutions, much as commercial entities try to avoid corporate espionage. But if we genuinely value the advancement of science over local politicking, these kinds of concerns should not prevent a shift.

Open Access & Industry-funded Research

Increasing public access to scientific research has become an important target in many democracies over recent years. Both the researchers funded by government or taxpayer-sourced money and the taxpayers themselves have advocated for more of the results to be published where members of the public can access and use them free of charge. In the UK, for example, publications from research funded by the Natural Environment Research Council (NERC) must be ‘open access’ – freely available to the public (1). Elsewhere, the German Helmholtz Centres have recently announced they will cancel their subscriptions to journals from the publisher Elsevier, in part due to a lack of open access options (2).

These efforts are certainly having a clear effect on academic science; discussions of open access options are increasingly incorporated into how researchers decide where to publish their findings. However, even if every scientist working at a university with government funding published all of their papers with open access journals, this wouldn’t give the public access to all of the research and development taking place; in fact, it wouldn’t even be a majority.

It should come as no surprise at all that private business invests significantly in science, but I was personally shocked when I found out quite how much of it is privately funded. According to the OECD, an average of 60% of funding in the developed world comes from for-profit business (3), although this varies widely between countries (as low as 30% in Greece, but greater than 80% in Israel).

Does this privately funded research get published? I found it difficult to get decent statistics for the proportions, so I took a sample of data myself to test. I looked at the 87 papers published in the open access journal PLOS ONE on June 30th 2017, and read the competing interests statements to get a sense of which articles might have been funded by private institutions. This is where authors are obliged to list any competing financial interests, which broadly includes funding from commercial sources.

Of those 87 articles, only 8 declared any competing interests (the data are available in a Google document (4)). Naturally, it may well be the case that the journal and sample I used were not representative, but the result matches my experience working as an editor at Nature Geoscience: published science is dominated by research that isn’t funded by private institutions, even though those institutions provide the bulk of the financing.
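To get a rough sense of how much uncertainty a sample this small carries, one can put a confidence interval on the 8-of-87 proportion. This Wilson score sketch is my own illustration, not part of the original tally:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(8, 87)
print(f"8/87 = {8/87:.1%}, 95% CI roughly {lo:.1%} to {hi:.1%}")
# The interval spans roughly 5% to 17% - wide, but still well short of
# the ~60% share of funding that the OECD attributes to industry.
```

Even at the top of the interval, privately funded work is far under-represented relative to its share of funding.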

“Of course”, an intelligent reader would say, “the incentives are different in industry.” True enough; publications can often seem like a goal in themselves for researchers, but commercial enterprise has other objectives, not least of which is turning a profit. Sending your rivals a project for peer review would be a disastrous way of handing your secrets to competitors; holding onto data preserves the edge that a business works hard to create. Moreover, review and publishing take time, further eating into tight margins.

The possibility that more than half of scientific endeavour is never published does seem disheartening for those who believe that science as a human construct is a collaborative system. Are there any arguments that could persuade corporate interests to release their information into the public sphere?

It seems clear that only the most philanthropic of corporations would want to make their new findings freely available, and we shouldn’t expect this to change. However, old research or redundant data that no longer affects corporate bottom lines could still be a benefit to other researchers looking for alternative ideas or datasets. Consider, as a model, a pharmaceutical company with leftover stock of drugs whose patents have expired, and which are therefore lower in commercial value. These drugs can be donated to countries unable to afford the newest state-of-the-art treatments, which can certainly aid the corporate image in the public consciousness. In such a scheme, data and results would not need to be formally written up, but even under a ‘buyer-beware’ system, useful information might be gleaned.

Image-conscious companies would naturally only make up a fraction of all industry R&D. A purely profit-motivated organisation would need other incentives, but here government could step in. Government subsidy is an important part of many industries, and we might envision that a quid pro quo for subsidy assistance would be to expect that some proportion of the R&D conducted by such firms would be made publicly available.

Only a few days prior to this post (at the end of August 2017) the UK government announced that £140 million would be provided as subsidy to encourage collaboration between academia and industry in the life sciences (5). If results from this collaboration are not subject to the same requirements as other UK government funded research, it would seem extremely hypocritical.

The benefits of offering industry data up to the public are not limited to scientific research. If governments are interested in holding corporate entities accountable for their actions, the R&D research would be a useful place to check. The example making the rounds in the science-environment media at the moment is that Exxon Mobil researchers were well aware of the risks of climate change, but executives didn’t communicate the potential threat (6). We know now, too, that tobacco companies engaged in similar behaviour decades earlier.

In a similar fashion, increasing pressure is now placed upon pharmaceutical companies to publish the results of clinical trials (e.g. the AllTrials organisation (7)). Naturally, pharmaceutical companies have lobbied against these changes. It almost goes without saying that implementing changes to the way industry shares research findings on a broader scale would be just as difficult, if not impossible. Government oversight and corporate accountability are not strongly compatible with the current laissez-faire economic models. However, the scientists working at the bench aren’t so different between academia and industry; in both areas researchers benefit from access to prior work, and so perhaps it is incumbent upon the researchers themselves to push for this kind of data sharing.

References
(1) http://www.nerc.ac.uk/research/funded/outputs/

(2) https://www.helmholtz.de/en/current_topics/press_releases/artikel/artikeldetail/helmholtz_zentren_kuendigen_die_vertraege_mit_elsevier/

(3) http://www.oecd-ilibrary.org/docserver/download/9215031ec027.pdf?expires=1504219877&id=id&accname=guest&checksum=0ED3BF8C84C0698A673E96723F21100A

(4) https://docs.google.com/spreadsheets/d/1SFXJIdvuw3wE4Rp0-YMpjk7jNxMj2EC1xnwY9IUaTO8/edit?usp=sharing

(5) http://www.bbc.com/news/science-environment-41101892

(6) http://iopscience.iop.org/article/10.1088/1748-9326/aa815f

(7) http://www.alltrials.net/

Information asymmetry in science & publishing

Suppose you want to buy a used car, but your knowledge of car maintenance is limited, and you need a car quickly. The dealership you visit has a range of cars, some better than others. Since you find it difficult to tell the difference between the good and bad cars, you might be inclined to lower your offer for any of the cars, to avoid overpaying for a bad car (a ‘lemon’) by mistake. If the dealer knows you can’t tell the difference and won’t pay enough for a higher quality car, they then have a stronger motivation to try and shift one of the lower quality motors. This means that the better cars remain unsold – an effect that stems solely from the difference in information about the product between buyer and seller.

This thought experiment is based on George Akerlof’s famous 1970 paper, ‘The Market for Lemons’ (1), in which he explored the concept and effects of asymmetric information in economics. In transactions where one party has different information from the other, adverse effects can occur; economists refer to this as an imperfect market, in which often both buyer and seller lose out.
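The downward spiral in the lemons market can be sketched with a toy simulation. This is an illustration of the mechanism with invented numbers, not Akerlof’s formal model:

```python
import random

random.seed(1)

# Toy market: car quality uniform on [0, 1]; sellers know quality, buyers don't.
# A risk-neutral buyer offers the average quality of cars still on the market;
# sellers only sell if the offer covers their car's value, so the best cars
# withdraw, dragging the average (and hence the next offer) down.
cars = [random.random() for _ in range(10_000)]
offer = sum(cars) / len(cars)           # initial offer: the overall average
for _ in range(20):
    on_market = [q for q in cars if q <= offer]
    if not on_market:
        break
    offer = sum(on_market) / len(on_market)

print(f"final offer: {offer:.4f}")       # collapses toward zero: only lemons trade
```

Each round the offer roughly halves, so the market rapidly unravels until essentially only the worst cars remain tradeable.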

This idea has gained traction among economists, who have linked problems in (for example) social mobility or Obamacare to imperfect information. In the sciences, however, we rarely think in such stark, transactional terms. This may be to our detriment, since in the process of publishing science there are several points where different parties have varying levels of information about a given study, which could lead to poorer communication of facts and data.

The audience for a scientific study has a range of information at their fingertips – author affiliations and potential conflicting financial interests, for example – that enables a judgement to be made about the content of the work. This kind of meta-information provides a useful link between author and reader that can help build trust in the work at hand, but there are other ‘meta’ aspects of research that are trickier to communicate: why have the authors chosen to write up this specific set of data, rather than any other findings? What, if anything, changed during the review process? These are points where an asymmetry in the information available about a scientific study could limit the trust a reader can place in the findings.

Competition could encourage the omission of details where it could be of financial benefit to an individual or a corporation. If a reader cannot tell whether a study represents just the best results, then their trust in a research project could be limited; research shows that in many cases drug trials go unpublished or unfinished (2), so how seriously should we take those that are published? Without full information about unpublished studies, the value of published studies could be questionable across the whole field in question.

Who knew what, and when?

In a general sense, we can think about the asymmetry of knowledge between different parties involved in publishing; what do authors know that the readers (and to a different extent editors) don’t?

We can assume that the author tends to have a greater grasp of the information involved in a study than the reader; they make judgements about which data to include, and which aspects of their research should be written up completely. Few scientists could claim they’d published every part of the train of thought that has led them to where they currently are in their careers. In most cases, data are selected because they are the most interesting results, or offer the best chance of success in a high-tier journal, or – in the simplest case – because they pertain to the hypothesis in question (why mention data you don’t believe bears on the question you’re asking?).

Even if the choices of experimental design and of data to include in published papers are generally made in good faith, they can be difficult to explain to readers. The controversy surrounding the hacked emails of the Climatic Research Unit at UEA highlights how the disparity between formal and informal communication in science can be misconstrued – in this case, at great cost to public trust in science (3). Where readers sense that a backstory may have been omitted, the value they place in a given study may decline, regardless of the actual history of an article.

A culture that prioritises the publication of interesting research in higher-tier journals leaves less room for academics to give weight to the work that lies between these topics. Perhaps we should give scientists credit for keeping a public research diary of sorts, which could serve as an open archive of the direction in which they are working. This may be a harder sell where competition between research groups is a driving factor, but the flip side could be to foster a more cooperative research environment.

An even slower publication process

During submission, review and publication of papers, there are a number of facets that may induce an asymmetry of information. The editor naturally asks for expert opinion as to the quality of an article through peer review – much as an antiques salesperson would seek a valuation of a supposedly priceless heirloom to avoid fraud. In this way, the editor seeks to increase their information about the article, and can thus value it more appropriately; but where the referee isn’t given sufficient evidence to make these judgements, the editor can be left blind. Herein lies the value of providing all data to allow a complete review.

However, there are other, more opaque parts of the publication process that could limit what each party knows. An editor must make a subjective judgement about whether a submission is suitable for their audience; if readers or authors are unaware of the rationale for these decisions, it may affect their impression of the finally published articles.

Of course, publicising such details stands in contrast to the business models of many journals, and it almost need not be mentioned how much longer this would take overall. Should we advocate for a fully open publication process, at the expense of an even longer turn-around time for research papers?

Expediency or openness?


Where information is not evident to readers, it tends to be the result of processes that expedite the wheels of scientific advancement; the need for a reader to absorb all the meta-information about a study (history, outliers, rationale, even the train of thought) would markedly increase the time required to understand a research field.

Should we then be weighing expediency against trust in science? In the present research environment, with questions about the trust placed in the scientific endeavour, this is a valid question to ask. It may even be that such a slowdown would not materialise; with fewer repeated trips down blind avenues of study, and the potential for greater communication and cooperation, advancement could still occur swiftly, with a greater sense of trust from the readers and governments that may be funding our studies.

(1) http://www.econ.yale.edu/~dirkb/teach/pdf/akerlof/themarketforlemons.pdf

(2) http://pediatrics.aappublications.org/content/early/2016/08/02/peds.2016-0223

(3) http://climatecommunication.yale.edu/publications/climategate-public-opinion-and-the-loss-of-trust/

Volcanic Geology in the North-Western USA

To reach our chosen spot to watch the eclipse (Madras, in central-west Oregon) we took a road trip over a few days, from Vancouver and back to Grand Forks in British Columbia. Our outbound and return legs took us west and east of the Cascade mountains respectively, which offered some interesting insights into the different styles of volcanism that have shaped the landscape of the Pacific Northwest. In particular, it was striking to see the contrast between the much more recent volcanic activity at Mt St Helens and the ancient but vast deposits of the Columbia River Flood Basalts. While these are well-studied and well-documented geological formations, I felt it would be interesting to write up some of these observations.

Road_trip_route.PNG
Road Trip Route


Mt St Helens

Even non-scientists will be familiar with Mt St Helens. An active stratovolcano close to population centres in the US was always likely to attract attention, but the hugely dramatic eruption in 1980 is well known as a prototypical volcanic disaster. In many ways, though, it was an unusual event: the ash and pyroclastic flows were the product of explosive decompression after the entire side of the mountain slid away.

The park rangers give a great analogy to describe what happened. Under the volcanic cone, magma was gradually building up – much of it full of volcanic gases. Imagine a fizzy drink bottle, full of bubbles. The magma pushed its way into the subsurface, and in doing so caused a number of earthquakes. The largest of these, just prior to the main eruption, caused the entire side of the mountain – made of relatively loose material – to collapse as a giant landslide. This hugely reduced the pressure on the magma inside the volcano, still full of gas; imagine that the fizzy drink bottle, having been shaken up, now has its lid removed. You can imagine the resulting eruption!

This kind of collapse-triggered event was unusual, though; Mt St Helens was the first well-documented example. The resulting blast levelled trees all across the landscape; even today, these either lie where they fell or have been washed en masse into the nearby lakes.

 

DSCN0631.JPG
Logs washed from the still nearly-bare hillsides into the lakes.

The mountain itself is still stunning, albeit without the symmetric character that led to it being dubbed the ‘Mt Fuji of the Americas’ before the eruption. The giant crater and the smaller incipient ridges within it (produced since the eruption as magma pushes upwards into the crater) are a formidable sight. In the early Victorian period, rugged mountain ranges were often viewed as terrible, forbidding scenes – a perspective that contrasts with the more modern view of mountains as sites of awe and beauty. Mt St Helens manages to bridge that divide, at least for me. The impressive nature of the topography cannot be disentangled from the very human side of the eruptive history, in which dozens perished.

DSCN0667.JPG
The Northern Aspect of Mt St Helens from the boundary trail, showing the crater and landslide deposit below.

Personal bias from my own previous study of landslides ensured that I spent a fair while considering the debris avalanche and the deposit that remains. In the image above, the whole foreground is dominated by the deposit, which is still the largest debris landslide in recorded history. The ‘hummocky’ (lumpy) landscape results from great chunks of mountain that slid downhill, overlain by finer, loose deposits.

A couple of aspects are interesting from my perspective. First, snow melt and flooding are clearly cutting quickly into the loose deposits every year; the canyons that can be seen in the centre of the image are being incised at high speed (I estimate on the order of metres per year). This could be a great set of field observations for sediment scientists, if it hasn’t already been studied!

Secondly, those familiar with my PhD work will know I looked at the way in which landslides can affect the amount of dissolved mineral elements in the water draining across and through their deposits. As such, seeing the largest landslide on record certainly piqued my interest as to the state of the water chemistry in the Toutle River, which drains the bulk of the deposit. It would be a cool test of concept if a time series exists! If anyone is aware of such a data series, please do let me know.

Mt St Helens is only one of the sporadically active Cascade volcanoes. Their forms are in many ways similar: individual cones standing proudly several thousand feet above the surrounding landscape. We saw a number of these (Mts Jefferson and Hood were visible from the vantage point where we watched the eclipse), and last month I climbed Mt Baker, the northernmost large volcano.

2017-07-23 13.39.38.jpg
Mt Baker, northern aspect; at the peak, small volcanic outgassing can discolour the snow, testament to its continued activity.

The landscape over which these volcanoes tower bears the marks of another type of volcanic eruption; perhaps less obvious, but only because the hallmarks are thousands of kilometres from edge to edge.

Columbia River Flood Basalts

 

Volcanoes like the Cascade examples form as magma rises at the margin of colliding tectonic plates; many will have heard of the ‘Pacific Ring of Fire’, and these volcanoes are part of that system. This isn’t the only way in which volcanoes can form, though. In some places, hot material wells up from deep within the Earth’s mantle, and as it nears the surface, the drop in pressure allows it to begin melting. This molten magma then erupts through the plate above as a ‘hot spot’. This is the kind of volcanism we see today in places such as Hawaii.

These kinds of eruptions are generally less explosive than those at plate boundaries, but can produce vast quantities of lava – and that’s exactly what scientists believe happened in Washington and Oregon some 14–17 million years ago. The huge quantity of lava erupted eventually inundated the landscape, in some cases to over a kilometre in depth, over nearly 200,000 square kilometres.

And so, as we drove across this landscape, we saw outcrops of this lava exposed literally everywhere. The lava cooled into basalt, and as it did so it cracked in very distinct ways – forming ‘columnar joints’, much like the famous Giant’s Causeway in Northern Ireland.

IMG_8755.JPG
Columns of Basalt, near the Columbia River Gorge, North Oregon.

The landscape created by these vast eruptions is primarily a flat one, but rivers have incised deep gorges into the lava flows in many places, such as seen here near Warm Springs, Oregon:

DSCN0677.JPG
Canyon cut by river into the thick lava deposits, in the Oregon Desert.

We drove for nearly a thousand kilometres through and on top of these lava flows, and that, more than anything else, gave the best sense of the size of these eruptions. The columns of basalt, which for a while seem monotonous in colour and form, become more and more astonishing as it becomes clear that this is among the largest volcanic structures on the planet. The volcanoes to the west suddenly shrink in the mind’s eye, even though the tallest (Mt Rainier) stands over 4 km above sea level.

Notes from the Eclipse

(N.B. this was written in two parts, the day before and the day after the total eclipse. These parts are labelled accordingly)

20/08/17, 16:24 PST: August has been an unexpectedly busy and scattered month for me. I had anticipated writing a number of pieces, and while several of these are either soon-to-be-published or ready to send off for consideration, a number of short trips have made it somewhat more tricky to write and research every day. At the time of writing I’m on another excursion (although this one has been planned for a while) to watch the total solar eclipse taking place in Madras, Oregon, on the 21st of August.

‘The Great American Eclipse’, as it is being referred to, will cross the mainland United States from central Oregon in the West all the way to South Carolina in the East. The number of people caught in the shadow of the moon will be unprecedented in the modern age, and unsurprisingly a huge number of people have been making the effort to drive somewhere where the eclipse will be total.

We have travelled to Madras, Oregon, where the climatic conditions indicate the lowest chance of cloud in the country. 100,000 others are expected to join here, all camping in the ‘Solartown’ that has been set up specifically for the event. Madras is a town of only 6,000, but the anticipated apocalyptic traffic tailbacks didn’t materialise; this may not be the case as visitors try to leave en masse tomorrow after the eclipse!

22/08/17, 22:20 PST: The clouds and smoke conditions were in our favour – we were able to see the total eclipse clearly, and it was truly a stunning event. I had been prepared for the dance of sun and moon together to be striking, but the effect on the surroundings was perhaps more memorable; time will tell.

Other observers suggested that we should discuss what we were seeing throughout the eclipse, from first contact all the way through totality until the sun returned at full strength. In the hour before totality we did just that, watching as the sun was gradually eaten into by the moon, noting minor changes in brightness and temperature. In the few minutes around totality, however, it was hard to keep up; changes happened thick and fast.

Not only did temperatures drop noticeably, but the light level dropped so fast that one could see it shift second-to-second. And then, amid cheers from the thousands of others around, the moon finally obscured the orb of the sun completely; the wink of the diamond-ring-like ‘Baily’s beads’ was clear from our vantage point. We were transfixed by the blackened orb and the ring around it for a few seconds, but it quickly became clear that there was visual magic happening elsewhere too.

DSCN0721.JPG
Totality

We were fortunate that from our location we could see several of the volcanic peaks that make up the Cascade mountain range, including Mounts Jefferson and Hood. The latter of these peaks was outside the zone of totality, and while we were completely shaded by the moon we could still see this peak lit in the near twilight; further into the distance were other red-shaded peaks that had hitherto been hidden behind haze. The whole horizon, in fact, was lit as if sunset was happening all around us; the transition between the deep blue-black sky above, with pinpoints of stars and planets, and the 360° horizon lit to near-scarlet was genuinely moving.

A quick aside to give a simplified explanation of what was happening: when sunlight hits the atmosphere, some portion of the light is scattered by the air molecules (Rayleigh scattering). This scattering acts more strongly on the blue part of the sun’s spectrum, which gives the sky its blue colour during the daytime. In the evening, the angle of the sun is such that direct light must pass through more of the atmosphere, which means more of the blue light is lost, and the result is that we see mainly the red light from the sun. This gives the emotive colours we attach to the sunset.

In this case, the red light from the horizon wasn’t coming directly from the sun, which was covered above. Instead, this was diffuse light from all around; the longer path that this diffuse, reflected light had to take gave it the red colour.
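A back-of-the-envelope check on the Rayleigh explanation above: scattered intensity goes roughly as 1/λ⁴, so the blue end of the spectrum is stripped out of a long light path much faster than the red. A small Python sketch, using my own approximate wavelengths for blue and red light (not measurements from the trip):

```python
# Rayleigh scattering: scattered intensity scales as 1 / wavelength^4,
# so shorter (bluer) wavelengths are scattered much more strongly.
blue_nm = 450.0  # approximate wavelength of blue light (nm)
red_nm = 650.0   # approximate wavelength of red light (nm)

# How much more strongly blue light is scattered than red
ratio = (red_nm / blue_nm) ** 4
print(f"Blue is scattered ~{ratio:.1f}x more strongly than red")
```

That factor of roughly four is enough that, over the long diffuse path the horizon light took during totality, mostly red light survived.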

Despite the dry, scientific explanation, I think there’s something truly amazing at play here. It only struck me afterwards, but this diffuse red glow on the horizon is always there when the sun is up; it’s normally hidden by the bright light of the sun, but one could poetically say that those sunset colours that prompt such emotional response in many people are always glowing.

IMG_8731.JPG
The lit horizon below the obscured sun

*****

The two minutes of total shadow were certainly fleeting; some observers stood silently; a few set off fireworks, while many worked frantically at camera equipment. The emotional effect was clear, as the gasps and cheers of 100,000 or so were not subtle. One enthusiastic watcher near us began to clap as the moon shifted away and the sun reappeared; he quickly stopped, though – who was he applauding?

Nature has afforded us on Earth a rare set of circumstances. The balance between the size of the moon and the distance to the sun is near perfect; a larger satellite would still obscure the sun, but the ring of solar flares would be less visible, and the ‘sunset horizon’ would also be absent. A smaller moon wouldn’t throw such a large shadow on the Earth’s surface, and the stars wouldn’t be visible at 10.20 in the morning.
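How delicate that balance is can be seen from rough mean figures for the sizes and distances involved (assumed textbook values, not anything measured on site): the moon and sun subtend almost exactly the same angle of sky, about half a degree each.

```python
import math

# Approximate mean values (assumptions for illustration)
MOON_DIAMETER_KM = 3_474.0
MOON_DISTANCE_KM = 384_400.0
SUN_DIAMETER_KM = 1_391_000.0
SUN_DISTANCE_KM = 149_600_000.0

def angular_size_deg(diameter_km: float, distance_km: float) -> float:
    """Angular diameter in degrees, as seen from Earth."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

moon_deg = angular_size_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM)
sun_deg = angular_size_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM)
print(f"Moon: {moon_deg:.2f} deg, Sun: {sun_deg:.2f} deg")
```

The two values agree to within a few percent; the moon’s elliptical orbit nudges its apparent size either side of the sun’s, which is part of why some eclipses are total and others annular.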

Awe and wonder at nature seem to be part of the human psyche. Very mundane parts of the human condition were on display only a minute or two after the totality subsided, however, as thousands tried to beat the traffic out of Madras, even while 95% of the sun was still obscured. I think it would be hard to forget those two minutes of darkness, though; it’s more than the sum of the parts involved in terms of the celestial mechanics. Offered the opportunity again, I would jump at the chance to see another.

 

Can we still shift paradigms?

One of the most painfully overused phrases in science is ‘paradigm shifting’. The roots of the term as used in a scientific context come from the philosopher Thomas Kuhn, who utilised it in his model of scientific advancement. While researchers have interpreted Kuhn’s work in different ways, a general sense of the model is as follows:
Science proceeds under a paradigm of knowledge, methods, and techniques, which together define a kind of overarching global perspective. As scientists continue to accumulate knowledge, anomalous results begin to build up, until they are no longer explicable as merely errors under the existing paradigm; once the community of scientists accepts that these anomalies require a new global perspective to fit these anomalous findings in, then science undergoes ‘a paradigm shift’ to a new framework of knowledge, approaches and methods.
In my brief editorial experience, it seems like many researchers are big fans of this term – many use it in cover letters to suggest that their work is valuable and significant. I don’t intend here to either question the model of scientific advancement suggested by Kuhn, or to debate the various merits of the supposedly paradigm-shifting work submitted by different authors. No, here I’d like to contend that the modern relationship between science and publishing makes a genuine paradigm shift in the context described by Kuhn rather more difficult; much more difficult, in fact, than one might believe based on the frequency at which the term is used in the media and in cover letters.

Two factors are at play here, I think. First, the critical process of peer review means that anomalous results are potentially more likely to receive intense scrutiny, making it ever harder to publish work that might significantly undermine the existing core perspective. Secondly, since number of publications tends to be an important metric by which academics are judged, there is an incentive to break down radically anomalous findings into smaller publishable pieces. While individual small publications can still add to the body of anomalous results, for researchers with a grand, game-changing idea, the potential lure of multiple papers might outweigh the hard work in building a large case for a new mode of thinking that cannot be supported under the existing framework.

Peer review is considered a vital part of modern science, but when Kuhn published The Structure of Scientific Revolutions in 1962 (in which he proposed his debated model of scientific advancement), peer review was only beginning to be formalised. As part of the post-war scientific boom in the west, peer review was becoming increasingly important for securing funding1, but it is notable that many of the paradigm shifts considered by Kuhn, such as the shift from a Ptolemaic view of planetary motion to a heliocentric one, were based on science prior to the advent of review. Galileo and his contemporaries were able to publish without first getting their results past their peers, who may have had personal biases against such radical ideas. I’d suggest it’s worth asking how easy it is today to persuade referees of a novel idea when they are working within an existing paradigm.

The second point is arguably more subtle. I think it’s fair to say that even a short contribution can dramatically change the way we think about the world, but making a case for a dramatic shift in scientific frameworks can require a large body of evidence. The Origin of Species is not a short book; Charles Darwin used a vast range of examples and data to build his case, and in combination they provide a new framework for understanding life on earth. The modern publishing incentives seem unlikely to encourage such large compilations, however. Judgement of researchers based on the number and citation count of their publications encourages splitting of projects into smaller parts (or even, derogatorily, ‘Least Publishable Units’), while simultaneously discouraging scientists from putting out anomalous results without context, which would be unlikely to achieve high impact (and as suggested above, may have trouble getting through review). I’d suggest that The Origin of Species as a series of small papers would be unremarkable until the final short-format piece that linked it all together; I’m unsure whether this would be a successful way to build a career in the modern academic environment.
Younger researchers are encouraged to publish more (and thus potentially split their work up more), and may therefore be more prone to this kind of effect. Older, more experienced scientists may have more intellectual and emotional capital invested in the framework within which they have spent their careers working; the tendency to promote game-changing suggestions may thus be more limited amongst more established researchers; I’d love to hear counter-examples, though.

A positive suggestion to address these aspects might be to emphasise the importance of conferences! There, unrefereed work can be judged by a broader community, and anomalous results presented concurrently, by researchers of all ages and backgrounds. With plenty of discussion and open-mindedness, these should serve as highly productive ground for giant leaps in our understanding of the world around us.

Perhaps these factors do not, in reality, limit the progression of science under Thomas Kuhn’s model. I do think, however, that it’s worth questioning the modern understanding of the term, especially as science has changed so much in the past decades. When some studies suggest that global scientific output is doubling roughly every nine years2, it’s worth considering whether our models for describing its advancement are still valid.
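For a sense of scale, ‘doubling every nine years’ corresponds to roughly 8% compound growth per year – simple arithmetic sketched below, not a figure quoted from the paper itself:

```python
# Convert a doubling time into an equivalent annual growth rate:
# output after t years = 2 ** (t / doubling_years)
doubling_years = 9.0
annual_rate = 2 ** (1 / doubling_years) - 1
print(f"Implied growth: ~{annual_rate * 100:.1f}% per year")
```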

References

1: Csiszar, A. Peer review: Troubled from the start. (2016), http://www.nature.com/news/peer-review-troubled-from-the-start-1.19763

2: Bornmann, L. & Mutz, R. Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references (2014). arXiv preprint: https://arxiv.org/abs/1402.4578

Taiwan Fieldwork & Recent Update

I recently returned from helping out a colleague from my old department at the GFZ (where I worked for my PhD project) in the central mountains of Taiwan. Essentially, we were working to collect samples to answer some of the outstanding questions from my PhD work; several aspects of how the physical parameters of landslides affect the net weathering remain unclear, and so I was asked to help Dr Aaron Bufe – a postdoc in my old group – with addressing some of these issues.

IMG_20170601_120625249
Giant Landslide in the Chenyoulan River, central Taiwan. More details to come!

With these questions in mind we looked to sample a diverse range of landslides in the central part of Taiwan, but while the initial sampling worked out well (see photo) we ended up getting caught in a huge (and unusual) storm system, during which well over a metre of rain fell in around 48 hours. The result was that rivers and roads became essentially impassable in many parts of the catchment in which we were working, severely limiting access to many of the sites we had hoped to reach.

It was, however, a fascinating experience, and it really made me appreciate what intense rainfall entails in tropical regions. In fact, the whole trip offered some fantastic opportunities to learn about life and geomorphic processes during extreme weather events, which I am working into a longer-form post (incorporating some of the roughly one hour of video footage I took while there) for publication in the near future. In the meantime, some short clips on Twitter may be of interest:

https://twitter.com/RobertEmberson/status/870582220817276928

https://twitter.com/RobertEmberson/status/871326849221107713

With much more to come soon.

As well as fieldwork I have been busy writing, both academically and in a science-communication capacity. A revised version of the third PhD paper has gone back to the journal, and I have had two new pieces published recently. The first is an exhibition review, written while I was working at Nature Geoscience, of the recent “Volcanoes” exhibition at the Bodleian Library in Oxford:

http://www.nature.com/ngeo/journal/v10/n5/full/ngeo2944.html

Currently this is behind the Nature Geoscience paywall – please do contact me for a copy if necessary.

I also wrote a piece for Atlas Obscura on my recent visit to the Millennium Seed Bank, run by Kew Gardens:

http://www.atlasobscura.com/places/millennium-seed-bank

I’m hoping to flesh out some of the details in these pieces within this blog when time allows. Finally, I’m excited that tomorrow morning another of my articles will be posted on the EGU’s lead blog page – Geolog – discussing my recent editorial experience.

http://blogs.egu.eu/geolog/

All of this has been a lot of fun, and I’m just as excited to have a chance to settle down for a couple of weeks to write it all up, and tell some stories.

Move to Canada

This is the first post in a while; a busy month has now settled down somewhat for me. My short-term contract at Nature Geoscience has ended, and I have taken the opportunity to move to British Columbia in Canada. The move is the result of a number of factors, but to some extent it reflects that I am still not settled on a final career goal. While I clarify what I want to end up doing professionally, and the route to take to that point, I felt it would be valuable experience to move to a new place, particularly one that offers such great chances to get out into nature. So, this move is more for personal reasons than professional ones. Some people I respect greatly have told me that moving for reasons other than a job is not always smart, so I’m going to use this time to work on writing and science communication. I’m hoping to find openings to write about varied aspects of science, focusing in particular on why certain topics of research fascinate the scientists who work on them. Whether this foray into communication will be successful is not clear, but while I stand at a career crossroads it seems prudent to explore alternatives and hone a skillset. Updates will follow thick and fast, I hope!

Opinion, Objective facts, and the Science March

In the social media and professional bubble in which I live at the moment, it’s hard to miss the outrage and upset that many scientists (primarily in the US) are feeling at the proliferation of untruths in the media. The ‘March for Science’ on April 22nd will no doubt be a huge outpouring of grievances by many individuals. I’ve been trying to figure out for myself exactly what I feel about the ‘assault on truth’ that has concerned so many. A common theme is the dismissal of the science surrounding climate change; the founder of 350.org Bill McKibben noted that protesters are even attaching footnotes to their signs1:

Snap_of_tweet
twitter.com/billmckibben

Once presented with these facts, it is often hard to see how there can still be so much inaction and repetition of lies. What I want to address here, however, is the disconnect in this message – while the first three points are statistically significant findings supported by the weight of evidence, the suggestion that ‘[Climate change is] Bad’ is a value judgement that depends entirely on your viewpoint. The short version: the impacts of climate change will be terrible for people living in low-lying or less economically developed countries, but will benefit those working in the disaster-insurance industry (for example). I should stress at the outset that I fundamentally believe we should do everything we can to prevent dangerous climate change, but I’m trying to be careful to point out, to people I discuss this with, that this is my own opinion. My feeling is that we’re seeing less and less separation between the search for truth and opinions about the actions we should take, given the scientific facts, to address contentious societal problems – but I think that with careful delineation we can craft a more hard-hitting message.

Part of this cognitive dissonance, I think, relates to the way in which we view the goals or aims of science. Here, it’s informative to make the link to historiography. A widely discredited view of history – often termed ‘Whig History’ – is the notion that social changes have been moving inexorably forward to a democratic, liberal, and peaceful version of the world. Historically this was used as a justification for imperialism, but it is now outmoded. In the latter half of the 20th century, a number of structuralist authors and philosophers found alternative ways of explaining the evolution in social mores that have occurred throughout human history. Writers like Roland Barthes and Michel Foucault rejected the notion of directed evolution in civilisation. Foucault in particular stressed that even specific individuals don’t tend to drive history; their own viewpoints (and ideals for the future) are informed by the times in which they live (their ‘episteme’). Instead, civilisations evolve as the result of a multitude of small factors that gradually shift views. The broader lesson is that this is not a directed process.

The same point can be made about science, but perhaps even more strongly. The accumulation of knowledge has no final goal. The generation of this knowledge ideally results from the testing of hypotheses about unexplained observations, but more often we see the words ‘We have set out to prove’ or ‘I seek to show’ in use by scientists. This is a fundamental difference – in seeking to prove a hypothesis, an author (knowingly or not) has a goal in mind. Science should be objective – but it doesn’t have an objective. Aligning the March for Science with Earth Day links the two, suggesting that a march for science is a march to protect the environment – but scientific data alone don’t require us to be for or against mitigating climate change.

Some scientists have said that they plan to march for science so that political decisions can be made with clear facts and data. While I agree this is an important goal, I do think it arguably betrays a certain naiveté. A telling example: it is becoming increasingly clear that internal research at Exxon showed the potentially dangerous effects of anthropogenic climate change before the UN panel on climate change even existed2, yet at the same time Exxon donate significantly to political interests3. The head of the US Congressional Science Committee, Lamar Smith, has received more campaign funding from the oil and gas industry than from any other industry4, and the current US Secretary of State is the former CEO of Exxon. Given that penalising the oil companies or related polluters is hardly likely to be in the interests of such politicians, it should not come as a surprise that, even if they are fully aware of the scientific facts about the potential dangers of climate change, they won’t act to mitigate pollution. These vested interests are a prime example of where facts can be conflated with the action that should follow; we as scientists are often obliged to declare that we have no conflicts of interest when we publish, but the same is almost never true of stakeholders outside academia.

Another laudable goal that some have advocated for the science march is to increase the dissemination of facts to the general public, which would facilitate a better understanding of the issues at large. This is, for me, admirable; the wider public hold stakes in a whole range of environmental or economic aspects that relate to climate change. In a democracy, they ultimately hold sway over politicians (ignoring for a second the influence of lobbying). But what if, when presented with unabridged factual information, non-academic stakeholders come to alternative conclusions about the appropriate actions? For example, the impact of climate change is unlikely to fall equitably between all nations; clearly, some countries have a lot more to lose than others5 (the Notre Dame Resilience Index gives a good rundown6), and it’s notable that those at greatest risk are primarily poorer countries in the developing world. Some research has even suggested that some western countries may become better off in a warming world7. With nationalist politics increasingly prevalent in the West, a refusal to mitigate climate change is tantamount to reducing foreign aid to these vulnerable countries; it’s not unimaginable that this could appeal to some voters, at least in a cynical, zero-sum version of politics. A truly objective – and thus scientific – approach would be to present all facts as equal, but nobody is immune from confirmation bias; a fact that fits one’s view is more appealing than one that upsets the apple cart.

Protest movements almost invariably take a moral stance, whether advocating for equal rights for all, or protesting about perceived injustices in society. Arguing for increased appreciation of facts and data doesn’t necessarily strike me as a moral stance. Instead, I’d argue we should state strongly what our opinions are as to the data at hand; for example, even if my country would be demonstrably better off economically in a world affected by dangerous climate change, I would still believe we should aim to limit climate change to reduce the impact on ecosystems and those most vulnerable. Some might disagree with taking a moral stance8, but I would suggest embracing one’s own values and expressing them loudly. Separating fact from opinion allows others the option to come to alternative conclusions; at the same time, if it’s clear that you’ve based your opinion on a range of facts, that seems to me more powerfully persuasive than an opinion based on ambiguity or guesses.

1 https://twitter.com/billmckibben/status/808791393569243140?lang=en

2 https://www.scientificamerican.com/article/exxon-knew-about-climate-change-almost-40-years-ago/

3 https://www.opensecrets.org/lobby/clientagns.php?id=d000000129&year=2016

4 https://www.opensecrets.org/politicians/industries.php?cycle=Career&cid=N00001811&type=I

5 https://www.theatlantic.com/magazine/archive/2007/04/global-warming-who-loses-and-who-wins/305698/

6 http://index.gain.org/ranking

7 http://www.nature.com/nature/journal/v527/n7577/full/nature15725.html

8 https://www.theguardian.com/science/political-science/2013/jul/31/climate-scientists-policies

Travel notes on South India Geology

Two of my close friends recently got married in India, and I was lucky enough not only to go and celebrate with them but also to travel around a little afterwards. India is clearly a vast country with a great depth of culture (not a surprise given that over a sixth of all people live there), but my eye is often drawn away from the human artifice and colourful festivals, focusing instead on the landscape and the geology. Arguably, the cultural diversity in India is matched by the morphology and diversity of the landscape; the high Himalayas in the north give way to dry river plains around the Ganges and Brahmaputra, while in the south the Western Ghats provide a physical barrier between the jungle of the west coast and the drier elevated plateau in the hinterlands. The Ghats are where we travelled; with more time I’d love to investigate more of the country. On returning, I thought I’d write a little about what we saw to give a sense of the interesting stories locked in the hills and rivers.

The south of the subcontinent is a remnant of the breakup of ancient supercontinents; the east coast separated from Antarctica, while the west coast (and the Western Ghats) separated from Madagascar around 65 million years ago (1). The Ghats represent the eroded edge of that margin, but I was startled by quite how mountainous they are even today – in the photo below you can see what this looks like.

img_20170225_104943797
Not quite Himalayan – but sizeable relief nonetheless (several hundred metres from valley to summit)

The intense monsoon rain that falls today in India (and may have been falling since 39 million years ago! (2)) acts as a powerful force to erode the mountains and flatten them out; one might expect that over 65 million years the edge of the plateau would have been ground down as if by sandpaper. Instead, erosion seems to have pushed the steep mountainous edge of the plateau gradually inland, leaving a widening coastal plain (3). It was fascinating to see this, and the peaks elevated even in comparison to the inland plain seem to support some local uplift actually driven by erosion, perhaps as described by Gunnell and Fleitout (3). The range of elevated plan-form areas mixed with steeper hills (as shown in the image below) speaks to a long history of complex interaction between erosion and the rocks, even in a place where ostensibly ‘not a lot has happened’ since the break-up with Madagascar 65 million years ago.

img_20170225_120233677
Note the flats and the drop-off at the edges. I don’t know what’s happening here, but I wonder if the rain-shadow that explains the vegetation patterns might also play a role.

While hiking in these hills we topped out on Brahmagiri, a high peak on the border of the states of Kerala and Karnataka. Atop this peak we found a large summit stone (pictured below). I don’t know whether it was associated with the Great Trigonometrical Survey of India (4), but the possibility intrigued me. The survey was a long-term endeavour in the 19th century, which ended up contributing measurements of the high Himalayan mountain elevations (hence Mount Everest being named after Sir George Everest, who ran the survey for many years). It also contributed measurements of minor changes in Earth’s gravity caused by mountains on the surface; all in all, it was an amazing long-term scientific undertaking. Even if the point we found wasn’t used as part of the survey, the historical association made me smile.
img_20170225_121114193

(1) Evolution of the passive continental margins of India—a geophysical appraisal. C. Subrahmanyam & S. Chand, 2006, Gondwana Research, http://dx.doi.org/10.1016/j.gr.2005.11.024

(2) Asian monsoons in a late Eocene greenhouse world; A. Licht et al. 2014, Nature. doi:10.1038/nature13704

(3) Shoulder uplift of the Western Ghats passive margin, India: a denudational model. Gunnell, Y. and Fleitout, L. (1998), Earth Surf. Process. Landforms, 23: 391–404. doi:10.1002/(SICI)1096-9837(199805)23:5<391::AID-ESP853>3.0.CO;2-5

(4) https://en.wikipedia.org/wiki/Great_Trigonometrical_Survey