Replace the PhD

It has been well documented in recent years that the number of PhDs awarded is increasing rapidly, much faster than the number of senior academic faculty jobs. The long-established progression from graduate student to post-doctoral researcher to tenured professor is now an uphill battle through a bottleneck; the proportion of PhDs who can realistically aim for a faculty position is startlingly small, with some STEM fields producing five or more PhDs for every professorship.

Having an overabundance of PhDs within academic circles is not inherently a problem, depending on your perspective. Scientific research in most cases requires extensive legwork, whether laboratory analysis, field sampling, paper writing, or computer coding; PhDs provide a skilled and cheap labour force to get this work done. More students means more work can be done, allowing for more complex and involved research questions to be posed; but what does this bottom-heavy pyramid mean for the students themselves?

The quid pro quo of this low-wage labour for a graduate student is that a PhD degree should theoretically set them up for an academic career; the assumption being that they should be able to work as independent researchers at the end of their degree. But with these academic jobs at a premium in all but a few fields, there has been a recent shift to emphasise the broad skill-set a PhD develops: writing papers and presenting at conferences improves communication skills, while research is supposed to hone problem-solving ability. A PhD, we’re told, also sets up a student well to work outside of academia.

This point, however, ignores one of the most fundamental aspects of the modern economy: the division of labour. Large industrial economies have broken job roles down into ever smaller parts, allowing us to produce an ever wider range of niche products. Every role becomes more specialised, as individuals fill smaller and smaller parts of the overall economy.

The broad ‘skills’ of a graduating PhD student may thus provide them with a range of options outside of academia, but the expertise they have might well be entirely inapplicable to industry jobs. Some PhDs may study topics that give them comparable expertise to an industry peer, but for some (including myself), a ‘Jack of a number of trades’ set of abilities can’t necessarily compete. Could a PhD compete with a peer who had spent an equivalent amount of time working in the industry in question? It seems unlikely.

The result, I’d suggest, is that PhDs leaving academia who have not honed a specific skill would be forced to enter the workforce at an entry level, behind their classmates who had started years before. Some may disagree, but to support the notion of underemployed doctorates I would point to the figures suggesting that a PhD does not provide a significant boost in lifetime earnings over a master’s degree, and to the difficulty PhDs have in finding jobs.

There’s already a basic solution to this problem in the paragraph above – we should encourage PhD students to pick up and specialise in a given skill if they think they may leave academia. But I would argue for a more significant change in the structure of academic work (particularly scientific study): we should heed the example of the wider economy and build division of labour into our institutional models. In other words, instead of expecting a number of PhD students to take on a range of roles throughout their studies, we should consider employing a number of different specialists, each an expert in their specific role.

There are already examples of this: I was fortunate to work in a research group where laboratory work was significantly simplified by hard working lab technicians, and every student and professor knows the value of friendly admin assistants. Which other roles that PhD students normally take on could we conceive of instead being handed over to experts?

Paper writing and editing take up an inordinate amount of time for almost all scientists; it’s vitally important to communicate formal discoveries in a clear, concise fashion. Specialists could be brought in to work on the drafting and copy editing; we could consider a kind of journalism position for this kind of work. Similarly, conference presentations could be given by specialist presenters; many scientists I’ve spoken to feel that the ability to effectively communicate scientific findings at conferences is the sign of a great researcher, but what about those reclusive geniuses who are making radical conceptual advances but are constitutionally incapable of standing in front of a crowd? Why shouldn’t these researchers’ findings see the light of day in a presentation worthy of their achievement, given by an expert presenter?

Specific roles for student mentors, lecturers, or even professionals ensuring that good scientific conduct is followed (ethical or procedural) could also be considered. All of these individuals would be working toward a common goal, and it would exploit the efficiencies of economies of scale and division of labour. Arguably, with increasingly large and complex projects, such as the CERN particle accelerator or any number of space probes, the concept of the individual scholar is increasingly meaningless; science is a team effort, and we should treat it as such. There’s clearly an appetite for it, judging from the results of a poll recently run by Nature magazine.

Academia still needs senior, tenured professors, whether directing the team, joining the dots, analysing data for answers, or asking the right questions. Those people filling these roles need to understand the whole of the research project; they need an understanding of the “ins and outs” of each role to appreciate how they might affect the overall results. Perhaps a PhD is still a necessary precursor to fill these roles, but to me it doesn’t seem like an absolute requirement.

To those who might argue that such division of labour would introduce unpleasant hierarchies in science, I would counter with two points: first, that industry has achieved it successfully for years, and second, that as a PhD student I had more respect and gratitude for the lab technicians and fieldwork specialists with whom I worked than for almost any other collaborator.

Such a system would allow for a wider range of individuals to find their niche in STEM careers, while preventing an overabundance of PhD students. Young people interested in academic research could still find a role and hone a specific skill, giving them an edge for a future career outside of academia. Why not stop pretending that a PhD offers a guaranteed career in academia and revise how we treat the increasingly wide pyramid of academic labour?

Value in uselessness

What makes a place valuable? The concept of land value has varied throughout human history, depending on the needs of the people living near a given place. Economic factors have encouraged humans to compete for farmland and other resource-rich regions since the earliest peoples inhabited the Fertile Crescent, but we sometimes forget that cultural values are also tied up in how we value land. Sites of religious significance, or those associated with myth and legend, have inherent value to certain people. The advent of neoclassical economics and colonialism has led to a tendency to value land numerically (and thus in terms of its resources), but for aboriginal peoples around the world the spiritual and social connection to the land is not well captured by such metrics.

Today national parks are designated to capture some of the cultural or scientific value of a landscape – at least where the potential for resource exploitation is not too tempting for a given state to allow extraction.

But what about areas that have no value attached to them by humans, either economic or cultural? There are many areas that, as a result of lack of infrastructure or impossible logistics, have limited or no economic value – think of the Tibetan plateau, the Empty Quarter of Saudi Arabia, or even the Antarctic desert. It’s fair to say that if the market value was pushed high enough and resources were found in such areas, they could still be exploited, but many such locations are simply lacking in any resource of interest.

Moreover, there are some locations where this lack of any economic value is combined with negligible historical or cultural value; ‘useless’ places where humans have never been able to gain a foothold or establish a civilisation. There are not many such places, since humans have broadly found their way everywhere; but a Google search for ‘least accessible mountains’ turns up some desperately isolated peaks in the high reaches of Tibet, or in Antarctica. I was fortunate to recently visit a place nearly as forgotten and empty as these, but situated in the heart of the United States: the Henry Mountains.

Looking east over Canyonlands National Park, from the south end of the Henrys

The mountains are situated in the southern part of Utah, surrounded by the more famous national parks of Zion, Arches, Canyonlands and Bryce. So much of southern Utah is protected as part of national parks or reserves that it is perhaps surprising to find these mountains are not designated as part of any protected area. At the same time, there is essentially no development of the area for economic purposes; meagre desert grasses support marginal grazing and a single paved road runs through them, but otherwise they are bereft of civilisation. The geologist GK Gilbert, who was the first white scientist to fully describe the range for the USGS in 1877, wrote that ‘No one but a geologist will ever profitably seek out the Henry Mountains’, concluding that the economic value of the land was negligible.

And thus the five peaks of the range have remained more or less untouched. On a recent visit, we hiked for five hours in the southern part of the range; not a single car drove past, nor did we encounter another person. This was during peak tourist season in Utah, but the lack of infrastructure and the presence of more prestigious wilderness areas nearby seemingly preclude any tourism. The Henry Mountains lack the unique landforms expressed in the aforementioned national parks, which disqualifies them from national park designation. And as Gilbert described, their economic value is near nil. Prehistoric peoples lived in the regions around the mountains, but no significant archaeological remains have yet been found on the mountains themselves. The Navajo people refer to the range as Dził Bizhiʼ Ádiní, literally meaning “mountain whose name is missing” [1].

So it seems like these mountains are nearly totally useless by the metrics with which we usually judge land value. Even I wouldn’t have been likely to visit if Gilbert hadn’t made them sound so tempting to geologists. But standing atop Mt Ellsworth, in the southern part of the range, not only were the vistas of the surrounding wild lands as stunning as any you could find anywhere around the world, but the sense of isolation really drove home the uniqueness of the experience.

And that’s what I find most interesting. The drive to find ever more interesting and diverse experiences is one of the defining factors of the modern era, and it has driven people to travel all over the world to seek out far-flung locations. Our experiences define our lives, so we’re told, and I personally consider diversity of experience an important factor in how I judge success in my own life. The national parks of Utah are rightly fêted for the wonder they instil in a visitor, but with the numbers of park visitors skyrocketing [2], one might be hard-pressed to experience them alone. Perhaps it’s selfish to seek out experiences that are unique to oneself; but if we were to treat experience as a commodity, then scarcity would no doubt increase its value.

Given that most tourists and visitors do not own the land they stand upon, but merely the experience of standing there, perhaps we should consider the value inherent in useless places. Assigning value to places we might otherwise deem worthless, on the basis of our own subjective experience, is perhaps an ironic inversion of postmodern thinking; but in an era that seems increasingly defined by relativism (‘alternative facts’), it was for me a unique joy to ignore any objective truth about a place and revel in the isolation and wilderness, which offered my own subjective paradise.

The author atop Mt Ellsworth – photo credit Laurence Pryer

[1] Linford, L. Navajo Places: History, Legend, Landscape. University of Utah Press, 2000.


Desert and Geomorphology in Utah

America is a country of which I’ve only scratched the surface, particularly in terms of its extensive network of national parks. Last week I set out to remedy some of that, travelling to southern Utah with a friend. With five national parks and numerous national monuments and preserves, the proportion of land protected in some way is amongst the highest of any US state. As a visitor, then, it makes for a compact(ish) trip to visit a number of these sightseeing meccas, and while they’re all ostensibly within the same near-desert, arid environment, the forms and shapes of the different landscapes are highly diverse, driven by differing erosional processes.

The chief purpose of our trip wasn’t just to visit the national parks, though. Hidden amidst them is a small mountain range – the Henry Mountains. 140 years ago a USGS scientist, GK Gilbert, produced a report on the geology of this range, the last mountain range in the lower 48 states to be mapped. The report itself is not what one would expect from a modern scientific report; it’s qualitative rather than quantitative, and in many places nearly poetic in style. It also contains a concise description of the field I studied for my PhD – geomorphology. Many, if not most, of the ideas that we are still working on in the field today are pretty well explained by Gilbert in his report, so the trip was in some respects an attempt to understand a little of the man who defined my field by visiting the mountains he studied.

The trip was a productive source of a number of ideas for writing, which I hope to work on over the coming weeks. As a brief summary, below are a number of images from this stunning part of the world.

Bryce Canyon
Grand Staircase / Escalante National Monument
Mt Hilliers, from the south of the Henry Mountains
Delicate Arch, Arches National Park
The night sky over Canyonlands National Park
Road through Canyonlands

Are research papers outmoded?

Rankings, metrics, scores: numerical methods are so widely used to judge and analyse different systems that we often forget there are alternatives. A number is objective, and plugs nicely into algorithms to allow better assessments of a whole range of human interactions – and science is no different. Whether by h-index, m-index, citation count, or even ResearchGate score, scientists are often gauged on their performance by numerical metrics. The relative merits and disadvantages of these scores have been widely described, but one relatively under-discussed aspect is the very basis on which they’re calculated – which, for the most part, is the research papers the scientists have themselves written.
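To make the first of those metrics concrete: the h-index is the largest number h such that an author has h papers with at least h citations each. A minimal sketch (the function name and example citation counts are my own, not tied to any bibliometric database):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    # Sort citation counts in descending order, then find the last
    # position where the count is still at least its (1-based) rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that a single wildly successful paper barely moves this number, while steady output of moderately cited papers raises it quickly – one reason the choice of metric shapes behaviour.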

The professor and philosopher Marshall McLuhan famously argued that ‘the medium is the message’ – that the medium by which information is communicated affects the information itself. For example, the telegram didn’t just allow people to send messages over long distances – it allowed news to be reported immediately across continents, irrevocably changing the type of news that was reported and, with it, the society that used it. Technology is rarely neutral in its social effect. A simpler example: the electric lightbulb allowed people to see and work at night, giving society the chance to change working hours.

Scientific journals and the papers therein have existed in some form or other since the mid-17th century. Many aspects have changed, including the important advent of peer review in the early 19th century, but in their simplest terms they remain the same: written text, published, and therefore unchanged after going into print. Rarely do we ask what this medium means for the research that is published and the broader communication of science. In particular, if scientists are judged on what they’ve authored, what are the implications of this medium for our metrics?

A published paper is by its nature static, unchanging, and part of the historical record. Contrast a paper with a lecture or conference presentation: a public talk is seen once by an audience and, unless it is recorded, cannot be judged again or referred back to. A paper, on the other hand, can be referred to and cited from the point it is published. These aspects are essential to modern science; we must have a record of prior work in order to justify the assumptions within novel studies.

However, once published, a research paper is left, unaltered. This stands in contrast to science as a whole; no theory should go unquestioned, and new hypotheses should redress the issues with prior studies. Is it fair, then, to judge a scientist upon older papers that may have been disproved – even by the researcher themselves? Given the tendency for older papers to be superseded, how are we to factor this in when assessing a researcher’s oeuvre? If a journalist pens a series of articles on an event that is still ongoing, would it be fair to assess them on pieces published before all the facts emerged? The parallel to science is clear, with the important caveat that scientific research is always evolving.

The tradition of published research predates Karl Popper, the philosopher of science, and I would argue that some aspects of the medium are contradictory to the way he argued science should be conducted. Popper argued in the first half of the 20th century that for a statement to be scientific, it must be falsifiable. Providing definitive proof of a statement is not logically possible, due to the problem of induction, and as such science should offer only falsifiable hypotheses that represent the best current understanding of a problem.

Other thinkers have added to this notion; I would particularly note Imre Lakatos’ contribution. In his essay Falsification and the Methodology of Scientific Research Programmes he suggests that

“Intellectual honesty does not consist in trying to entrench, or establish one’s position by proving (or ‘probabilifying’) it – intellectual honesty consists rather in specifying precisely the conditions under which one is willing to give up one’s position.”

If one is judged by the research one has published – and in particular by the number of citations that work receives – there is little incentive to state the precise conditions under which you’d be prepared to admit you’re wrong. In fact, it encourages the opposite behaviour, since a more entrenched idea will likely stick around longer and accumulate more citations. The essence of science (at least since logical positivism was largely discredited over the last 100 years) is to embrace being wrong in the search for a deeper understanding of the world at large, but this is certainly not mirrored in our publication model.

So how can we address this? Evolving scientific understanding could benefit from evolving accounts of science, moderated and curated by researchers. The internet provides us with a platform for continually updating our understanding, in a fundamentally collaborative way. Wikipedia is a clear example of just such a platform, where the state of the art can be continually adjusted and revised. A hypothetical ‘unified compendium of knowledge’ could operate and evolve much like the code base of a software system: changing in response to new discoveries, with archived versions showing the evolution of ideas.
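One way to picture an entry in such a compendium, sketched loosely on the version-control analogy (the class and method names here are purely illustrative, not an existing system):

```python
class Entry:
    """A minimal versioned knowledge-base entry: the current text is
    always the latest revision, and every prior version stays archived."""

    def __init__(self, topic, text):
        self.topic = topic
        self.revisions = [text]

    def revise(self, new_text):
        # A new finding supersedes the old text without erasing it.
        self.revisions.append(new_text)

    @property
    def current(self):
        return self.revisions[-1]

    def history(self):
        # The full evolution of the idea, oldest first.
        return list(self.revisions)


e = Entry("bedrock erosion", "Erosion rate scales with drainage area.")
e.revise("Erosion rate scales with drainage area and slope.")
print(e.current)
print(len(e.history()))  # → 2
```

A real implementation would of course need attribution, review, and conflict resolution on top of this; the point is only that supersession and archiving can coexist, as they already do in software.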

“But wait,” I hear many scientists interject, “what about peer review? How can we trust content that isn’t reviewed?” In response, I would again turn to the philosophers of science. Why should a one-time peer review guarantee the long-term validity of a study? Contrary evidence could arise at any later point (and this indeed is the problem of induction), and I would argue that instead of a one-time review, we should consider all work critically at all points, whether before or after peer-review. This attitude would naturally lend itself to a perpetually updated repository of knowledge.

One can imagine, of course, that this kind of project could rapidly stagnate; if researchers disagree, they could demonstrate the false nature of each other’s ideas without significantly contributing to the knowledge base. Here, Lakatos can offer some guidance. He suggests we should only consider a theory falsified if an alternative theory is provided that can both explain the existing observations and “predict novel facts” – i.e., improve on the prior theory.

This centralised model would (in my eyes at least) increase collaborative work, and since new theories would have to explain the existing observations, there is a built-in mechanism encouraging tests of the reproducibility of findings. Continual review and improvement would also be inherent to the system. Open access to data and methods would be necessary for this kind of model, and authors and contributors would need to be trained to state the conditions under which their findings would be falsified.

Metrics to gauge the contribution of individuals to this kind of project would not be dependent on how fast a given field evolves, but they would likely look significantly different to those we currently work with. However, the amount of data that would be generated would provide ample opportunity to assess researchers in a fundamentally different way.

It remains to be seen whether such a model would even be conceivable. Science changes slowly, and the objectives that funding agencies look for are not necessarily aligned with it; universities may not appreciate researchers sharing insight with competing institutions, much as commercial entities try to avoid corporate espionage. But if we genuinely value the advancement of science over local politicking, then these kinds of concerns should not prevent a shift.

Open Access & Industry funded research

Increasing public access to scientific research has become an important target in many democracies over recent years. Both researchers funded by government or taxpayer money and the taxpayers themselves have advocated for more results to be published where members of the public can access and use them free of charge. In the UK, for example, publications from research funded by the Natural Environment Research Council (NERC) must be ‘open access’ – freely available to the public (1). Elsewhere, the German Helmholtz institutes have recently announced they will cancel their subscriptions to journals from the publisher Elsevier, in part due to a lack of open access options (2).

These efforts are certainly having a clear effect on academic science; discussions of open access options are increasingly incorporated into how researchers decide where to publish their findings. However, even if every scientist working at a university with government funding published all of their papers with open access journals, this wouldn’t give the public access to all of the research and development taking place; in fact, it wouldn’t even be a majority.

It should come as no surprise at all that private business invests significantly in science, but I was personally shocked when I found out quite how much of it is privately funded. According to the OECD, an average of 60% of funding in the developed world comes from for-profit business (3), although this varies widely between countries (as low as 30% in Greece, but greater than 80% in Israel).

Does this privately funded research get published? I found it difficult to get decent statistics on the proportions, so I took a sample of data myself. I looked at the 87 papers published in the open access journal PLOS ONE on June 30th 2017, and checked the statements of competing interests to get a sense of which articles might have been funded by private institutions. This is where authors are obliged to list any conflicting financial interests, which broadly includes funding from commercial sources.

Of those 87 articles, only 8 declared any conflicting interests (the data are available in a Google document (4)). Naturally, the journal and sample set I used may not be representative, but the result matches my experience working as an editor at Nature Geoscience: published science is dominated by research that isn’t funded by private institutions, even though they provide the bulk of the financing.
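For a sense of how much uncertainty sits in a single-day sample like this, one can put a confidence interval around the 8-in-87 proportion; a rough sketch using the standard Wilson score interval (the function name is my own):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 8 of 87 sampled papers declared a conflicting interest
low, high = wilson_interval(8, 87)
print(f"point estimate: {8/87:.1%}, 95% CI: {low:.1%} to {high:.1%}")
```

Even at the top of that interval, declared commercial involvement sits far below the ~60% of funding that the OECD attributes to business, which is the asymmetry the argument rests on.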

“Of course”, an intelligent reader would say, “the incentives are different in industry.” True enough; publications can often seem like a goal in themselves for researchers, but commercial enterprise has other objectives, not least of which is turning a profit. Sending your rivals a project for peer review would be a disastrous way of handing your secrets to competitors; holding onto data preserves the edge that a business works hard to create. Moreover, review and publishing take time, further eating into tight margins.

The possibility that more than half of scientific endeavour is never published does seem disheartening for those who believe that science as a human construct is a collaborative system. Are there any arguments that could persuade corporate interests to release their information into the public sphere?

It seems clear that only the most philanthropic of corporations would want to make their new findings freely available. Indeed, we shouldn’t expect this to change. However, old research or redundant data that no longer affects corporate bottom lines could still benefit other researchers looking for alternative ideas or datasets. Consider, as a model, a pharmaceutical company with leftover stock of drugs past their patent expiry, and as such of lower commercial value. These drugs could be donated to countries unable to afford brand-new, state-of-the-art drugs, which could certainly aid the corporate image in the public consciousness. In such a scheme, data and results would not need to be formally written up, but even under a ‘buyer-beware’ system, useful information might be gleaned.

Image conscious companies would naturally only make up a fraction of all industry R&D. A purely profit motivated organisation would need other incentives, but here government could step in. Government subsidy is an important part of many industries, and we might envision that a quid pro quo for subsidy assistance would be to expect that some proportion of the R&D conducted by such firms would be made publicly available.

Only a few days prior to this post (at the end of August 2017) the UK government announced that £140 million would be provided as subsidy to encourage collaboration between academia and industry in the life sciences (5). If results from this collaboration are not subject to the same requirements as other UK government funded research, it would seem extremely hypocritical.

The benefits of offering industry data up to the public are not limited to scientific research. If governments are interested in holding corporate entities accountable for their actions, the R&D research would be a useful place to check. The example making the rounds in the science-environment media at the moment is that Exxon Mobil researchers were well aware of the risks of climate change, but executives didn’t communicate the potential threat (6). We know now, too, that tobacco companies engaged in similar behaviour decades earlier.

In a similar fashion, increasing pressure is now being placed upon pharmaceutical companies to publish the results of clinical trials (e.g. the AllTrials organisation (7)). Naturally, pharmaceutical companies have lobbied against these changes. It need not be said that implementing changes to the way industry shares research findings at a broader scale would be just as difficult, if not impossible; government oversight and corporate accountability are not strongly compatible with current laissez-faire economic models. However, the scientists working at the bench aren’t so different between academia and industry; in both areas researchers benefit from access to prior work, and so perhaps it is incumbent upon the researchers themselves to push for this kind of data sharing.


Information asymmetry in science & publishing

Suppose you want to buy a used car, but your knowledge of car maintenance is limited, and you need a car quickly. The dealership you visit has a range of cars, some better than others. Since you find it difficult to tell the difference between the good and bad cars, you might be inclined to lower your offer for any of the cars, to avoid overpaying for a bad car (a ‘lemon’) by mistake. If the dealer knows you won’t pay enough for a higher quality car, but can’t tell the difference, they then have a stronger motivation to try and shift one of the lower quality motors. This means that the better cars remain unsold. The effect stems solely from the difference in information about the product between buyer and seller.

This thought experiment is based on George Akerlof’s famous 1970 paper, ‘The Market for Lemons’ (1), in which he explored the concept and effects of asymmetric information in economics. In transactions where one party has different information to the other, adverse effects can occur; economists refer to this as an imperfect market, in which both buyer and seller often lose out.
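Akerlof's unravelling mechanism can be illustrated with a toy simulation (the quality range and the buyer's discount factor are made-up numbers, not from the paper): buyers offer a price based on the average quality they can't observe, the best cars are withdrawn, average quality falls, and the market shrinks round by round.

```python
import random

def lemons_round(qualities, buyer_discount=0.8):
    """One round of Akerlof-style adverse selection.

    Buyers can't observe quality, so they offer a discounted price
    based on the *average* quality still on the market. Sellers whose
    car is worth more than the offer withdraw it.
    """
    offer = buyer_discount * (sum(qualities) / len(qualities))
    return [q for q in qualities if q <= offer]

random.seed(1)
market = [random.uniform(1000, 5000) for _ in range(1000)]
for _ in range(5):
    market = lemons_round(market)
    if not market:
        break  # the market has unravelled completely
    print(f"{len(market)} cars left, avg quality {sum(market)/len(market):.0f}")
```

Each round the offer chases the falling average downward, so the simulated market collapses within a few iterations – the same spiral the essay goes on to suggest can afflict trust in published studies.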

This idea has gained traction among economists, who have linked problems in (for example) social mobility, or Obamacare, to imperfect information. In the sciences, however, we rarely think in such stark, transactional terms. This may be to our detriment: in the process of publishing science we are likely to encounter several points where different parties have varying levels of information about a given study, which could lead to poorer communication of facts and data.

The audience for a scientific study has a range of information at their fingertips – author affiliations, potential conflicting financial interests, for example – that enable a judgement to be made about the content of the work. This kind of meta-information provides a useful link between author and reader that can help provide trust in the work at hand, but there are other ‘meta’ aspects of research that are trickier to communicate; why have the authors chosen to write up this specific set of data, rather than any other findings? What, if anything, changed during the review process? These are potential times where an asymmetry in the information available about a scientific study could limit the trust which a reader could place in the findings.

Competition could encourage the omission of details where it would be of financial benefit to an individual or a corporation. If a reader cannot tell whether a study represents just the best results, their trust in a research project could be limited; research shows that many drug trials go unpublished or unfinished (2), so how seriously should we take those that are published? Without full information about unpublished studies, the value of published studies could be questionable across the whole field in question.

Who knew what, and when?

In a general sense, we can think about the asymmetry of knowledge between different parties involved in publishing; what do authors know that the readers (and to a different extent editors) don’t?

We can assume that the author tends to have a greater grasp of the information involved in a study than the reader; they make judgements about which data to include, and which aspects of their research should be written up completely. Few scientists could claim to have published every train of thought that led them to where they currently are in their careers. In most cases, data are selected because they are the most interesting, offer the best chance of success in a high-tier journal, or, in the simplest case, pertain to the hypothesis in question (why mention data you don't believe bears on the question you're asking?).

Even if the choices of experimental design and of which data to include in published papers are generally made in good faith, they can be difficult to explain to readers. The controversy surrounding the hacked emails of the Climate Research Unit at UEA highlights how the disparity between formal and informal communication in science can be misconstrued – in that case at great cost to public trust in science (3). Where readers sense that a backstory may have been omitted, the value they place in a given study may decline, regardless of the actual history of the article.

A culture that prioritises the publication of interesting research in higher-tier journals leaves less room for academics to give weight to the work that lies between these topics. Perhaps we should give scientists credit for keeping a public research diary of sorts, which could serve as an open archive of the direction in which they are working. This may be a harder sell where competition between research groups is a driving factor, but the flip-side is that it could actually foster a more cooperative research environment.

An even slower publication process

During submission, review and publication, there are a number of points that may induce an asymmetry of information. The editor naturally asks for expert opinion on the quality of an article through peer review – much as an antiques dealer would seek a valuation of a supposedly priceless heirloom to avoid fraud. In this way, the editor seeks to increase their information about the article, and can thus value it more appropriately; but where the referee isn't given sufficient evidence to make these judgements, the editor can be left blind. Here we see the value of providing all data to allow complete review.

However, there are other, more opaque parts of the publication process that could limit what each party knows. An editor must make a subjective judgement about whether a submission is suitable for their audience; if readers or authors are unaware of the rationale for these decisions, it may affect their impression of the final published articles.

Of course, publicising such details stands in contrast to the business models of many journals, and it almost need not be mentioned how much longer this would take overall. Should we advocate for a fully open publication process, at the expense of an even longer turn-around time for research papers?

Expediency or openness?

Where information is not evident to readers, it tends to be the result of processes that expedite scientific advancement; requiring a reader to absorb all the meta-information about a study (history, outliers, rationale, even the train of thought) would markedly increase the time required to understand a research field.

Should we then be weighing expediency against trust in science? In the present research environment, with questions hanging over the trust placed in the scientific endeavour, this is a valid question to ask. It may even be that such a slowdown would not materialise; with fewer repeated trips down blind avenues of study, and the potential for greater communication and cooperation, advancement could still occur swiftly, with a greater sense of trust from the readers and governments that may be funding our studies.




Volcanic Geology in North Western USA

To reach our chosen spot to watch the eclipse (Madras, in central-west Oregon) we took a road trip over a few days, from Vancouver and back to Grand Forks in British Columbia. Our outbound and return legs took us west and east of the Cascade mountains respectively, which offered some interesting insights into the different styles of volcanism that have shaped the landscape of the Pacific North West. In particular, it was striking to see the contrast between the much more recent volcanic activity at Mt St Helens and the ancient but vast deposits of the Columbia River Flood Basalts. While these are well studied and documented geological formations, I felt it would be interesting to write up some of these observations.

Road Trip Route

Mt St Helens

Even non-scientists will be familiar with Mt St Helens. An active stratovolcano close to population centres in the US was always likely to attract attention, but the hugely dramatic eruption in 1980 is well known as a prototypical volcanic disaster. In many ways, though, it was an unusual event; the ash and pyroclastic flows were the product of explosive decompression after the entire side of the mountain slid away.

The park rangers give a great analogy to describe what happened. Under the volcanic cone, magma was gradually building up – much of it full of volcanic gases. Imagine a fizzy drink bottle, full of bubbles. The magma pushed its way up towards the surface, and in doing so caused a number of earthquakes. The largest of these, just prior to the main eruption, caused the entire side of the mountain, made of relatively loose material, to collapse as a giant landslide. This hugely reduced the pressure on the magma inside the volcano, still full of gas; imagine that the fizzy drink bottle, having been shaken up, has now had its lid removed. You can imagine the resulting eruption!
This kind of collapsing mountainside-triggered event was unusual though; Mt St Helens was the first well-documented example. The resulting blast levelled trees all across the landscape; even today, these either lie where they fell or have been washed en masse into the nearby lakes.


Logs washed from the still nearly-bare hillsides into the lakes.

The mountain itself is still stunning, albeit without the symmetric character that led to it being dubbed 'Mt Fuji of the Americas' before the eruption. The giant crater and the smaller incipient ridges within it (produced since the eruption as magma pushes upwards into the crater) are a formidable sight. In the early Victorian period, rugged mountain ranges were often viewed as terrible, forbidding scenes, a perspective that contrasts with the more modern view of mountains as sites of awe and beauty. Mt St Helens manages to bridge that divide – at least for me. The impressive nature of the topography cannot be disentangled from the very human side of its eruptive history, in which dozens perished.

The Northern Aspect of Mt St Helens from the boundary trail, showing the crater and landslide deposit below.

Personal bias, a result of my own previous study of landslides, ensured that I spent a fair while considering the debris avalanche and the deposit that remains. In the image above, the whole foreground is dominated by the deposit, which is still the largest debris avalanche in recorded history. The 'hummocky', or lumpy, landscape results from great chunks of mountain that slid downhill, overlain by finer, loose deposits.

A couple of interesting aspects, from my perspective. First, the snow melt and flooding is clearly cutting quickly into the loose deposits every year; those canyons that can be seen in the centre of the image are being incised at high speed (I estimate on the order of metres per year). This could be a great set of field observations for sediment scientists, if it hasn’t already been studied!

Secondly, those familiar with my PhD work will know I looked at the way in which landslides can affect the amount of dissolved mineral elements in the water draining across and through their deposits. As such, seeing the largest landslide on record certainly piqued my interest as to the state of the water chemistry in the Toutle River, which drains the bulk of the deposit. It would be a cool test of concept if a time series exists! If anyone is aware of such a data series, please do let me know.

Mt St Helens is only one of the sporadically active Cascade volcanoes. Their form is in many ways similar; they stand proudly above the surrounding landscape by several thousands of feet, formed as individual cones. We saw a number of these (Mts Jefferson & Hood were visible from the vantage point where we watched the eclipse), and last month I climbed Mt Baker, the northernmost large volcano.

Mt Baker, northern aspect; at the peak, small volcanic outgassing can discolour the snow, testament to its continued activity.

The landscape over which these volcanoes tower bears the marks of another type of volcanic eruption; perhaps less obvious, but only because the hallmarks are thousands of kilometres from edge to edge.

Columbia River Flood Basalts


Volcanoes like those of the Cascades form as magma rises at the margin of colliding tectonic plates; many will have heard of the 'Pacific Ring of Fire', and these volcanoes are part of that system. This isn't the only way volcanoes can form, though. In some places, hot material can well up from thousands of kilometres deep within the Earth's mantle, and as it nears the surface it begins to melt, as the falling pressure lowers the melting point of this already-hot material. The molten magma then erupts through the plate above – as a 'hot spot'. This is the kind of volcanism we see today in places such as Hawaii.
These kinds of eruptions are generally less explosive than those at plate boundaries, but can produce vast quantities of lava; and that's exactly what scientists believe happened in Washington and Oregon some 14-17 million years ago. The huge quantity of erupted lava eventually inundated the landscape, in some cases to over a kilometre in depth, over nearly 200,000 square kilometres.
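For a rough sense of scale, a back-of-envelope calculation using the round figures quoted above is illuminating (these are illustrative numbers of my own choosing, not survey values):

```python
# Back-of-envelope scale of the Columbia River flood basalts, using the
# round figures quoted above (illustrative, not precise survey values).
area_km2 = 200_000   # area inundated by lava, ~200,000 km^2
thickness_km = 1.0   # locally over 1 km deep; used here as an upper bound

volume_km3 = area_km2 * thickness_km  # implied lava volume, km^3

# For contrast: the 1980 Mt St Helens eruption produced on the order of
# 1 km^3 of material.
equivalent_eruptions = volume_km3 / 1.0
print(f"~{volume_km3:,.0f} km^3 of lava, or ~{equivalent_eruptions:,.0f} "
      f"eruptions the size of Mt St Helens in 1980")
```

Even allowing for a much smaller average thickness, the implied volume dwarfs anything the Cascade stratovolcanoes have produced.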

And so, as we drove across this landscape, we saw outcrops of this lava exposed almost everywhere. The lava cooled into basalt, and as it did so it cracked in very distinctive ways – forming 'columnar joints', much like the famous Giant's Causeway in Northern Ireland.

Columns of Basalt, near the Columbia River Gorge, North Oregon.

The landscape created by these vast eruptions is primarily a flat one, but rivers have incised deep gorges into the lava flows in many places, such as seen here near Warm Springs, Oregon:

Canyon cut by river into the thick lava deposits, in the Oregon Desert.

We drove for nearly a thousand kilometres through and on top of these lava flows, and that, more than anything else, gave the best sense of the size of these eruptions. The columns of basalt, which for a while seem monotonous in colour and form, become more and more astonishing as it dawns on you that this is among the largest volcanic structures on the planet. The volcanoes to the west suddenly shrink in the mind's eye, even though the tallest (Mt Rainier) stands over 4 km above sea level.

Notes from the Eclipse

(N.B. this was written in two parts, the day before and the day after the total eclipse. These parts are labelled accordingly)

20/08/17, 16:24 PST: August has been an unexpectedly busy and scattered month for me. I had anticipated writing a number of pieces, and while several of these are either soon-to-be-published or ready to send off for consideration, a number of short trips have made it somewhat more tricky to write and research every day. At the time of writing I’m on another excursion (although this one has been planned for a while) to watch the total solar eclipse taking place in Madras, Oregon, on the 21st of August.

‘The Great American Eclipse’, as it is being referred to, will cross the mainland United States from central Oregon in the West all the way to South Carolina in the East. The number of people caught in the shadow of the moon will be unprecedented in the modern age, and unsurprisingly a huge number of people have been making the effort to drive somewhere where the eclipse will be total.

We have travelled to Madras, Oregon, where the climatic conditions indicate the lowest chance of cloud in the country. 100,000 others are expected to join here, all camping in the ‘Solartown’ that has been set up specifically for the event. Madras is a town of only 6,000, but the anticipated apocalyptic traffic tailbacks didn’t materialise; this may not be the case as visitors try to leave en masse tomorrow after the eclipse!

22/08/17, 22:20 PST: The clouds and smoke conditions were in our favour – we were able to see the total eclipse clearly, and it was truly a stunning event. I had been prepared for the dance of sun and moon together to be striking, but the effect on the surroundings was perhaps more memorable; time will tell.

Other observers suggested that we should talk through what we were seeing, from first contact all the way through totality until the sun returned at full strength. In the hour before totality we did just that, watching as the sun was gradually eaten into by the moon, noting minor changes in brightness and temperature. In the few minutes around totality, however, it was hard to keep up; changes came thick and fast.

Not only did temperatures drop noticeably, but the light level fell so fast that one could see it shift second-to-second. And then, amid cheers from the thousands of others around, the moon finally obscured the orb of the sun completely; the wink of the diamond-ring-like 'Baily's Beads' was clear from our vantage point. We were transfixed by the blackened orb and the ring around it for a few seconds, but it quickly became clear that there was visual magic happening elsewhere too.


We were fortunate with our location that we could see several of the volcanic peaks that make up the Cascade mountain range, including mounts Jefferson and Hood. The latter of these peaks was outside the zone of totality, and while we were completely shaded by the moon we could still see it lit in the near twilight; further into the distance were other red-shaded peaks that had hitherto been hidden behind haze. The whole horizon, in fact, was lit as if sunset was happening all around us; the transition between the deep blue-black sky above, with pinpoints of stars and planets, and the 360° horizon lit to near-scarlet was genuinely moving.

A quick aside to give a simplified explanation of what was happening: when sunlight hits the atmosphere, some portion of the light is scattered by the air molecules (Rayleigh scattering). This acts more strongly on the blue part of the sun's spectrum of light – which gives the sky its blue colour during daytime. In the evening, the angle of the sun is such that direct light must pass through more of the atmosphere, which means more of the blue light is lost, and the result is that we see mainly the red light from the sun. This gives the emotive colours we attach to the sunset.
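For the numerically inclined, the strength of the effect follows from the fact that Rayleigh scattering intensity scales with the inverse fourth power of wavelength; a quick sketch, using representative wavelengths of my own choosing:

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so shorter (blue)
# wavelengths are scattered far more strongly than longer (red) ones.
blue_nm = 450.0   # representative blue wavelength, nm
red_nm = 650.0    # representative red wavelength, nm

ratio = (red_nm / blue_nm) ** 4  # relative scattering strength, blue vs red
print(f"Blue light is scattered ~{ratio:.1f}x more strongly than red")
```

A factor of roughly four is why the scattered sky appears blue, and why light that has travelled a long, slanted path through the atmosphere arrives reddened.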

In this case, the red light from the horizon wasn’t coming directly from the sun, which was covered above. Instead, this was diffuse light from all around; the longer path that this diffuse, reflected light had to take gave it the red colour.

Despite the dry, scientific explanation, I think there’s something truly amazing at play here. It only struck me afterwards, but this diffuse red glow on the horizon is always there when the sun is up; it’s normally hidden by the bright light of the sun, but one could poetically say that those sunset colours that prompt such emotional response in many people are always glowing.

The lit horizon below the obscured sun


The two minutes of total shadow were certainly fleeting; some observers stood silently, a few set off fireworks, while many worked frantically at camera equipment. The emotional effect was clear, as the gasps and cheers of 100,000 or so were anything but subtle. One enthusiastic watcher near us began to clap as the moon shifted away and the sun reappeared; he quickly stopped, though – who was he applauding?

Nature has afforded us on Earth a rare set of circumstances. The balance between the size of the moon and the distance to the sun is almost perfect; a larger satellite would still obscure the sun, but the ring of solar prominences would be less visible, and the 'sunset horizon' would also be absent. A smaller moon wouldn't throw such a large shadow on the Earth's surface, and the stars wouldn't be visible at 10.20 in the morning.
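That near-perfect balance is easy to check with a little trigonometry; the sketch below uses approximate mean diameters and distances (my round figures, not precise ephemeris values):

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular diameter, in degrees, of a body seen from afar."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Approximate mean diameters and distances
sun_deg = angular_diameter_deg(1_391_000, 149_600_000)
moon_deg = angular_diameter_deg(3_474, 384_400)
print(f"Sun: ~{sun_deg:.2f} deg, Moon: ~{moon_deg:.2f} deg")
```

Both come out at a little over half a degree; and because the moon's orbit is slightly elliptical, it sometimes appears fractionally smaller than the sun, giving an annular rather than a total eclipse.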

Awe and wonder at nature seem to be part of the human psyche. Very mundane parts of the human condition were on display only a minute or two after the totality subsided, however, as thousands tried to beat the traffic out of Madras, even while 95% of the sun was still obscured. I think it would be hard to forget those two minutes of darkness, though; it’s more than the sum of the parts involved in terms of the celestial mechanics. Offered the opportunity again, I would jump at the chance to see another.


Can we still shift paradigms?

One of the most painfully overused phrases in science is ‘paradigm shifting’. The roots of the term as used in a scientific context come from the philosopher Thomas Kuhn, who utilised it in his model of scientific advancement. While researchers have interpreted Kuhn’s work in different ways, a general sense of the model is as follows:
Science proceeds under a paradigm of knowledge, methods, and techniques, which together define a kind of overarching global perspective. As scientists continue to accumulate knowledge, anomalous results begin to build up, until they are no longer explicable as mere errors under the existing paradigm; once the community of scientists accepts that these anomalies demand a new global perspective, science undergoes a 'paradigm shift' to a new framework of knowledge, approaches and methods.
In my brief editorial experience, it seems like many researchers are big fans of this term – many use it in cover letters to suggest that their work is valuable and significant. I don’t intend here to either question the model of scientific advancement suggested by Kuhn, or to debate the various merits of the supposedly paradigm-shifting work submitted by different authors. No, here I’d like to contend that the modern relationship between science and publishing makes a genuine paradigm shift in the context described by Kuhn rather more difficult; much more difficult, in fact, than one might believe based on the frequency at which the term is used in the media and in cover letters.

Two factors are at play here, I think. First, the critical process of peer review means that anomalous results are potentially more likely to receive intense scrutiny, making it ever harder to publish work that might significantly undermine the existing core perspective. Secondly, since the number of publications tends to be an important metric by which academics are judged, there is an incentive to break radically anomalous findings down into smaller publishable pieces. While individual small publications can still add to the body of anomalous results, for researchers with a grand, game-changing idea the lure of multiple papers might outweigh the hard work of building a large case for a new mode of thinking that cannot be supported under the existing framework.

Peer review is considered a vital part of modern science, but when Kuhn published The Structure of Scientific Revolutions in 1962 (in which he proposed his model of scientific advancement), peer review was only beginning to be formalised. As part of the post-war scientific boom in the West, peer review was becoming increasingly important for securing funding (1), but it is notable that many of the paradigm shifts considered by Kuhn, such as the shift from a Ptolemaic view of planetary motion to a heliocentric one, were based on science prior to the advent of review. Galileo and his contemporaries were able to publish without first getting their results past peers who may have had a personal bias against such radical ideas. I'd suggest it's worth asking how easy it is today to persuade referees of a novel idea when they are working within an existing paradigm.

The second point is arguably more subtle. I think it’s fair to say that even a short contribution can dramatically change the way we think about the world, but making a case for a dramatic shift in scientific frameworks can require a large body of evidence. The Origin of Species is not a short book; Charles Darwin used a vast range of examples and data to build his case, and in combination they provide a new framework for understanding life on earth. The modern publishing incentives seem unlikely to encourage such large compilations, however. Judgement of researchers based on the number and citation count of their publications encourages splitting of projects into smaller parts (or even, derogatorily, ‘Least Publishable Units’), while simultaneously discouraging scientists from putting out anomalous results without context, which would be unlikely to achieve high impact (and as suggested above, may have trouble getting through review). I’d suggest that The Origin of Species as a series of small papers would be unremarkable until the final short-format piece that linked it all together; I’m unsure whether this would be a successful way to build a career in the modern academic environment.
Younger researchers are encouraged to publish more (and thus potentially split their work up more), and may therefore be more prone to this kind of effect. Older, more experienced scientists may have more intellectual and emotional capital invested in the framework within which they have spent their careers working; the tendency to promote game-changing suggestions may thus be more limited amongst more established researchers; I’d love to hear counter-examples, though.

A positive suggestion to address these aspects might be to emphasise the importance of conferences! There, unrefereed work can be judged by a broader community, and anomalous results presented concurrently, by researchers of all ages and backgrounds. With plenty of discussion and open-mindedness, these should serve as highly productive ground for giant leaps in our understanding of the world around us.

Perhaps these factors do not, in reality, limit the progression of science under Thomas Kuhn's model. I do think, however, that it's worth questioning the modern understanding of the term, especially as science has changed so much in the past decades. When some studies suggest that global scientific output is doubling roughly every nine years (2), we should consider whether our models of its advancement are still valid.
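A doubling time of nine years, incidentally, corresponds to a deceptively modest compound growth rate, which is easy to back out:

```python
# If scientific output doubles every 9 years, the implied compound
# annual growth rate is 2^(1/9) - 1.
doubling_time_years = 9
annual_rate = 2 ** (1 / doubling_time_years) - 1
print(f"Implied growth: ~{annual_rate:.1%} per year")
```

Roughly eight per cent a year, compounding, sustained over decades.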


1: Csiszar, A. Peer review: Troubled from the start (2016).

2: Bornmann, L., & Mutz, R. Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references (2014).

Taiwan Fieldwork & Recent Update

I recently returned from helping out a colleague from my old department at the GFZ (where I worked for my PhD project) in the central mountains of Taiwan. Essentially, we were working to collect samples to answer some of the outstanding questions from my PhD work; several aspects of how the physical parameters of landslides affect net weathering remain unclear, and so I was asked to help Dr Aaron Bufe – a postdoc in my old group – in addressing some of these issues.

Giant Landslide in the Chenyoulan River, central Taiwan. More details to come!

With these in mind, we looked to sample a diverse range of landslides in central Taiwan, but while the initial sampling worked out well (see photo), we ended up caught in a huge (and unusual) storm system, during which well over a metre of rain fell in around 48 hours. As a result, rivers and roads became essentially impassable in many parts of the catchment in which we were working, severely limiting access to many of the sites we had hoped to reach.

It was, however, a fascinating experience, and it really made me appreciate what intense rainfall entails in tropical regions. In fact, the whole trip offered some fantastic opportunities to learn about life and geomorphic processes during extreme weather events, which I am putting together in a longer-form post (incorporating some of the roughly one hour of video footage I took while I was there) for publication in the near future. In the meantime, some short clips on Twitter may be of interest:

With much more to come soon.

As well as fieldwork, I have been busy writing, both academically and in a more science-communication capacity. A revised version of my third PhD paper has gone back to the journal, and I have had two new pieces published recently. The first is an exhibition review which I wrote while working at Nature Geoscience, on the recent "Volcanoes" exhibition at the Bodleian Library in Oxford:

Currently this is behind the Nature Geoscience paywall – please do contact me for a copy if necessary.

I also wrote a piece for Atlas Obscura on my recent visit to the Millennium Seed Bank, run by Kew Gardens:

I’m hoping to flesh out some of the details in these pieces within this blog when time allows. Finally, I’m excited that tomorrow morning another of my articles will be posted on the EGU’s lead blog page – Geolog – discussing my recent editorial experience.

All of this has been a lot of fun, and I’m just as excited to have a chance to settle down for a couple of weeks to write it all up, and tell some stories.