This is the first post in what I anticipate will be a series on ‘things I’m learning about science from the publishing side’. For context, I’ve just finished a four-year Ph.D. project, and while the institute where I worked aimed to deliver societal benefit from its research, my personal project was somewhat less applicable to most people’s day-to-day lives. So, despite my abiding interest in the intersection of science and society, I’ve had relatively little exposure to these links, and my current work at Nature Geoscience is offering new and interesting insights, some of which I’d like to highlight here.
An interesting article crossed my path recently, in which the authors discussed what makes ‘excellent’ science (1). “Centres of excellence” and “excellence in sourcing funding” are phrases that get thrown around regularly, but they can seem rather meaningless when set against the idealised model of science taught to school children. So what is excellent science? The article makes a number of useful points and I’d recommend it to anyone interested, but here I want to discuss a related idea that I’m becoming more aware of at the moment: the sense that funding, publication and prestige are circularly self-reinforcing, and that none of these is necessarily linked to scientific progress. In doing so, I’m going to argue that the way we assess science right now is as much a value judgement as the way we assess sport.
To start, I’d like to suggest that good scientific work could be defined by clear, testable hypotheses, reproducible methods, transparent error analysis, and conclusions that can be used to make predictions for future work. I’m sure that definition could be improved, but arguably those elements are key to any successful study. Importantly, nowhere in this definition is there any link to previous studies, or any judgement of whether the topic is a worthwhile endeavour.
This clearly isn’t enough to define ‘excellent’ research, however. The choice of topic is always a value judgement; a funding body might ask, for example, what the benefit is to national or international interests. Those judgements have to be informed by prior experience, and much of that comes down to well-received or highly cited publications. This is where editorial work comes in: papers are accepted if they are interesting or represent a significant advance, and at higher-tier journals they must be interesting to a wide audience. Editors therefore also make value judgements about whether research is interesting, based on previous research. Moreover, prestige attaches to authors through their most impressive publications, and money for further research follows. I worry that this kind of feedback loop might leave gaps in our overall knowledge, or at least lead us down blind avenues of research.
Think of the funding of British Olympic sports. Sports that were successful in 2008 or 2012 received more funding for 2016. This led to a record medal haul in Rio, but some sports definitely lost out. One might wonder whether uniquely gifted athletes in the underfunded sports missed out for the greater good of the overall medal total. The same might be true in science: hard work by talented researchers in fields that have never previously produced ground-breaking papers might be ignored, even if it is revolutionary, while fashionable fields produce increasingly marginal gains.
I also have a sense that this leads to individual research fields becoming ever more narrowly defined. A common description of a Ph.D. project is that you push the envelope of human knowledge a tiny bit further out (2), but if that notional envelope is pushed out again and again in one specific place over successive studies, because that is where the interest has concentrated, then unaddressed gaps will open up between those places.
A slightly tenuous analogy: returning to the Olympics, the 100m world record has improved incrementally over the years, but even with the world population growing exponentially, each improvement in the record has tended to be smaller than the last (unique athletes like Usain Bolt excepted). Each prospective runner has to beat everyone who has gone before.
Imagine instead that you wanted to run a 137m race at the Olympics. No-one would care, because it’s a race with no historical precedent (even though it’s just another distance), but you could go and set a world record immediately! This is obviously a stupid example, but what about research topics that might be similarly low-hanging fruit? Surely they haven’t all been exhausted?
To me, it comes down to how we want science to progress. On the one hand, it could look like the Olympics: a defined set of events, with only a few elite institutions able to field competitors who can beat everyone else. On the other, it could look like the Guinness Book of Records, with innumerable questions not necessarily linked to one another. The more chaotic second model certainly has its issues, but if there are benefits to be reaped from applying it, then I don’t see why we should weight previous ‘excellent’ research nearly as heavily as we do.
What does this mean for publishing? I’d suggest that authors writing cover letters focus on what we can gain from their research, rather than on why the field in question is a ‘hot-button’ one. I appreciate that in many cases close links to societal benefit are what make a topic fashionable, and that’s great – but societal benefit should be assessed on the merits of the individual study, not on the previous work. Just a thought!
(1) Moore, S. et al. ‘Excellence R Us’: university research and the fetishisation of excellence. Palgrave Communications. http://www.palgrave-journals.com/articles/palcomms2016105