By Daniel Tarade
I recently wrote that our system of scientific publishing is broken, having been corrupted by a capitalist model that places undue weight on prestige, sexiness, and arbitrary metrics of importance. A brief recap. The majority of science is published by for-profit journals that charge obscene rates both for publishing with and subscribing to their publications. Academics are most often publicly funded. Much like driving across the border between two countries that drive on opposite sides of the road, chaos ensues where the two systems meet. Scientists often give up copyright to their own work, the public is prevented from reading the advancements they funded, and editors at for-profit journals have become gatekeepers of what counts as important science. It is fucked up. And, like most broken systems, it could change overnight. In the words of John Lennon, we could have [proverbial] peace today, but we have to want it. However, scientists have co-opted industry standards for important science, and advancing one's career necessitates playing the game. A combination of indoctrination and outdated hiring practices has sustained this parasitic publishing system. Still, although most science is published by for-profit journals, different models of publishing are available. This essay describes several approaches to scientific publishing as an exercise in moving towards an ideal mechanism of scientific dissemination.
One of the complaints about the traditional publishing system is that data is kept behind paywalls. So not only do researchers pay for publication, but their own institutions have to pay insane subscription fees to access the very same research. For the public, accessing a single manuscript often costs over $50. The same research that you may have funded with your taxpayer dollars. Of course, libraries cannot afford to subscribe to every single journal either. I once published a short article in Mini-Reviews in Medicinal Chemistry. Neither of my institutions had a subscription to the journal, and the editors never provided me with a PDF copy of the final, typeset manuscript. If you want access to this manuscript, it will cost you $63! So, despite paying over $1000 to publish with this journal, I had to resort to pirating my own manuscript. Of the many despicable aspects of traditional publishing, closed-access publication will most likely be the first to fall. A recent analysis of the prevalence of open access in scientific publishing concluded that roughly a quarter of all scientific literature is freely available and that half of newly published work is open access.[i] This positive trend has been spurred by major funding agencies in North America and Europe (NIH, Wellcome Trust, European Commission, the Bill and Melinda Gates Foundation, US National Science Foundation) that require any funded and published work be made freely available. Most often, scientists have to pay an additional fee to offset lost profits; our recent publication in an open-access journal cost around $9000. Open access makes sense, as much of the scientific research in the world is funded according to a socialist model (i.e. taxes). If open access is an ideal most scientists can agree upon, perhaps it is time to cut out the middleman.
Perhaps I want too much, but for me open access is just the beginning of rectifying the situation. Another issue I talked about last week is that the high-stakes, profit-driven world of scientific publishing means that most journals won't publish work if someone else has already published a similar manuscript. This is because the second manuscript, even if the work was completed contemporaneously, will not be cited as often, and for-profit journals are under no obligation to hurt their impact factor. As a result, many scientists are secretive about their work, refusing to discuss unpublished results at conferences, regardless of how important the conclusions may be for the field. The secrecy allows redundant and misguided research to fester, all in the name of scientific competition. However, the concept of pre-print servers has become more popular, particularly in the physical sciences. The basic idea is that, rather than waiting months or years to successfully navigate peer review at a traditional journal, a version of your manuscript can be disseminated directly to the masses. The benefits are clear. Scientists can lay stake to a claim more quickly, while other scientists in the field can more rapidly try to replicate findings or modify their own research questions accordingly. The strategy is definitely more transparent. Further, publication via a pre-print server is not mutually exclusive with peer-reviewed publication; most articles indeed go on to be published in a more traditional journal. bioRxiv, the most popular pre-print server for manuscripts in the field of biology, assigns every manuscript a DOI that stays with it even after transfer to another journal, allowing for an accurate record of citations. The concept of multi-stage publishing will be important as scientists cultivate a more open approach to exploring the universe.
More problems still! With editors playing the role of gatekeepers, scientists have internalized concepts of what is publishable. Most generally, scientists will focus on novel, unexpected, or so-called positive results. A positive result is an arbitrary concept, but it generally refers to a result that does not nullify the hypothesis. As a result, research showing that a protein does not promote cancer (despite literature that might suggest otherwise) or that a drug does not inhibit viral infection (despite a promising hypothesis) might never be published. Either the manuscript will be repeatedly rejected by for-profit editors, or researchers will not even bother trying to publish. You see, scientists are self-conscious. Often, we would rather retroactively change our hypothesis or hide our results than appear wrong. The conditions are such that important information is not disseminated and pressure mounts to find exceedingly interesting results, even when such results should make our Bayesian senses tingle. The conditions are also in place for outright fabrication of data. So what can a journal do? The journal Cortex has established the concept of a registered report.[ii] Before even conducting a study, scientists are encouraged to register their study design with the journal. Rather than sending results out for peer review, the research questions, hypotheses, and study design are instead subjected to scientific scrutiny. Studies that address important questions in an appropriate manner are accepted in principle before the study is commenced. As long as the study is conducted in the pre-approved manner, it will be published, even if the hypotheses are disproved. In such a system, fabrication and dishonest reporting are disincentivized, and negative findings can be distributed.
A different strategy also exists for promoting replication studies and the reporting of all science, not just those studies deemed "important" by those with vested financial interests. Journals such as PLoS One (PLoS standing for Public Library of Science) not only offer open-access publishing but evaluate submitted research solely on the basis of scientific rigour. Rather than asking whether work is interesting, an exceedingly subjective question, editors and reviewers ask how well the conclusions are supported by the data provided. The first manuscript I ever published was in PLoS One, and it remains one of the most rigorous scientific processes I have endured.[iii] Three rounds of revisions led to substantially tempered conclusions, an expanded discussion, additional control experiments, and a changed title. We had submitted the manuscript to four other journals, all of which rejected it at the hands of editors who stated that the work was not of sufficient interest. And I agree. I hypothesized that the chemotherapeutic drug combretastatin A4 may have additional targets within a cell aside from the cellular skeleton, its classically understood target. The results of our experiments would have implications for how future combretastatin drugs are tested pre-clinically. Ultimately, we did not uncover information that supported our hypothesis; most of our manuscript actually belonged to the category of "negative data." Yet others in the world may have had a similar hypothesis (I didn't just conjure up the idea from thin air). One can foresee a scenario where scientists logically arrive at similar hypotheses, which when tested are nullified and, thus, never published. Rather than one or two studies being published and laying the idea to rest, generations of scientists continue asking the same question and burying the same data.
The last journal I would like to discuss is eLife. Unlike PLoS One, eLife does still evaluate work on the basis of 'importance', but it is markedly different from other journals. eLife is open access, non-profit, and refuses to advertise its impact factor. In doing so, the journal tries to uncouple arbitrary industry standards of importance from a manuscript's promise to advance a field. One reason eLife succeeds in this regard is that all manuscripts are handled by active scientists who are experts in their respective fields. Rather than receiving a rejection letter that simply states the work is not of sufficient interest (and I have received my fair share of those), a scientist will discuss the strengths and weaknesses of the work. Although I prefer a publishing system that is entirely uncoupled from concepts of importance, this is a suitable intermediate; let scientists be gatekeepers, not editors at for-profit journals. However, eLife has been experimenting with other publishing strategies as well. One frustration with traditional publishing is how long it takes to navigate peer review, with multiple rounds of revision meaning work can take longer than a year to be published, if it isn't ultimately rejected. To circumvent this delay in the dissemination of findings, eLife is experimenting with accepting all manuscripts sent out for peer review, regardless of how reviewers receive the work.[iv] Authors have full discretion over how they respond to reviewer comments; they can perform additional experiments, revise their conclusions, or withdraw the manuscript. However, upon submitting their revised manuscript, it is guaranteed to be published (along with the reviewer comments). It is a remarkably transparent system that is cognizant of the fact that peer review is not the be-all and end-all of scientific discussion. Peer review is merely the first step, and to delay publishing over a pedantic back-and-forth is silly.
In the experimental eLife publishing stream, manuscripts are also evaluated on the basis of how well the authors responded to reviewer comments and suggestions. Expedited and transparent is a good look for science.
The ideas above largely address the many issues with for-profit publishing but are integrated piecemeal. Scientists are also hesitant to publish in journals like PLoS One because of its lower impact factor and its reputation as a journal where people dump their uninteresting research. Many scientists praise the idea of journals like PLoS One in one breath while recommending against publishing in them with the next; such a move may stall your career before it even begins, they say! In reality, these journals are best thought of as the individual who chooses to bicycle to work, religiously recycles, and does not eat meat, all in an attempt to mitigate their carbon footprint, while they are routinely cut off by large trucks on their commute and unironically called soy boys by their co-workers. Good intentions are not enough when the entire scientific enterprise (or planet) is in jeopardy.
Postscript on Scientific Piracy
As you can imagine, many people are upset with the idea of closed-access publishing. Enter Sci-Hub. Much like Pirate Bay, but with the moral high ground, Sci-Hub allows people to access millions of articles, even those that are closed access. I routinely pirate scientific articles hidden behind paywalls, including many cited on this blog, rather than access them via the University of Toronto library. It's the principle.
[i] Piwowar, H., Priem, J., Larivière, V., Alperin, J. P., Matthias, L., Norlander, B., ... & Haustein, S. (2018). The State of OA: A large-scale analysis of the prevalence and impact of Open Access articles. PeerJ, 6, e4375.
[ii] Chambers, C. D., et al. (2015). Registered reports: Realigning incentives in scientific publishing. Cortex, 66, A1-A2.
[iii] Tarade, D., et al. (2017). Structurally simplified biphenyl combretastatin A4 derivatives retain in vitro anti-cancer activity dependent on mitotic arrest. PLoS ONE, 12(3), e0171806.
[iv] Patterson, M., & Schekman, R. (2018). Scientific Publishing: A new twist on peer review. eLife, 7, e36545.