
On the shifting paradigm for research literature

Background. The way research progress is shared and published has been changing over the last decades, moving away from the old paradigm built around the fact that publication had to be on paper. The transformation is remarkably slow, however. We are clinging to basic concepts that were natural for the old paper model (e.g. an enormous number of journals) but now offer little or no advantage. Among other reasons, the observed inertia is explained by the fact that the traditional business model for scientific literature is ideal for the established publishers. This model has been discussed by many (regularly in The Economist, for instance): publicly funded researchers produce the content and most of the quality control, and publicly funded libraries pay juicy subscription fees so that researchers can access the research of others. Add to this the partly monopolistic character of the business (the author of an article can choose where to publish, but a reader of that article cannot choose where to read it, so a library needs to subscribe to all credible journals), and you get an ideal business with enviable profit margins.

Of course, this somewhat cynical depiction is not fair to the many responsible publishers that have loyally served different academic communities for decades, many of them based on learned societies like the APS. It is, however, enlightening to remember the story of Robert Maxwell, the publishing tycoon of the 1960s through the 1980s and founder and developer of Pergamon, which was later bought and absorbed by Elsevier (see e.g. this article in The Guardian). I would summarise his model with the (admittedly provocative) motto: anything that moves, give it a journal!

The Open Access movement came about in reaction to this business model, with the aspiration that the research literature should be accessible to everyone. This aspiration, however, demands a shift in business model. The model that has appeared in response to the drive for open access (“gold” open access) has authors paying a one-off, upfront publication fee per accepted paper, after which the publisher makes the final form of the paper accessible (hopefully) in perpetuity to everybody with internet access. It has its own problems, however. “Predatory journals” have mushroomed in great numbers, offering publication for a fee, which generates handsome profits if all they do is hang virtually every paper, as received, on a web server. Serious journals, by contrast, struggle to survive even with quite high fees, because the gold open access business model is at odds with selection on “importance”: the fee is charged per paper published, while a substantial part of the costs scales with the number of papers processed. That is the main reason why many of the most prominent journals are not very happy with the prospects opened by open access, and why many successful researchers who built their reputations by publishing in those journals are not too happy either.

This resistance has become apparent with the arrival of Plan S, a proposal by major research funders to impose open access publication for every result of the research they fund. Similar initiatives have already been spearheaded by specific funders, but Plan S represents a dramatic shift given the very wide range of funding agencies behind it and, especially, given its opposition to hybrid solutions – journals offering authors gold open access to their papers if they pay, while keeping subscription fees for the overall journal. I think Plan S represents a great opportunity to rethink the model.

New paradigm

The thirty-year history of arXiv.org has demolished many myths around academic publishing. The arXiv is not a publisher in a strict sense, though: as a preprint-sharing service it does not commit to curation or to maintaining access in perpetuity, for instance. Yet several communities (mostly around mathematics and theoretical physics at first) found in that service essentially everything they needed for their work, the actual publishing becoming secondary. The arXiv, with its very lean editorial process, explicitly demonstrated the opportunities offered by the web for a fundamental revision of the academic literature. Most importantly, it showed that this can be done entirely within the academic community, thereby reclaiming the ownership of research.

Other initiatives have appeared along these lines, aiming to complete the publishing paradigm with proper journals, including a long-term access commitment and indexing in the major literature databases. There are several worthy initiatives, but let me just mention PLoS ONE, of the Public Library of Science. It is fully open access, using the author-pays model described above, for quite a moderate fee. It is based on the academic community, supported by academic libraries and research funding agencies. It uses a slightly more demanding editorial scheme than the arXiv, whereby, in addition to a semi-automatised typesetting and formatting check, editors and referees verify that each paper appears to be technically correct and represents at least a minimally new contribution to its field. Two key characteristics of this and similar journals: (i) it does not judge the importance of a paper, and (ii) it is meant for all research fields (that is what the ONE in the name stands for). It is a bold move rebelling against the old paradigm, using modern technology plus a clever use of tags and metrics. As such, it is a completely indiscriminate journal, with the discrimination provided by the tags and metrics.

Discrimination and metaliterature

Discrimination is needed, no question about it. In most research fields, the sheer number of papers appearing every day makes it absolutely impossible to keep abreast of what is happening without the discrimination in “importance” provided by today’s journals. Filtering down to a manageable number of papers by field tags or keywords is not enough: we also want to know of important contributions in other fields. It is true that time provides good measures of importance for papers, and good metrics can provide the needed discrimination, as proposed by PLoS ONE and others. But that discrimination only works for “old” literature, papers that have been around for at least a year or two. We also need discrimination of the recent literature, which is currently provided only by the old paradigm, through its highly selective journals.

It is important to recognise, however, that the needed discrimination provided by journals is information about the papers that does not affect the papers themselves. It is metadata, metaliterature. In fact, this kind of discriminating metaliterature is arguably the most important service offered by journals (actually by their names!). But we should be able to do it much better. The old model is wasteful and has undesired side effects: you send your paper to journals of different levels in succession to see at what level you can slot it in, going through editors and referees at every step of the way. An evaluation process deciding on a grade, instead of a series of yes/no decisions, would clearly be better, even allowing for the degree of randomness intrinsic to any refereeing process. Of course, provision should be made for suitable appeal processes, including starting afresh on occasion (one can even think of systems in which acting as a referee gives you credit to request more refereeing for your own papers). The key point is to think of the “importance” level as a bit of information provided in the editorial process to help with recent-literature discrimination, nothing more. Having this categorisation as a searchable tag, instead of in the name of the journal, would help against the growing impact-factor-fascination syndrome (IFFS) affecting the evaluation of research and researchers (also called impactitis or impact-factor-myth syndrome, at least in dermatology).
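To make the idea a little more concrete, here is a minimal sketch (in Python; the record fields and the 1–4 grading scale are my own illustrative assumptions, not an existing schema) of what a paper record could look like if the “importance” grade were just another searchable tag, usable for recent-literature discrimination alongside field tags rather than being encoded in a journal name:

```python
# Hypothetical sketch only: field names and grading scale are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class PaperRecord:
    doi: str
    title: str
    fields: List[str]            # topic tags, e.g. ["cond-mat", "dft"]
    published: date
    grade: Optional[int] = None  # editorial "importance" grade, e.g. 1 (routine) to 4 (highlight)

def recent_highlights(papers, since, min_grade=3):
    """Discriminate recent literature by grade tag rather than by journal name."""
    return [p for p in papers
            if p.published >= since and p.grade is not None and p.grade >= min_grade]
```

Nothing about a paper itself changes when its grade does; the grade lives entirely in the metadata, which is the point.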

Importance, interest, impact, and leadership

I have been using the term “importance” in quotes, with the pragmatic definition that whatever was considered important in the old model to assign a paper to a journal can still be used in a grading model. A clear problem of the old paradigm, however, is that the model itself shapes our perception of importance. We all know that deciding on the importance or interest of a piece of research, especially a recent one, can be quite subjective. I am not denying an intrinsic component to the value of a topic or of a particular contribution, but it is also undeniable that its impact is affected by other factors. Topics perceived as important for whatever reason attract more researchers and effort (and funding, and even journals), thereby “demonstrating” their importance as self-fulfilling prophecies.

Leadership plays an important role in this process, and leadership in deciding what is considered important in fundamental research is increasingly being delegated to the editors of high-impact journals. Some journals have been performing that role quite responsibly for years. I am, however, worried that we are not able to regain control as a community when other interests (e.g. commercial ones) steer research. Our reliance on, for instance, the journals of the Nature publishing group (now owned by Springer) as seemingly the highest indicators of scientific quality worries me, especially in light of the quite commercial nature of their decisions in recent years: exploiting our high regard for their brand with more and more journals, invading areas already well covered by pre-existing journals, for nobody’s benefit but their own. Our need for discrimination is being exploited commercially, and I fear it is too tempting for such companies to try to maximise profit and secure dominance by playing the non-linear dynamics game to their benefit (e.g. accepting papers in overpopulated topics because they will collect the most citations).

And we are playing their game too, both personally and institutionally. Take the generalised use of journal names and their impact factors for research evaluation, the aforementioned IFFS. In spite of the San Francisco Declaration (DORA) against such practices, many funding agencies and countries use impact factors to decide on funding, from personal salaries to support for institutions, thereby strengthening the leadership of highly selective journals. As an illustrative example, it is widely recognised that a key aspect of the scientific endeavour, reproducibility, is withering away in today’s scientific practice, given the low value ascribed to reproducing experiments in the present model.

The academic literature and metaliterature

The described scenario provides a further reason for a change in paradigm. The separation between literature (the curated body of academic papers) and metaliterature (information about them) is the first step, moving away from their entanglement in present journals. It should allow the academic community to recover leadership, in addition to providing natural open access. I support having the body of research literature in a single journal-repository, as pushed by PLoS ONE and others: the drive for an ever increasing number of journals, besides the business opportunity they may represent, has always been justified by specialised topics felt not to be quite represented by the pre-existing body of literature, on many occasions because new (trendy) topics arise. A single journal-repository with good and evolving tagging is a much better solution for evolving fields of research.

The searchable tags would be the pieces of metaliterature bound to the literature body, including the “importance” grading tags. The grade would be a searchable tag for recent-literature discrimination, not an intrinsic part of a paper’s reference. The grade could even disappear after some time, consistent with the fact that it would already have served its purpose. For older-literature discrimination, metrics could also be managed in the repository, providing within the community what is now offered by the Web of Science and by Scopus (themselves managed by the private companies Clarivate and Elsevier, on which our dependence is also growing, another component of IFFS).
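As a toy illustration of this expiry idea (under assumptions of my own: a two-year lifetime for the grade, and a bare citation count standing in for the repository-managed metrics), the discrimination signal attached to a paper would simply switch source as the paper ages:

```python
# Hypothetical sketch: the two-year grade lifetime and the raw citation count
# are illustrative assumptions, not a proposal for actual metric design.
from datetime import date, timedelta

GRADE_LIFETIME = timedelta(days=2 * 365)

def discrimination_signal(published, grade, citations, today=None):
    """Editorial grade drives discrimination while a paper is recent;
    repository-managed metrics take over once the grade has served its purpose."""
    today = today or date.today()
    if grade is not None and (today - published) <= GRADE_LIFETIME:
        return ("grade", grade)
    return ("citations", citations)
```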

Of course, there would be scope for external metaliterature to be provided by anyone, independently of the repository. Science news, commentaries, digests of papers, and pieces for the general public are typical examples. Indeed, a substantial part of what is provided in an issue of a highly cited magazine is precisely that. (One could argue that a review article is in essence metaliterature, but my iconoclasm does not reach that far yet.) Private journals based on metaliterature would provide value and have a role to play. Imagine a new Nature with just the first half of every issue. It may happen that being selected for commentary by the new Nature would carry a cachet similar to what a paper published in today’s Nature now carries. It may happen that we become affected by new syndromes analogous to IFFS - after all, having somebody else do the hard work of serious evaluation for funding or promotion will always be tempting. But it may not; it will be up to us.

Peer review

Peer review is a very important component of our present model. It is far from perfect, and ideas are floating around on possible improvements or replacements, from double-blind refereeing to fully open, and even communal, schemes. I think improving on peer review is a difficult problem, and I do not see clearly through it. In that sense I am inclined to be cautious.

But in the context of this piece, there are two clear points to make. Firstly, peer review affects both the literature and the metaliterature: on the former, it is clear that good refereeing can improve a paper, even substantially; on the latter, it is obvious that the assessment of importance is heavily influenced by refereeing. Secondly, both functions can still be performed in the context of the ideas being pushed here. Furthermore, a wide open-access literature repository probably allows more flexibility for the exploration (and evolution) of new ideas and procedures in peer review.

Curation, long-term commitment, and editorial processing

The management, curation, and accessibility of such a body of literature for the foreseeable future is a considerable task, demanding substantial resources. These represent, however, a small fraction of the accumulated subscription bill faced by academic libraries today. Library consortia should be main players in this game.

Learned societies, as representatives of academic communities, can play an important role in the editorial tasks needed for the literature (good editorial processes, including peer review, do improve papers) and for the metaliterature (indexing and grading, also including peer review), as they are now doing with their own journals. Editors assigned to papers (for referee management and editorial and formatting oversight) can be drawn from a set of active academics, as is now done in many journals, but there is no reason for the set to be small. The majority of established academics could be assigned papers in such a way that everybody gets a manageable number per year, similar to what happens with refereeing. The costs of such a model should be coverable with a relatively modest publication fee.
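A back-of-the-envelope sketch of how such a broad editor pool could be used is given below; the matching rule (overlap of expertise tags with the paper’s topic tags), the yearly cap, and the data layout are all assumptions for illustration, not a worked-out proposal:

```python
# Hypothetical sketch: assign each new paper to the least-loaded academic whose
# expertise tags overlap the paper's topic tags, capping everybody's yearly load.
def assign_editor(paper_fields, editors, max_per_year=6):
    candidates = [e for e in editors
                  if e["load"] < max_per_year and e["expertise"] & paper_fields]
    if not candidates:
        return None  # escalate, e.g. to a learned-society editorial board
    chosen = min(candidates, key=lambda e: e["load"])
    chosen["load"] += 1
    return chosen

# Example: with a pool of two suitable editors, the paper goes to the less loaded one.
editors = [{"name": "A", "expertise": {"dft"}, "load": 2},
           {"name": "B", "expertise": {"dft", "qmc"}, "load": 0}]
assert assign_editor({"dft"}, editors)["name"] == "B"
```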

Getting there

I drafted a paper fifteen years ago containing many of these ideas, the fruit of conversations with many people (mostly Jose Soler, Felix Yndurain, Luis Seijo, Zoila Barandiaran, Daniel Sanchez-Portal, Ivo Souza). One of its weak aspects was precisely the section on “Getting there”. A paradigm shift such as the one envisaged is hard to push, especially when it involves as much money as this one does.

In the last fifteen years many things have happened, however; many people have had similar thoughts (in this piece I am not claiming originality), and we are, in some sense, already slowly getting there. Some community initiatives have gone in different directions; many still think in terms of having more journals and/or highly selective ones (high quality is always a worthy cause). I hope we can all slowly converge on a better model for the academic literature, and I would urge the research community to seize the opportunity brought about by the drive for open access and Plan S.

And this brings me back to Phys. Rev. Research. I think it is a step in the right direction, and I also think the further steps proposed in that post would represent a good way of getting us there, hopefully in coordination with initiatives like PLoS ONE. We should all try to fight IFFS as much as possible in our publishing choices. It is hard for scientists starting their careers when they see that their job prospects rely on collecting the right journal names in their CVs, and it is also hard for institutions that rely on similar collections for their funding. For these reasons I do not advocate an overnight revolution, but there is scope for gradual action (not least educational) that should get us there if we point in the right direction, as the last fifteen years have demonstrated.

A final plea: Although inconsistent with my welcoming Phys. Rev. Research (consistency has never been a strength of mine), please, no more journals!
