
Research antiefficiency principle

It is an intriguing thought (and admittedly a provocative way to put it). University funding in the USA, the UK and other research-intensive countries is partly based on grant overheads, that is, funds paid by research agencies to the research institutions, beyond the direct costs of the research itself, to cover a proportional part of the costs of running the institutions themselves. Sensible.

Overheads do represent a significant part of institutional income. It makes sense in many ways, but, in essence, it leads to the antiefficiency principle: Since overheads scale with the direct research costs, universities and research institutions, more or less directly, tell their research staff: "do your best, for as much money as possible".

The title of this post is provocative because the quoted statement above is not as antiefficient as it sounds. "Do something for as much money as possible" would be antiefficient, but the actual statement implies two maximisations, "your best" and "as much money as possible", that is, maximise both input and output.

Here is the tricky thing: the efficiency of the system depends on how much weight we put on either maximisation, which in turn depends on various subtle mechanisms deeply rooted in the research culture of each country. A balanced system can be well tuned, in the sense that output quality determines the likelihood of new input (with provision for starting researchers to enter the wheel). In the UK the balance is (at least partly) kept by the fact that, complementing the overheads, universities are also funded through direct evaluation of their output in a national research assessment exercise (now the REF). The system/culture also changes with time: in a particular country it may have worked well at some point, only to become more antiefficient later.

The fact is that many institutions evaluate researchers by input, using grant income as the key component of their evaluation. That is certainly an aberration. The thinking is: if the researcher has secured the grant, somebody will have evaluated the researcher, and somebody will evaluate the output of the project. It implies that the institution delegates away the assessment of the quality of the output. It has even become common for the input to be perceived as the goal in many processes and evaluations. The capacity for raising funds is the desired trait; whatever is done with the direct funds (once the overheads are secured) becomes a secondary consideration.

Some research institutions in Europe have become specialised in attracting European grants (a sport demanding very specialised skills) while not being so skilled at executing the corresponding projects. I have witnessed situations in which such a research institution gets the grant and, after securing the overheads, subcontracts the actual research work to a private company. Probably an extreme and quite rare situation, but illustrative nevertheless.

It is the downside of an otherwise sensible model that has worked well in some places for decades. We researchers just need to be aware of it and help keep a healthy balance through our own contributions to evaluations, while reminding institutions of the antiefficiency principle.

I have a naughty proposal in this context. To the many bibliometric indices currently used to evaluate research, we could add an index of citations per dollar. It would be technically hard to measure (though not so hard for large averages, such as per country). Like any other index, it would need to be used wisely, in conjunction with other metrics, and always allowing for differences among fields of research, etc. It is nevertheless intriguing what the landscape would look like when using it. After all, we owe it to the taxpayer.
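As a rough illustration of what such an index could compute, here is a minimal sketch in Python at the per-country level. The country grouping, the funding figures and the citation counts are entirely made-up assumptions for illustration; they do not come from any real dataset.

```python
# Minimal sketch of a "citations per dollar" index at the country level.
# All figures below are invented for illustration only.
records = [
    {"country": "A", "citations": 1_200_000, "funding_usd": 3.0e9},
    {"country": "B", "citations": 800_000, "funding_usd": 1.5e9},
    {"country": "C", "citations": 300_000, "funding_usd": 0.4e9},
]

for r in records:
    # Report citations per million dollars to keep the numbers readable.
    index = r["citations"] / (r["funding_usd"] / 1e6)
    print(f'{r["country"]}: {index:.1f} citations per million dollars')
```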
