
Research antiefficiency principle

It is an intriguing thought (and admittedly a provocative way to put it). University funding in the USA, UK and other research-powerful countries is partly based on grant overheads: funds paid by research agencies to research institutions, beyond the direct costs of the research itself, to cover a proportional part of the cost of running the institutions themselves. Sensible.

Overheads do represent a significant part of institutional income. The arrangement makes sense in many ways, but, in essence, it leads to the antiefficiency principle: since overheads scale with direct research costs, universities and research institutions, more or less explicitly, tell their research staff: "do your best, for as much money as possible".

The title of this post is provocative because the quoted statement above is not as antiefficient as it sounds. "Do something for as much money as possible" would be antiefficient, but the actual statement implies two maximisations, "your best" and "as much money as possible"; that is, it asks to maximise both output and input.

Here is the tricky thing: the efficiency of the system depends on how much weight we put on either maximisation, which in turn depends on various subtle mechanisms, deeply embedded in the research culture of each country. A balanced system can be well tuned, in the sense that output quality determines the likelihood of new input (with provision for starting researchers to enter the wheel). In the UK the balance is (at least partly) kept by the fact that, complementing the overheads, universities are also funded through direct evaluation of their output in a national research assessment exercise (now the REF). The system and culture also evolve over time: in a given country the model may have worked well at some point, and then become more antiefficient with time.

The fact is that many institutions evaluate researchers by input, using grant income as the key component of their evaluation. That is certainly an aberration. The thinking goes: if the researcher has secured the grant, somebody will have evaluated the researcher, and somebody will evaluate the output of the project. It implies that the institution delegates away the assessment of the quality of the output. It has even become common for the input to be perceived as the goal itself in many processes and evaluations. The capacity for raising funds is the desired trait; whatever is done with the direct funds (once the overheads are secured) becomes a secondary consideration.

Some research institutions in Europe have become specialised in attracting European grants (a sport demanding very specialised skills) while being much less skilled at executing the corresponding projects. I have witnessed situations in which such an institution gets the grant and, after securing the overheads, subcontracts the actual research work to a private company. Probably an extreme and rather marginal situation, but illustrative nevertheless.

It is the downside of an otherwise sensible model that has worked well in some places for decades. We researchers just need to be aware of it and help keep a healthy balance through our own contributions to evaluations, while reminding institutions of the antiefficiency principle.

I have a naughty proposal in this context. To the many bibliometric indices currently used to evaluate research, we could add an index of citations per dollar. It would be technically hard to measure for individuals (less so for large averages, such as per country). Like any other index, it would need to be used wisely, in conjunction with other metrics and always allowing for differences among fields of research. It is nevertheless intriguing to imagine what the landscape would look like with such an index; a toy sketch follows below. After all, we owe it to the taxpayer.
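For concreteness, here is a minimal sketch of how the index could be computed over large aggregates, say per country. All country names and figures below are invented purely for illustration; only the arithmetic (total citations divided by total funding) reflects the proposal.

```python
# Toy sketch of the proposed citations-per-dollar index.
# The aggregates below are hypothetical, not real statistics.

def citations_per_dollar(citations: int, funding_usd: float) -> float:
    """Citations obtained per dollar of research funding."""
    if funding_usd <= 0:
        raise ValueError("funding must be positive")
    return citations / funding_usd

# Hypothetical per-country totals over the same period:
# (total citations, total research spending in USD).
aggregates = {
    "Country A": (1_200_000, 2.0e9),
    "Country B": (900_000, 1.0e9),
    "Country C": (300_000, 0.5e9),
}

# Rank countries by the index, highest first.
for name, (cites, funds) in sorted(
    aggregates.items(),
    key=lambda item: citations_per_dollar(*item[1]),
    reverse=True,
):
    index = citations_per_dollar(cites, funds)
    print(f"{name}: {index * 1e6:.0f} citations per million dollars")
```

Even this toy ranking shows the point of the index: Country B, with less total funding than Country A, comes out ahead because it extracts more citations from each dollar.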
