What is academics’ responsibility in dealing with indicators?

by Maximilian Fochler and Sarah de Rijcke

Source: “Piled Higher and Deeper” by Jorge Cham, www.phdcomics.com

The metric tide is in. The use of quantitative indicators for the evaluation of the productivity and quality of the work of individual researchers, groups and institutions has become ubiquitous in contemporary research. Knowing which indicators are used and how they represent one’s work has become key for junior and senior academics alike.

Proponents of the use of metrics in research evaluation hope that it will provide a more objective and comparable way of assessing quality, one that is less vulnerable to the biases and problems often ascribed to qualitative peer-review based approaches.

However, critical research in science and technology studies and elsewhere increasingly points to considerable negative effects of the impending dominance of quantitative assessment. In fact, indicator systems might serve as an infrastructure fueling hyper-competition, with all its problematic social and epistemic effects. They create incentives for researchers to orient their work to where high metric impact might be expected, thus potentially fostering mainstreaming at the expense of epistemic diversity, and prioritizing delivery over discovery.

Over recent years, many important initiatives have pushed for a more responsible use of metrics in research evaluation. The DORA declaration, the Leiden Manifesto and the Metric Tide report are just the most prominent examples of discussions in academia and in the institutions that govern it. The recommendations of these initiatives have mostly focused on the actors that seem to have the most bearing on the processes of concern: academic institutions, the professional communities providing the methods and data that metrics build on, and evaluators.

But what about individual researchers? What is their responsibility in dealing with indicators in their everyday research practices? Twenty years ago, when the metric tide was still but a trickle, the eminent anthropologist Marilyn Strathern (1997) wrote: “Auditors are not aliens: they are a version of ourselves” (p. 319). Still today, it would be simplistic and wrong to assume that researchers are merely victims of bureaucratic auditors imposing indicators on them.

Don’t we all strategically use those metric representations of our work that we see as advantageous for whichever goals we are currently pursuing? Do metric logics structure the way we present ourselves in our profiles on academic social networks, and how we look at others’ portfolios? Isn’t there a secret joy in watching one’s citation scores and performance metrics grow? To what extent do we individually play along with logics that, in our more reflexive moments, we criticize as a collective phenomenon? Is this a problem? If so, should finding ethical ways of dealing with indicators not be part and parcel of being a responsible researcher today?

These are the core questions of a recent debate, “Implicated in the Indicator Game?”, which we edited for the journal Engaging Science, Technology, and Society. The debate gathers essays by a cast of junior and senior scholars in science and technology studies (STS). STS is an interesting context for discussing these wider questions, because scholars in this field have contributed particularly strongly to the critical discourse on indicators. Still, in their own careers and institutional practices, they often have to decide how to play the indicator game – for not playing it seldom seems a viable option.

In one essay in this collection, Ruth Müller asks, quoting an informant from her own fieldwork: “Do you think that the structure of a scientific career is such that it tends to make you forget why you’re doing the science?” Diagnosing a loss of meaning in the race to fulfill quantitative indicators, she points to aspects of work in science and technology studies that are indispensable for quality but can hardly be expressed in indicators – interdisciplinary engagement with the sciences and engineering being the most important example for STS.

So, what can individual researchers and institutions do? Our collection contains many different answers to this question. All agree, however, that ignoring or boycotting indicators cannot be the solution. As Alan Irwin reminds us, the questions of accountability that indicators are supposed to answer will not go away. They need to be answered in different terms, by offering and celebrating new, non-reductive concepts of research quality in different fields. For individual researchers, this calls for the confidence to stand up for the quality of those aspects of their work that cannot be well expressed in metrics, and to recognize these qualities in others’ work.

As an outcome of our debate, we offer the concept of evaluative inquiry as a starting point for a more responsible way of dealing with indicators. In a nutshell, evaluative inquiries may present research work numerically, verbally, and/or visually – but they aim to do so in ways that do justice to the complexity of actual practice and its engagements, rather than reducing it for the sake of standardization. They do not jump to a reductive understanding of what counts in an assessment (such as publications alone), but aim to capture and represent the multiple meanings and purposes of researchers’ work. They are processual in the sense that the choice of criteria, and of whether or not certain indicators make sense, cannot be fully specified in advance but needs to be negotiated in the process of evaluating.

Of course, this all sounds nice in theory. But it will require researchers to engage in these practices rather than in the hunt for metric satisfaction. And it will require institutional actors to engage in more substantive discourses about the quality of research.

References

Strathern, M. (1997). ‘Improving ratings’: Audit in the British university system. European Review, 5(3), 305–321.


Maximilian Fochler is assistant professor and head of the Department of Science and Technology Studies at the University of Vienna, Austria. His main current research interests are forms of knowledge production at the interface of science and other societal domains (such as the economy), as well as the impact of new forms of governing science on academic knowledge production. He has also published on the relations between technosciences and their publics, as well as on publics’ engagement with science.

Sarah de Rijcke is associate professor and deputy director at the Centre for Science and Technology Studies (CWTS) of Leiden University, the Netherlands. She leads a research group that focuses on a) developing a theoretical framework on the politics of contemporary research governance; b) gaining a deep empirical understanding of how formal and informal evaluation practices are re-shaping academic knowledge production; c) contributing to shaping contemporary debates on responsible research evaluation and metrics uses (including policy implications).