Making and doing a new Handbook for STS

by Victoria Neumann

Source: Department of Science and Technology Studies, University of Vienna

The moment is finally here: the new edition of the Handbook of Science and Technology Studies is finished and available. To say it in Latour’s terms, the Handbook is now “ready made”, but how was it in the making? How was this important representation of our discipline co-produced by the numerous authors, the editors, the politics and business of academic publishing, and, last but not least, the material constraints of putting whole fields of study and their intersections into roughly 1200 pages? This post looks back at the sometimes messy processes in the creation of the latest edition of the Handbook and the attempts to keep the mess under control: stories from the Handbook back office. And to make it more playful (and in order to advertise), I have hidden the titles of several chapters in this post. How many can you find? (Hint: the table of contents can be viewed here.)

The story of the new edition of the Handbook began with an outreach: the call for abstracts of chapters. This bottom-up approach from the STS community provided the editors with what they described in the introduction as the seeds for the landscape that this volume was about to become. Between the abstracts and the finished version, many choices were made in shaping this landscape. What needs to be included (which chapters, sections, historical approaches and strands of research)? How much can be included (e.g., chapter lengths, images, bibliographic references)? Thinking of STS as a transdisciplinary environment of research, those choices aimed to open up the field rather than limit it. Of course, doing environmental justice to such a large field is nearly impossible while balancing the limited resources at hand – time, space in the future book, money – and the aim of making the topics comprehensible for imagined future readers. However, structural inequality still persists, in the sense that non-Euro-American authors and arguably certain less mainstream perspectives are still lacking. The editors reflect on this issue in their introduction.

My work for the Handbook began when the first full drafts of the chapters arrived and we started the initial revision process. Finding and securing peer reviewers for each chapter was a long process. Fellow scholars do reviews without being paid, often in their spare time on top of their regular work. Consequently (and very understandably), many of the people we asked politely declined. For us, this meant asking three to four times as many people as needed in order to reach our aim of around three reviewers per chapter.

After the reviews came in, it was up to the editors to rethink the documents we had received – the draft chapters and their respective reviews – in order to develop the STS Handbook. This included cropping, shortening and reformulating content in a sensible and sensitive way. This was often about politics: as a Handbook chapter also tells a story about the development of a certain branch of research, authors sometimes got defensive about their contribution, resulting in excessive self-referencing, or left out rival strands of research. Here the editors often wrote long emails or talked personally with the authors about these issues.

Retrospectively, with the review process my most important job began: the surveillance and regulation of (laboratory) practices of academic writing. Coordinating an international project with numerous actors involved is no easy task. Timelines and plans were set for all the diverse steps of a handbook, from the first draft to the peer review process to the final proofreading. However, deadlines given to contributors were often seen more as a recommendation than a fixed commitment or obligation. This PhD comic describes the situation quite well:


Academic Deadlines
Source: “Piled Higher and Deeper” by Jorge Cham

The result: deadlines passed, but the inbox remained empty. Clearly, if we wanted to stay on time, we needed to reframe our science communication. But how do you get people to do their work? The solution: a tight deadline reminder regime! Consequently, one of the most work- and time-intensive tasks for me was the sheer amount of reminders I had to write and send out via email. As in any project, there was a steep learning curve in disciplining our subjects – I mean our colleagues. In early phases (e.g., during peer review) we would only send out reminder emails after a deadline had passed; in later stages we switched to also reminding the authors ahead of a coming deadline.

While writing multiple reminders, I also had to learn how to deal with my own inhibitions. How could I be friendly and respectful, but authoritative at the same time, when requesting overdue work? Adjusting the tone was especially difficult given my position as a graduate student doing a ‘merely’ administrative job while telling (often well-known) senior scholars what to do. In most cases, it was enough to switch from “Please get back to us by [date]” to a more decisive version like: “Due to our tight schedule we cannot accommodate any further delays. We do expect to receive the chapter within the next three days. This means Sunday, [date] at the very latest. Thank you for your understanding and collaboration”. More frustrating than writing endless emails into a seemingly non-responding void were the few authors who never responded to me but corresponded exclusively with the editors, which often caused further delays. Yet, in general, the spamming technique plus the increasingly authoritative language worked surprisingly well. Foucault would have been proud of the resulting self-discipline.

However, even this regime did not always prevent bottlenecks. When the first final versions of the chapters were handed in for typesetting, it was an excellent point to research disasters from an STS perspective, as the majority of chapters came in at once, leaving us very little time to go through them before an important deadline with MIT Press. In order not to miss this (already extended) deadline, we and some friendly helpers spent around two weeks going through all chapters, correcting mistakes, standardizing the bibliography entries, and bringing back ordering systems that obviously had been dismissed as oppressive by some authors (such as the alphabetical order of the reference list). At this point the Handbook began to age – and so did we, some new grey hairs included – and the socio-material constitution of later life of the chapters began to look like an actual book (in pdf form).

A few months later, we got the chapters back for the final proofreading. Once more, a number of helpers assisted us in going through all chapters again. At this point it became clear that gender and (in)equity in the scientific workforce also impact us as a field, since we noticed that the majority of our volunteering helpers were female. They contributed to the last re-configurations and the finishing touch of the Handbook, and, during our main reading session – including a dinner thankfully paid for by the lead editor – to discussions on the co-production of knowledge and food.

Looking back at the scientometrics of all the intellectual and practical contributions to the STS Handbook: we had more than 121 authors, around the same number of reviewers, circa 30 people involved in the editing and proofreading processes, and over 6,000 emails sent and received. And this does not even include the many other actors contributing to what the Handbook is now, e.g. typesetters, citation software, and managing personnel at the 4S or MIT Press. Thank you all for your time and work, and for sharing all the moments of joy, despair, frustration, thoughtfulness, and creative engagement. It was a wonderful and valuable experience.

In the end, the handbook was co-produced by a whole community, full of formal and informal work, and every interaction in between. It is not only a representation, but also a materialization of this community and the process of the Handbook’s creation showed its messiness, structures, hierarchies, and politics. Now it is out there, so please do what STS does best: Discuss it! De-construct it! Re-construct it! Teach with it! Criticize it! Use it as a pillow while studying!


After all, I claim: One does not need a laboratory to raise a discipline, one just needs to produce a Handbook.

Victoria Neumann is currently finishing the master’s programme ‘Science-Society-Technology’ at the University of Vienna. Apart from working for the Handbook, she is interested in biomedicine, time, and critical data studies.

More work is required to make academic “timescapes” worth inhabiting and to open up space for creative work

by Ulrike Felt

Clock by Hefin Owen. This work is licensed under a CC BY-SA 2.0 license.

Is the problem of contemporary academia really about acceleration – the continuous need to squeeze ever more elements into a finite amount of time? And, if so, would the solution be simply to slow down, as many contemporary writers suggest? Acceleration and, more generally, a “culture of speed” have become defining characteristics of contemporary societies and modern life, a trend echoed by their recent prominence in academic debates. In particular, young scholars report a feeling of growing pressure, coupled with a worrisome degree of alienation, when facing the discrepancy between how they imagined science to be and how science expects them to perform in order to succeed.

One can certainly find specific elements in the contemporary academic research system to support the conclusion that speed is a major problem. Nonetheless, I would argue that focusing too much on acceleration might cause us to overlook a more complex phenomenon at work. Indeed, the feeling of acceleration might actually be understood as the outcome of a gradual process of reconfiguring the temporal infrastructures of academic work and life on multiple levels. A good example of avoiding such a normative dual view of fast or slow is Filip Vostal’s Accelerating Academia. So, if acceleration is not the core problem, then deceleration is certainly not the solution. From where, then, does this strong feeling of time pressure and haste in contemporary academia emerge?

How time gets made in academia

In moving away from conceptualising time as a straightforward physical entity, we must shift our attention to the places and moments where time gets made. What is needed is a careful investigation of the key sites in academia that create binding temporal requirements and regulations, thus imposing specific rhythms that standardise as well as homogenise academic time. In short, we have to study what Rinderspacher calls “time generators.” More concretely, this means looking into the academic system’s multiple recent reforms – in funding structures, assessment exercises, accountability procedures, curricula or career paths – as all of these do important temporal reordering work. Indeed, I might suggest, as I have done in a recent book chapter, that for any problem academia encounters, the appropriate response seems to be the establishment of a new time generator.

The challenge to boost quality in research led to a competitive distribution of funding via the project, subsequently putting time limits on what we can think and research, creating a new iron cage of project bureaucracy. This projectification of academic work has also generated a whole new category of researchers who, as Oili-Helena Ylijoki points out, temporarily join academic institutions as project collaborators and “sell their labour” through the commodity of “project time”.

Concern over quality at individual, collective and institutional levels brought a flurry of assessment exercises. Academic education has become increasingly structured through stressing what should be taught per time unit, and careers have become temporalised according to the paradigm of excellence and selectivity. The counting of publications per time unit, along with the valuing of journals through the tallying of average numbers of citations per time unit (the impact factor), are yet further examples of how time gets interwoven into academic valuing and living practices. We thus encounter a bewildering multiplicity of ever-new time structures permeating academic lives.
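To see just how literally time enters this particular metric, the standard two-year journal impact factor for a year $y$ can be written out (a textbook definition, not specific to this essay):

```latex
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
```

where $C_{y}(y-k)$ is the number of citations received in year $y$ by items the journal published in year $y-k$, and $N_{y-k}$ is the number of citable items published in that year. The denominator and numerator are both cut to a fixed two-year window: the metric is, quite explicitly, citations per publication per unit of time.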

Unintended consequences and temporal inconsistencies

These new temporalities do not leave core academic work untouched. We can see shifts in how we attribute value to both the manner in which we work and the outcomes we produce. We observe changes in academic lifestyles – affecting who remains in academia and who leaves. Furthermore, we can trace impacts on researchers’ ability or willingness to take the time to engage beyond their field, to do support work or to collaborate beyond the pragmatic and formal level. Or we might speculate that an unintended consequence of these temporal reorderings is the so-called reproducibility crisis, the inability of researchers to reproduce others’ and – even more troubling – their own experimental data.

This bewildering variety of temporalities tacitly governing academia pushes and pulls researchers in many different directions at once. In this context, the key question is how academics manage to create coherence between these different, often competing temporal structures, with their respective values and attendant demands. Frequently they cannot, and this leads to a deep feeling of asynchronicity: the rhythms of reporting and assessing, of lives and careers in research, and of projects and publications no longer seem to fit together. This creates ruptures and tensions, from which arises the constant demand on academics to repair inconsistencies. The feeling of acceleration can then be understood as a failure to synchronise adequately and as the lasting feeling of “not being in/on time”. On the one hand, the different temporal rhythms and their non-alignment create the feeling of constant demands to meet all kinds of deadlines. On the other hand, the feeling of acceleration expresses a deep struggle to embrace these new temporal imaginations, performances and demands. The latter becomes palpable in the oscillation between academic nostalgia, expressed in partly nostalgic recollections of a better, “slower” past, and quite radical rejections of the past as inadequate and inefficient.

Time and power

These observations draw our attention to the deep entwinement of the control of time and the exercise of power. Controlling researchers’ temporal resources and being able to regulate their rhythms of work, defining the duration of research activities as well as the length of a researcher’s stay at an institution, and prescribing the speed of production as well as the rhythm of evaluations are all expressions of power. Therefore, questions of inclusion in and exclusion from the academic system – a factor often underestimated in debates on gender and academia – must be seen through the lens of time and the introduction of ever-new time generators. Being able both to coordinate one’s time within institutional/departmental time structures and to synchronise with other actors vital to one’s work becomes fundamental for access to opportunities and recognition. This ability allows one to make decisions at appropriate moments and thus, in the end, to successfully survive in academia.

What to conclude?

Contemporary researchers are confronted with many different temporal structures and must develop the capacity to fold them in ways that appear to fit their expectations of a good academic life. However, this demands substantive work, and it is highly questionable whether the growing temporalisation of academia will actually produce the desired effects. More attention thus needs to be devoted to the ways different academic times come together to form a “timescape”, a term coined by Barbara Adam. Consider the analogy to landscapes: we appreciate the attention devoted to the spatial arrangement of their different elements in ways found sustainable and attractive for their inhabitants, cherish the know-how of landscape architects, and acknowledge the work involved. Analogously, more care should be paid to how different times come together to form academic timescapes: how they form a scape worth inhabiting that allows creative work to unfold. This also means engaging in a deeper reflection on the necessity of ever-new time generators – ultimately they may create as many problems as they promise to solve. In short, we face a need to “retime research and higher education”, as I have recently argued. However, there is also a need to acknowledge the work that must be done to make a timescape worth inhabiting and to open up space for creative work. Finally, as is done for landscapes, academic institutions would need to take the time to reflect on and thoroughly care for the academic timescapes they create – perhaps a new task for academic leadership.

Note: This piece originally appeared on the LSE Societal Impact blog. The original post can be viewed here.

Ulrike Felt is Professor of Science and Technology Studies and currently Dean of the Faculty of Social Sciences at the University of Vienna. Her research interests span a number of themes, including issues of science and democracy, questions of responsible research and innovation, and the analysis of changing cultures of academic knowledge production. Understanding temporal structures in science and society, as well as the importance of future-making practices, is her keen interest across the above-mentioned issues.

What is academics’ responsibility in dealing with indicators?

by Maximilian Fochler and Sarah De Rijcke

Source: “Piled Higher and Deeper” by Jorge Cham

The metric tide is in. The use of quantitative indicators for the evaluation of the productivity and quality of the work of individual researchers, groups and institutions has become ubiquitous in contemporary research. Knowing which indicators are used and how they represent one’s work has become key for junior and senior academics alike.

Proponents of the use of metrics in research evaluation hope that it will provide a more objective and comparable way of assessing quality, one that is less vulnerable to the biases and problems often ascribed to qualitative peer-review based approaches.

However, critical research in science and technology studies and elsewhere increasingly points to considerable negative effects of the impending dominance of quantitative assessment. In fact, indicator systems might serve as an infrastructure fueling hyper-competition, with all its problematic social and epistemic effects. They create incentives for researchers to orient their work to where high metric impact can be expected, thus potentially fostering mainstreaming at the expense of epistemic diversity, and prioritizing delivery over discovery.

Over recent years, many important initiatives have pushed for a more responsible use of metrics in research evaluation. The DORA declaration, the Leiden Manifesto and the Metric Tide report are just the most prominent examples of discussions in academia and in the institutions that govern it. The recommendations of these initiatives have mostly focused on those actors that seem to have the most bearing on the processes of concern: academic institutions, the professional communities providing the methods and data that metrics build on, and evaluators.

But what about individual researchers? What is their responsibility in dealing with indicators in their everyday practices in research? Twenty years ago, when the metric tide was still but a trickle, the eminent anthropologist Marilyn Strathern (1997) wrote: “Auditors are not aliens: they are a version of ourselves” (p. 319). Still today, it would be simplistic and wrong to assume that researchers are merely victims of bureaucratic auditors imposing indicators on them.

Don’t we all strategically use those metric representations of our work that we see as advantageous for whichever goals we are currently pursuing? Do metric logics structure the way we present ourselves in our profiles on academic social networks, and how we look at others’ portfolios? Isn’t there a secret joy in watching one’s citation scores and performance metrics grow? To what extent do we individually play along with logics that, in our more reflexive moments, we might criticize as a collective phenomenon? Is this a problem? If so, should finding ethical ways of dealing with indicators not be part and parcel of being a responsible researcher today?

These are the core questions of a recent debate, “Implicated in the Indicator Game?”, which we edited for the journal Engaging Science, Technology, and Society. The debate gathers essays by a cast of junior and senior scholars in science and technology studies (STS). STS is an interesting context for discussing these wider questions, because scholars in this field have contributed particularly strongly to the critical discourse on indicators. Still, in their own careers and institutional practices, they often have to decide how to play the indicator game – for not playing it seldom seems a viable option.

In one essay in this collection, Ruth Müller asks, quoting an informant from her own fieldwork: “Do you think that the structure of a scientific career is such that it tends to make you forget why you’re doing the science?”. Diagnosing a loss of meaning in running to fulfill quantitative indicators, she points to aspects of work in science and technology studies that are indispensable for quality but can hardly be expressed in indicators – interdisciplinary engagement with the sciences and engineering being the most important example for STS.

So, what can individual researchers and institutions do? Our collection contains many different answers to this question. All agree, however, that ignoring or boycotting indicators cannot be the solution. As Alan Irwin reminds us, the questions of accountability that indicators are supposed to answer will not go away. They need to be answered in different terms, by offering and celebrating new, non-reductive concepts of the quality of research in different fields. For individual researchers, this calls for the confidence to stand up for the quality of those aspects of their work that cannot be well expressed in metrics, and also to recognize these qualities in others’ work.

As an outcome of our debate, we offer the concept of evaluative inquiry as a starting point for dealing with indicators more responsibly. In a nutshell, evaluative inquiries may present research work numerically, verbally, and/or visually – but they aim to do so in ways that do justice to the complexity of actual practice and its engagements, rather than reducing it for the sake of standardization. They do not jump to a reductive understanding of what counts in an assessment (such as publications), but aim to produce and represent the multiple meanings and purposes of researchers’ work. And they are processual in the sense that the choice of criteria, and of whether or not certain indicators make sense, cannot be fully specified in advance but needs to be negotiated in the process of evaluating.

Of course, this all sounds nice in theory. But it will require researchers to engage in these practices rather than in hunting metric satisfaction. And it will require institutional actors to engage in more substantive discourses about the quality of research.


Strathern, M. (1997). ‘Improving ratings’: Audit in the British university system. European Review, 5(3), 305–321.

Maximilian Fochler is assistant professor and head of the Department of Science and Technology Studies at the University of Vienna, Austria. His main current research interests are forms of knowledge production at the interface of science and other societal domains (such as the economy), as well as the impact of new forms of governing science on academic knowledge production. He has also published on the relations between technosciences and their publics, as well as on publics’ engagement with science.

Sarah de Rijcke is associate professor and deputy director at the Centre for Science and Technology Studies (CWTS) of Leiden University, the Netherlands. She leads a research group that focuses on a) developing a theoretical framework on the politics of contemporary research governance; b) gaining a deep empirical understanding of how formal and informal evaluation practices are re-shaping academic knowledge production; c) contributing to shaping contemporary debates on responsible research evaluation and metrics uses (including policy implications).

Too bad to fail?

by Dorothea Born

Source: “Piled Higher and Deeper” by Jorge Cham

The other day I discussed the difficulties of living and working in academia with a very successful former professor of mine. When it came to his own career, he made an interesting confession: “It was pure chance that I ended up doing what I am doing now,” he said. “After I graduated from high school, I tried out several jobs and studies until I found my place. These early years – I always leave them out of my CV.” This made me wonder about the role of CVs in academic practice and careers.

Read More

Performing Moral Stories through Bodyweight

by Michael Penkler

In June, the United States Food and Drug Administration approved a new weight-loss device: AspireAssist. The device is surgically inserted into the abdomen and allows patients who have failed to lose weight by other means to drain ingested food from the stomach. After eating, users go to the toilet, plug a tubing set into a tube that leads into the stomach, and “aspirate” (or, less prosaically, pump) up to 30% of their meal into the toilet.

Source: FDA

Read More

A reflection on engaging with public engagement with science

by Robin Rae

“And what can I do here?” people ask me curiously, one after another, eyeballing a mountain bike standing upright in front of a computer screen. I am in the lecture room of the Department of Science and Technology Studies (STS), University of Vienna, which is filled with people, technological objects, and further installations about RFID chips, artificial intelligence, and visions of reproductive medicine and self-driving cars. It is a Friday night in April 2016, the so-called “Lange Nacht der Forschung” (Long Night of Research). This nationwide biennial science communication event invites diverse publics to interactively explore current research at more than 250 institutions. With its interactive installations, the STS department aimed to spark discussions about how technologies affect and shape society, bodies, everyday lives, and futures. While that only partially explains the bike standing in the room, read on to learn how challenges in planning my installation contributed to its realization.

Read More

When mass communication turns into mass surveillance

by Pouya Sepehr, Maresa Barbara Wolkenstein, Helene Sorgner and Marilen Hennebach

New technologies have given governments unprecedented means to access personal information. In order to ensure that all people can seek information and express themselves freely, there must be reasonable checks and balances on governments’ ability to access, collect, and store individuals’ data. Both security and freedom can be protected, but only through balanced laws and policies that uphold human rights. Surveillance happens at many levels: it can be the eavesdropping programmes of foreign and local governments, it can be commercial corporations operating on a global scale, it can be more or less institutionalised, and it has many different aspects, reaching from self-censorship to pleasure, from activism to fatalism. The question, though, is not so much whether we mind but rather how and when we mind.

The revelation of NSA documents by Edward Snowden in 2013 brought otherwise secret intelligence activities into the light of global attention. It was shocking for many to realise that mass surveillance technologies target civilian communication, including social media platforms. In fact, the era of mass communication has become the era of mass surveillance, and hence the question of personal freedom of expression has gained a technological dimension. The revelations have also shown that national security agencies have strong ties with giant tech companies willing to cooperate in giving access to information, demonstrating that even civilians have “nowhere to hide” anymore.

Read More

Society in the making: quantification and accountability

by Andreas Schadauer

© Schadauer 2016

“The top 10% of Austrian households own 61% of all real estate assets.” For a certain time, this statistical argument could be read in several newspapers; it was taken for granted by some journalists and commentators and used as a strong argument for inheritance and wealth taxes. But how did this statistical argument become accepted, persistent and influential? Who or what was able, and enabled, to produce it? And who or what is accountable for this statistical argument?

For the last question, the answers provided by the textbooks of empirical research I read as a student of sociology at the University of Vienna are quite clear-cut: if produced in a methodologically correct way, numbers and statistics represent reality objectively (e.g. Diekmann, 2007: 23f) and therefore have authority and superiority and are politically neutral (Kreutz, 2009: 3). This notion stands in stark contrast to approaches in STS that point out the social, political and institutional quality of scientific methods (e.g. Desrosières, 2002; Kenney, 2015; Law, 2010).

Read More

“Frugal Innovation” – an inquiry into a blind spot in STS

by Lisa Sigl

Frugal Innovation

Concepts and notions of innovation are societal and political battlegrounds. How strongly they carry imaginations about the responsibilities between science and society becomes apparent when comparing notions of innovation across socio-political contexts. In European policy contexts, innovation is mostly defined as technological innovation for the market, insinuating that its primary responsibility is to secure competitiveness, economic growth and jobs. With this market orientation in mind, the vehemence with which the Indian government promotes “frugal innovation” as “sharply” contrasting with this “conventional approach” in its current Five Year Plan is striking. Equally striking is the absence of critical reflection on this remarkable innovation concept in STS, which is why I want to open a discussion here.

Read More

Exploring new questions of “Gender, justice and the political economy of the cross-border fertility industry”. A wrap-up and outlook of an international workshop.

by Daniela Schuh

Gender, justice and the political economy of the cross-border fertility industry
The workshop was organized by Kathrin Braun (Univ. of Vienna), Gesine Fuchs (Hochschule Luzern) and Daniela Schuh (Univ. of Vienna).

While cross-border fertility travel has become an expanding industry, knowledge about its actual scope, structure, regulation and practices is still sparse. A workshop in April, organized by members of the University of Vienna and the Hochschule Luzern, met this situation head-on with a diverse program. Scholars from across Europe, with diverse scientific and institutional backgrounds, came together to collectively explore vital questions about the cross-border fertility industry:

How is this industry stratified in terms of gender, ethnicity, race, class, able-bodiedness, and further axes of inequality? How does the rise of the cross-border fertility industry and/or corresponding state policies affect gender relations? How can we assess these policies and developments from a gender and social justice perspective? And how should we understand and engage with this industry in the first place?

Read More