A Conversation on AI Policy, Governance, and the Making of Academic Work

A conversation between Bao-Chau (Bao-Chi) Pham and Katja Mayer

We’re excited to share an interview with our colleague Bao-Chi, whose recent publications on artificial intelligence policy and governance offer fresh and critical insights into the field. Bao-Chi’s work highlights the socio-political imaginaries of AI, focusing on concepts such as risk, trust, and how global contexts shape perceptions of AI’s challenges and opportunities. These papers not only enrich scholarly debates but also provide a window into the process of academic publishing itself – a process of making knowledge that often remains invisible.

Katja: In this conversation, we’re taking a dual approach. First, we explore the content of Bao-Chi’s papers—their core arguments and contributions to the field of AI policy and governance. Then, we shift focus to the process—the often overlooked aspects of research, writing, collaboration, and peer review that shaped the final publications.

We believe that reflecting on how scholarly work is produced is as important as discussing what it argues. By uncovering these processes, we aim to demystify academic publishing and inspire reflection on our collective research practices.

Bao-Chi, could you start by introducing yourself and telling us about your PhD journey? What led you to focus on AI governance, and how did these two papers emerge from your research?

Bao-Chi: Thanks a lot for the introduction, Katja. Since joining the Vienna STS department in September 2020, I’ve been working on my PhD project “Imagining and Governing Artificial Intelligence in Europe”. My research explores how AI and Europe are co-produced in political and policy discussions. In other words, I study imaginaries that shape these conversations and how particular visions of AI and “Europeanness” are enacted, circulated, and stabilized.

The two papers we’re discussing today are part of my cumulative dissertation. The first, co-authored with my supervisor Sarah Davies, was published in Critical Policy Studies under the title: What problems is the AI Act solving? Technological solutionism, fundamental rights, and trustworthiness in European AI policy.

In this paper, we examine the European Union’s AI Act, a regulatory framework initiated by the European Commission, which came into force on 1 August 2024. Among other concrete measures, the AI Act introduces a risk-based tier system that stipulates what kinds of oversight measures AI systems deployed and implemented in Europe are subject to.

“…the AI Act enacts a particular vision of Europe – one that positions the EU as an exceptional regulatory leader and reinforces the idea of the EU as a coherent political community. This, in turn, forecloses other possible ways of characterizing and addressing AI as a policy issue.”

We analyse the AI Act using Carol Bacchi’s What’s the Problem Represented to Be? (WPR) approach, which highlights how policies actively construct the very problems they claim to address. By focusing on the AI Act’s risk-based classification system, we unpack how AI is problematized within EU policy-making. Beyond that, we consider the effects of these problem representations – not just in terms of measurable policy outcomes but also in what John Law refers to as collateral realities. Our key argument is that the AI Act enacts a particular vision of Europe – one that positions the EU as an exceptional regulatory leader and reinforces the idea of the EU as a coherent political community. This, in turn, forecloses other possible ways of characterizing and addressing AI as a policy issue.

The second paper, Trust in AI: Producing Ontological Security through Governmental Visions published in Cooperation & Conflict, is co-authored with Stefka Schmid (TU Darmstadt) and Anna-Katharina Ferl (Stanford University) and emerged from our interdisciplinary discussions on AI governance and security. We take a comparative approach, analysing EU, US, and Chinese AI policy documents to explore how AI is framed as a security concern, not just in military but also in civilian contexts.

Our key argument is that AI policies shape future visions by fostering ontological security – a sense of stability and continuity in a state’s identity which is reaffirmed, for example, through the performance of familiar routines and narratives, and the maintenance of relationships. While AI is often framed as a national security threat, we find that policies also draw on Human-Computer Interaction (HCI) concepts, such as trust, to position AI as a manageable and governable object. By introducing ontological security into AI governance debates, our paper highlights how policies don’t just regulate AI as a technology. They also help governments and institutions maintain a stable self-image by positioning AI as something controllable, thereby reinforcing trust in governments.

The Writing Process: From Ideas to Published Work

Bao-Chi: Looking back, both papers were shaped by informal exchanges and unexpected opportunities, and both stem from conference experiences.

The first paper emerged from a workshop in Graz in September 2021, organized by STS Austria. I wasn’t presenting my own research but a collaborative autoethnography project with our colleagues Fredy Mora Gámez, Andrea Schikowitz, Sarah Davies, and Esther Dessewffy. One evening, Nina Klimburg-Witjes mentioned that she and Paul Trauttmansdorff were editing a volume called Technopolitics and the Making of Europe: Infrastructures of Security, bringing together debates from STS and Critical Security Studies. She asked whether Sarah and I would be interested in contributing a chapter on AI. At that point, I had only just begun my empirical work on European AI policy, but we agreed it was a great opportunity. Writing that chapter, in which we conceptualized AI policy as infrastructure, was the springboard for our paper.

When Sarah and I began working on the paper, I came across the WPR approach. Bacchi and Goodwin’s Poststructural Policy Analysis: A Guide to Practice (2016) was particularly helpful in my own grappling with a core STS principle: that things could be otherwise. The idea that policies don’t just respond to problems but actively shape what is seen as a problem aligned closely with my interest in the co-production of AI and Europe. The WPR approach also provided a practical way to navigate the AI Act, a dense and technical legal text. The framework’s seven guiding questions structured our analysis and allowed us to address audiences beyond STS, particularly policy-makers and practitioners. In many ways, applying WPR to the AI Act was a way to translate STS sensibilities to a broader audience.

Similarly, the second paper emerged from a conference experience. In 2021, I submitted an abstract to the Science, Peace, and Security conference. A few weeks later, the organizers emailed me, copying in another participant whose abstract was very similar. They suggested we either collaborate or decide who would present, while the other could produce a poster. I remember feeling apprehensive: was this my first encounter with the infamous competitiveness of academia? I was very happy to collaborate, but I also wondered: was that the “strategic choice” an early-career researcher should make? Anna and I reached out to each other, got along brilliantly, and decided to work together. Stefka was in the online audience during our presentation and later reached out because she was intrigued by our use of sociotechnical imaginaries. She was working on a comparative project on AI governance and suggested we collaborate.

As we developed the paper, we turned to the concept of ontological security, which we hadn’t yet seen discussed much in relation to AI. AI policy, especially in international relations and military contexts, often focuses on hard security, meaning physical threats to state sovereignty or military stability. We were interested in what else these policies were doing. Drawing on Lupovici’s work (2022) on ontological security and cybersecurity, we explored how AI policies don’t just address external threats but also help sustain a sense of stability and identity.

In this way, both conceptual approaches – WPR and ontological security – helped my co-authors and me to move beyond instrumental, techno-solutionist understandings of AI governance. They allowed us to ask what policies do beyond regulating technology: how they shape identities and visions of the future. These perspectives also make our work more accessible to broader audiences. The WPR approach offers a structured way to reflect on how policy frames the problems it addresses. Ontological security, meanwhile, provides a language for thinking about AI policy not just in terms of risk management or security threats but in terms of how states and institutions construct meaning and stability alongside technological change.

The Role of Collaboration

“…having regular discussions with our colleagues about what counts as authorship, how to acknowledge contributions, and what is expected of each collaborator was immensely helpful in setting and managing expectations and workload.”

Bao-Chi: Absolutely, collaboration was central to both papers, but the experiences were quite different, each shaping my growth as a researcher.

The first paper, written with Sarah, was my first journal article and my first as lead author. Writing with someone more experienced and in a clear position of seniority brought a certain safety but also required some navigation. On the one hand, I benefited enormously from Sarah’s guidance, particularly in structuring the paper, crafting a clear argument, and writing for an academic audience. On the other, I was learning to find my own voice as an early-career researcher, constantly asking myself, “What do I think is important? What do I want to say, and how? Is this good enough for an academic publication (and ultimately for my PhD)?” More than once, I felt stuck, procrastinated, and pushed back deadlines. This didn’t feel great in a collaboration, especially with my supervisor. I really appreciated Sarah’s patience and encouragement, and that she didn’t “need” this paper for her publication record – it was about getting me over the line with my first paper – and she was happy to let me take the lead and work at my own pace.

Here, having regular discussions with our colleagues about what counts as authorship, how to acknowledge contributions, and what is expected of each collaborator was immensely helpful in setting and managing expectations and workload. It also proved incredibly useful for the second collaboration, which had a very different setup.

Stefka, Anna, and I work in different disciplines (Computer Science, Peace Studies, and STS) and at different institutions in Germany and Austria. We were also all PhD candidates at the time. All of this meant that integrating our perspectives took extra effort. Our collaboration was mostly mediated by digital tools – Zoom for discussions, Stefka’s university’s cloud system for file-sharing, and Overleaf, a LaTeX editor, for drafting, which – I’ll be honest – was clunky at times! But those logistical hurdles were secondary to the real task: ensuring all three of our voices were present in the paper. Since for two of us the paper counts towards our dissertations, it felt particularly important that we each saw ourselves reflected in the final text.

“I also realized how much I had taken certain STS tenets for granted, such as that technologies don’t simply exist but are always co-produced with societal and political orderings. Explicitly articulating that to my collaborators, both in writing and discussions, made me more aware of my own assumptions.”

The first paper gave me practical experience in writing in an STS style and a clearer sense of what “good writing” looks like in our field. That, in turn, helped me push back at certain points in the second paper’s interdisciplinary writing process when I felt the STS perspective risked getting lost. At the same time, I was also learning from my co-authors: both Stefka and Anna would highlight aspects that required more precision in their own disciplines. I started to notice a shift in my role – from receiving feedback on my first paper to flagging when our argument needed sharpening in the second paper. I also realized how much I had taken certain STS tenets for granted, such as that technologies don’t simply exist but are always co-produced with societal and political orderings. Explicitly articulating that to my collaborators, both in writing and discussions, made me more aware of my own assumptions.

In short, both collaborations shaped me in different ways. The first paper grounded me as an STS scholar; the second challenged me to articulate that position in a broader interdisciplinary conversation while also learning from other disciplines and their conventions.

The Peer Review Journey

“Having someone engage thoughtfully and thoroughly with our work felt like entering into a conversation, rather than receiving a one-sided judgment.”

Katja: The peer review process is often challenging yet transformative. How did it shape your papers? Did you adapt your writing style for different journals or audiences? What was it like navigating reviewer feedback, and how long did the process take? We’d love to hear any advice you have for early-career researchers about managing revisions and responding to reviewer comments constructively.

Bao-Chi: I was initially apprehensive about the peer review process, especially with cautionary tales and the notorious “reviewer 2” memes circulating online. However, I was pleasantly surprised by how constructive and insightful the reviews were, despite the process not being entirely smooth (but then, which review process ever is?).

For the first paper, the review process took 15 months from submission to publication. Instead of harsh comments, reviewer 2’s feedback was minimal – only one sentence – so the editor reached out to a third reviewer for a more substantial response. Despite the lengthy process, the feedback was very helpful in refining the argument. In particular, the reviewers criticized our choice of which policy solution to unpack using the WPR approach’s seven guiding questions. This led us to focus on the AI Act’s risk-based classification system, rather than the more ambiguous “trustworthy AI” policy discourse, which, I think, ultimately strengthened our paper.

Having someone engage thoughtfully and thoroughly with our work felt like entering into a conversation, rather than receiving a one-sided judgment. Sarah also introduced me to a very useful system for addressing peer review, which I’ve continued to use ever since: creating a table with each comment and treating it like a to-do list. This made the revision process more structured and helped me manage what would otherwise feel incredibly overwhelming.

For the paper with Stefka and Anna, we received a desk rejection from another journal just before Christmas. Had I been on my own, I might have been discouraged by the setback, but I am grateful that Stefka quickly resubmitted the paper to Cooperation & Conflict; after nine months of review, the paper was accepted.

In terms of revisions, the feedback helped us sharpen our argument and better highlight our contribution. For example, one reviewer asked us to clarify the literature we were engaging with and to better contextualize and acknowledge the works we were drawing on. This not only strengthened our own contribution but also helped us weave the literature more effectively into the empirical sections. Having an external voice point out areas where we had made implicit connections that were unclear to readers was very useful in streamlining and signposting our text.

Overall, I found the review process to be far more collaborative and rewarding than I had expected (though it definitely helped that both papers were co-authored to begin with). It reminded me that revision is an essential part of the academic writing journey, one that can help sharpen ideas and strengthen the argument.

A final piece of advice: treating the review process like a to-do list has been extremely useful to me in knowing when to stop editing. It gives you a clear goal: once every comment has been addressed, either by integrating it into the text or by justifying why you chose not to, you know the paper is “ready” for publication. This is a clarity we often don’t have when preparing papers for initial submission.

Wrapping Up: Reflecting on Key Themes

Katja: Before we close, let’s return to the papers’ content. What key themes connect them? What motivates you in your research, and what messages were you aiming to convey? How do you see these papers contributing to ongoing debates in AI governance?

“AI policy is not just about managing technological risks and their implications but also actively shapes how AI is understood and governed – that is, how AI is made doable and thinkable… policy shapes what AI is understood to be and what kinds of futures become possible.”

Bao-Chi: A key theme in both papers is that AI policy is not just about managing technological risks and their implications but also actively shapes how AI is understood and governed – that is, how AI is made doable and thinkable. In other words, policy shapes what AI is understood to be and what kinds of futures become possible. Both papers therefore take a co-productionist perspective, highlighting how policy is neither neutral nor inevitable but instead reflects specific political choices and value-laden assumptions. The first paper examines how the AI Act constructs a particular vision of AI through its risk-based classification system, reinforcing particular political choices and visions of Europeanness. The second paper explores the role of ontological security, demonstrating that AI policies do not just regulate technology but also aim to provide a sense of stability in a rapidly shifting technological landscape.

What motivates my research is the drive to critically examine how AI is framed, governed, and imagined, and to explore how things could be otherwise. Much of the current AI policy debate is framed in narrow, technical terms, focusing on concepts like trust, risk, transparency, and explainability. At the same time, we also see non-governmental actors, particularly large technology companies, increasingly shaping these discussions. In this context, I am especially interested in how policies that appear neutral or solution-oriented actually reproduce and circulate implicit assumptions about what kind of AI – and what kind of society – we should be striving for. By unpacking these assumptions, our research contributes to challenging dominant narratives and opening up space for alternative ways of thinking about AI governance and what kinds of future are made possible or foreclosed through it.

Katja: Thank you so much, Bao-Chi, for sharing these reflections. We’re inspired not only by your research but also by your openness in discussing the academic process. Your insights on collaboration, writing, and peer review offer valuable lessons for all of us dealing with the complexities and often pressures of scholarly publishing.