Embedding AI in Deliberation: The Role of Institutional Interventions
The promise of using AI applications in democratic deliberation between citizens is a compelling one for public organisations. Whether it is summarising large volumes of citizen comments or moderating interactions between deliberators, AI technologies seem to offer solutions to long-standing challenges in traditional deliberative processes. Nevertheless, positioning AI as the ultimate problem-solver risks making technology the driving force behind reshaping deliberative practices. Too often, the conversation on AI-enabled deliberation – both in research and practice – revolves solely around the AI applications themselves. This tech-centred focus overlooks the interplay between AI technologies and the social context in which they are situated and deployed – i.e., the interactions between deliberators and official decision-making processes.
The interplay between AI applications and deliberative practices also structures the choices made for specific applications. Take, for example, the keen interest in using AI for summarisation in deliberative settings. This focus reflects existing institutional contexts and social practices in which managing high volumes of input, or synthesising it efficiently, is troublesome. Still, no matter how sophisticated the summarisation function, implementing the AI application alone will not make deliberation more meaningful. Human actors remain essential – both to operate the AI tools and to make sense of their output. This means that new practices, norms, and institutional structures must evolve alongside the technology.
Work Package 3 (WP3), within the AI4Deliberation project, concentrates on the institutionalisation of AI-enabled deliberation. It aims to explore the new practices needed to use AI applications in democratic deliberation meaningfully, sustainably, and ethically. By understanding these practices, we can inform and instruct public organisations on how to embed AI technologies in their own deliberation processes.
From a theoretical perspective, WP3 will study the institutional interventions needed to enact AI-enabled deliberation. Institutional interventions comprise all forms of social rules at the disposal of public organisations that structure the behaviour of actors in deliberative practices. These include procedures, work instructions, and codes of conduct, but also efforts to establish shared understandings among actors, and more. Institutions are not just policy afterthoughts; they actively shape how AI tools are used and what kind of practices emerge around them. As such, institutional interventions are a means to realise AI-enabled deliberation that adheres to, guarantees, and strengthens democratic and Rule of Law practices in general. Institutions can, for example, regulate how the output of an AI-enabled deliberation process is adopted within official decision-making processes, strengthen the position of marginalised groups in deliberation, and guarantee means of self-governance for deliberators. Yet institutions are often harder to define and measure than the technical artefacts themselves. Their intangible nature makes their influence less visible and their effectiveness more difficult to assess. Still, they are essential.
WP3 also brings these abstract notions of institutional interventions into practice. We are developing a comprehensive framework that guides public organisations in designing AI-enabled deliberative processes and supports them in using these technologies in deliberation. The framework consists of practical guidelines, policy recommendations, and road maps that should enable public organisations to meaningfully and effectively embed AI technologies in deliberative processes. To this end, the framework will be accompanied by training material to support capacity building, and by information on investment decisions.
WP3’s approach is grounded in Action Design Research (ADR) – a methodology that starts with real-world problems and uses them to design practical solutions while generating broader insights. The framework is being co-created with practitioners and experts within the four pilots of the AI4Deliberation project. These co-creation sessions themselves model the kind of deliberation we hope to support.
AI-enabled deliberation is not just about implementing new tools in deliberative processes – it’s about reshaping how citizens interact among themselves and with government. A comprehensive framework of institutional interventions can help public organisations build the right foundations for meaningful, sustainable, and ethical AI-enabled deliberation.
Blog AI4Deliberation
Sem Nouws – TUD
July 2025