Overview of Methods for Computational Text Analysis to Support the Evaluation of Contributions in Public Participation

In this publication in Digital Government: Research and Practice, Julia Romberg and Tobias Escher offer a review of the computational techniques that have been used to support the evaluation of contributions in public participation processes. Based on a systematic literature review, they assess the performance of these techniques and suggest directions for future research.

Abstract

Public sector institutions that consult citizens to inform decision-making face the challenge of evaluating the contributions made by citizens. This evaluation has important democratic implications but at the same time, consumes substantial human resources. However, until now the use of artificial intelligence such as computer-supported text analysis has remained an under-studied solution to this problem. We identify three generic tasks in the evaluation process that could benefit from natural language processing (NLP). Based on a systematic literature search in two databases on computational linguistics and digital government, we provide a detailed review of existing methods and their performance. While some promising approaches exist, for instance to group data thematically and to detect arguments and opinions, we show that there remain important challenges before these could offer any reliable support in practice. These include the quality of results, the applicability to non-English language corpora and making algorithmic models available to practitioners through software. We discuss a number of avenues that future research should pursue that can ultimately lead to solutions for practice. The most promising of these bring in the expertise of human evaluators, for example through active learning approaches or interactive topic modelling.

Key findings

  • There are a number of tasks in the evaluation process that could be supported through Natural Language Processing (NLP). Broadly speaking, these are i) detecting (near) duplicates, ii) grouping contributions by topic and iii) analyzing the individual contributions in depth (a minimal illustration of the first task follows below this list). Most of the literature in this review focuses on the automated recognition and analysis of arguments, one particular aspect of the in-depth analysis of contributions.
  • We provide a comprehensive overview of the datasets used as well as the algorithms employed and aim to assess their performance. Generally, despite promising results so far, the significant advances in NLP techniques in recent years have barely been exploited in this domain.
  • A particular gap is that few applications exist that would enable practitioners to easily apply NLP to their data and reap the benefits of these methods.
  • The manual labelling effort required for training machine learning models risks cancelling out any efficiency gains from automation.
  • We suggest a number of fruitful future research avenues, many of which draw upon the expertise of humans, for example through active learning or interactive topic modelling.
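To make the first of these tasks more concrete, the following is a minimal sketch of near-duplicate detection based on TF-IDF cosine similarity. It is not one of the specific methods assessed in the review; the similarity threshold and the toy contributions are assumptions chosen purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_near_duplicates(contributions, threshold=0.5):
    """Return pairs of contributions whose TF-IDF cosine similarity exceeds
    the (arbitrarily chosen) threshold."""
    tfidf = TfidfVectorizer().fit_transform(contributions)
    sims = cosine_similarity(tfidf)
    pairs = []
    for i in range(len(contributions)):
        for j in range(i + 1, len(contributions)):
            if sims[i, j] >= threshold:
                pairs.append((i, j, float(sims[i, j])))
    return pairs

# Toy example with three contributions
docs = [
    "Please add a protected bike lane on Main Street.",
    "A protected bike lane on Main Street would be great.",
    "The bus schedule should be extended in the evening.",
]
print(find_near_duplicates(docs))
```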

Publication

Romberg, Julia; Escher, Tobias (2023): Making Sense of Citizens’ Input through Artificial Intelligence. In: Digital Government: Research and Practice, Article 3603254. DOI: 10.1145/3603254.

Expert evidence: State of research on opportunities, challenges and limitations of digital participation

As set out in the German Site Selection Act (StandAG), the Federal Office for the Safety of Nuclear Waste Management (BASE) is charged with comprehensively informing and involving the public in the procedure for the search for and selection of a repository site for the final disposal of high-level radioactive waste. In this context, in February 2022 BASE commissioned an expert report on the “Possibilities and limits of digital participation tools for public participation in the repository site selection procedure (DigiBeSt)” from the Düsseldorf Institute for Internet and Democracy (DIID) at Heinrich Heine University Düsseldorf in cooperation with the nexus Institute Berlin. For this purpose, a review of the state of research and current developments (work package 2) was prepared under the lead of Tobias Escher and has been summarised in a detailed report (in German).

Selected findings from the report are:

  • Social inequalities in digital participation are mainly based on the second-level digital divide, i.e. differences in the media- and content-related skills required for independent and constructive use of the internet for political participation.
  • Knowledge about the effectiveness of activation factors is still often incomplete and anecdotal, making it difficult for initiators to estimate the costs and benefits of individual measures.
  • Personal invitations have proven suitable for (target group-specific) mobilisation, but the established mass media also continue to play an important role.
  • Broad and inclusive participation requires a combination of different digital and analogue participation formats.
  • Participation formats at the national level face particular challenges due to the complexity of the issues at stake and the size of the target group. Therefore, these require the implementation of cascaded procedures (interlocking formats of participation at different political levels) as well as the creation of new institutions.

Publication

Lütters, Stefanie; Escher, Tobias; Soßdorf, Anna; Gerl, Katharina; Haas, Claudia; Bosch, Claudia (2024): Möglichkeiten und Grenzen digitaler Beteiligungsinstrumente für die Beteiligung der Öffentlichkeit im Standortauswahlverfahren (DigiBeSt). Edited by Düsseldorfer Institut für Internet und Demokratie and nexus Institut. Bundesamt für die Sicherheit der nuklearen Entsorgung (BASE). Berlin (BASE-RESFOR 026/24). Available online: https://www.base.bund.de/DE/themen/fa/sozio/projekte-ende/projekte-ende.html

Enriching Machine Prediction with Subjectivity Using the Example of Argument Concreteness in Public Participation

In this publication in the Workshop on Argument Mining, Julia Romberg develops a method to incorporate human perspectivism into machine prediction. The method is tested on the task of classifying argument concreteness in public participation contributions.

Abstract

Although argumentation can be highly subjective, the common practice with supervised machine learning is to construct and learn from an aggregated ground truth formed from individual judgments by majority voting, averaging, or adjudication. This approach leads to a neglect of individual but potentially important perspectives and in many cases cannot do justice to the subjective character of the tasks. One solution to this shortcoming is offered by multi-perspective approaches, which have so far received very little attention in the field of argument mining.

In this work we present PerspectifyMe, a method to incorporate perspectivism by enriching a task with subjectivity information from the data annotation process. We exemplify our approach with the use case of classifying argument concreteness, and provide first promising results for the recently published CIMT PartEval Argument Concreteness Corpus.

Key findings

  • Machine learning often assumes a single ground truth to learn from, but this does not hold for subjective tasks.
  • PerspectifyMe is a simple method to incorporate perspectivism into existing machine learning workflows by complementing an aggregated label with a subjectivity score (see the sketch after this list).
  • An example of a subjective task is the classification of the concreteness of an argument (low, medium, high), a task whose solution can also benefit the machine-assisted evaluation of public participation processes.
  • First approaches to classifying the concreteness of arguments (aggregated label) achieve an accuracy of 0.80 and an F1 score of 0.67.
  • The subjectivity of concreteness perception (objective vs. subjective) can be predicted with an accuracy of 0.72 and an F1 score of 0.74.
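To illustrate the general idea of complementing an aggregated label with subjectivity information derived from the annotation process, the sketch below computes a majority label and a simple disagreement-based subjectivity indicator from individual annotator judgments. The exact operationalisation used in PerspectifyMe may differ; the rule "any disagreement counts as subjective" is an assumption made only for this example.

```python
from collections import Counter

def aggregate_with_subjectivity(annotations):
    """Aggregate one item's annotator labels into a majority label and a
    subjectivity indicator based on annotator disagreement. The rule
    'any disagreement = subjective' is illustrative, not necessarily the
    definition used in PerspectifyMe."""
    counts = Counter(annotations)
    majority_label, majority_count = counts.most_common(1)[0]
    agreement = majority_count / len(annotations)
    subjective = agreement < 1.0
    return majority_label, subjective, agreement

# Example: three annotators judge the concreteness of one argument
print(aggregate_with_subjectivity(["high", "intermediate", "high"]))
# -> ('high', True, 0.666...)
```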

Publication

Romberg, Julia (2022, October). Is Your Perspective Also My Perspective? Enriching Prediction with Subjectivity. In Proceedings of the 9th Workshop on Argument Mining (pp. 115–125), Gyeongju, Republic of Korea. Association for Computational Linguistics. https://aclanthology.org/2022.argmining-1.11

Automated Topic Categorization of Citizens’ Contributions: Reducing Manual Labeling Efforts Through Active Learning

In this publication in Electronic Government, Julia Romberg and Tobias Escher investigate the potential of active learning for reducing the manual labeling efforts in categorizing public participation contributions thematically.

Abstract

Political authorities in democratic countries regularly consult the public on specific issues, but subsequently evaluating the contributions requires substantial human resources, often leading to inefficiencies and delays in the decision-making process. Among the solutions proposed is to support human analysts by thematically grouping the contributions through automated means.

While supervised machine learning would naturally lend itself to the task of classifying citizens’ proposals according to certain predefined topics, the amount of training data required is often prohibitive given the idiosyncratic nature of most public participation processes. One potential solution to minimize the amount of training data is the use of active learning. While this semi-supervised procedure has proliferated in recent years, these promising approaches have never been applied to the evaluation of participation contributions.

Therefore, we utilize data from online participation processes in three German cities, provide classification baselines and subsequently assess how different active learning strategies can reduce manual labeling efforts while maintaining good model performance. Our results show not only that supervised machine learning models can reliably classify topic categories for public participation contributions, but also that active learning significantly reduces the amount of training data required. This has important implications for the practice of public participation because it dramatically cuts the time required for evaluation, which particularly benefits processes with a larger number of contributions.

Key findings

  • We compare a variety of state-of-the-art approaches for text classification and active learning on a case study of three nearly identical participation processes for cycling infrastructure in the German municipalities of Bonn, Ehrenfeld (a district of Cologne) and Moers.
  • We find that BERT can predict the correct topic(s) for about 77% of the cases.
  • Active learning significantly reduces manual labeling efforts: it was sufficient to manually label 20% to 50% of the datasets to maintain the level of accuracy (a minimal active learning loop is sketched after this list). Efficiency improvements grow with the size of the dataset.
  • At the same time, the runtime of the models remains efficient.
  • We therefore hypothesize that active learning should significantly reduce human efforts in most use cases.
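The sketch below shows what a pool-based active learning loop with uncertainty (least-confidence) sampling can look like in principle. It deliberately uses a simple TF-IDF and logistic regression classifier rather than the BERT models and query strategies evaluated in the paper; all hyperparameters are illustrative assumptions, and `oracle_labels` stands in for the human annotator who would be queried in practice.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_loop(texts, oracle_labels, seed_size=10, query_size=10, rounds=5):
    """Minimal pool-based active learning with least-confidence sampling.
    The initial seed should contain every topic category at least once."""
    X = TfidfVectorizer().fit_transform(texts)
    labeled = list(range(seed_size))             # indices with known labels
    pool = list(range(seed_size, len(texts)))    # unlabeled pool
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X[labeled], [oracle_labels[i] for i in labeled])
        confidence = clf.predict_proba(X[pool]).max(axis=1)
        # query the pool examples the model is least confident about
        query = np.argsort(confidence)[:query_size]
        newly_labeled = {pool[i] for i in query}
        labeled.extend(newly_labeled)
        pool = [i for i in pool if i not in newly_labeled]
    return clf, labeled
```

In practice, the loop would be stopped once performance on a held-out validation set plateaus; this is what makes it possible to label only a fraction of the data.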

Publication

Romberg, Julia; Escher, Tobias (2022). Automated Topic Categorisation of Citizens’ Contributions: Reducing Manual Labelling Efforts Through Active Learning. In M. Janssen, C. Csáki, I. Lindgren, E. Loukis, U. Melin, G. Viale Pereira, M. P. Rodríguez Bolívar, and E. Tambouris (Eds.), Electronic Government (pp. 369–385). Cham: Springer International Publishing. ISBN 978-3-031-15086-9.

A Corpus of German Citizen Contributions in Mobility Planning: Supporting Evaluation Through Multidimensional Classification

In this publication in the Conference on Language Resources and Evaluation, Julia Romberg, Laura Mark and Tobias Escher introduce a collection of annotated datasets that promotes the development of machine learning approaches to support the evaluation of public participation contributions.

Abstract

Political authorities in democratic countries regularly consult the public in order to allow citizens to voice their ideas and concerns on specific issues. When trying to evaluate the (often large number of) contributions by the public in order to inform decision-making, authorities regularly face challenges due to restricted resources.

We identify several tasks whose automated support can help in the evaluation of public participation. These are i) the recognition of arguments, more precisely premises and their conclusions, ii) the assessment of the concreteness of arguments, iii) the detection of textual descriptions of locations in order to assign citizens’ ideas to a spatial location, and iv) the thematic categorization of contributions. To enable future research efforts to develop techniques addressing these four tasks, we introduce the CIMT PartEval Corpus, a new publicly available German-language corpus that includes several thousand citizen contributions from six mobility-related planning processes in five German municipalities. The corpus provides annotations for each of these tasks, which have previously not been available in German for the domain of public participation, either at all or in this scope and variety.

Key findings

  • The CIMT PartEval Argument Component Corpus comprises 17,852 sentences from German public participation processes annotated as non-argumentative, premise, or major position.
  • The CIMT PartEval Argument Concreteness Corpus consists of 1,127 argumentative text spans that are annotated according to three levels of concreteness: low, intermediate, and high.
  • The CIMT PartEval Geographic Location Corpus consists of 4,830 locations and the GPS coordinates for 2,529 proposals from public consultations.
  • The CIMT PartEval Thematic Categorization Corpus relies on a new hierarchical categorization scheme for mobility that captures modes of transport (non-motorized transport: cycling, walking, scooters; motorized transport: local public transport, long-distance public transport, commercial transport) and a number of specifications, such as moving or stationary traffic, new services, and inter- and multimodality. In total, 697 documents have been annotated according to this scheme.

Publication

Romberg, Julia; Mark, Laura; Escher, Tobias (2022, June). A Corpus of German Citizen Contributions in Mobility Planning: Supporting Evaluation Through Multidimensional Classification. In Proceedings of the Language Resources and Evaluation Conference (pp. 2874–2883), Marseille, France. European Language Resources Association. https://aclanthology.org/2022.lrec-1.308

Corpus available at

https://github.com/juliaromberg/cimt-argument-mining-dataset

https://github.com/juliaromberg/cimt-argument-concreteness-dataset

https://github.com/juliaromberg/cimt-geographic-location-dataset

https://github.com/juliaromberg/cimt-thematic-categorization-dataset

Robust Methods for Classifying Argument Components in Public Participation Processes for Mobility Planning

In this publication in the Workshop on Argument Mining, Julia Romberg and Stefan Conrad address the robustness of classification algorithms for argument mining to build reliable models that generalize across datasets.

Abstract

Public participation processes allow citizens to engage in municipal decision-making by expressing their opinions on specific issues. Municipalities often have only limited resources to analyze a possibly large number of textual contributions that need to be evaluated in a timely and detailed manner. Automated support for the evaluation is therefore essential, e.g. to analyze arguments.

In this paper, we address (A) the identification of argumentative discourse units and (B) their classification as major position or premise in German public participation processes. The objective of our work is to make argument mining viable for use in municipalities. We compare different argument mining approaches and develop a generic model that can successfully detect argument structures in different datasets of mobility-related urban planning. We introduce a new data corpus comprising five public participation processes. In our evaluation, we achieve high macro F1 scores (0.76 – 0.80 for the identification of argumentative units; 0.86 – 0.93 for their classification) on all datasets. Additionally, we improve previous results for the classification of argumentative units on a similar German online participation dataset.

Key findings

  • We conducted a comprehensive evaluation of machine learning methods across five public participation processes in German municipalities that differ in format (online participation platforms and questionnaires) and process subject.
  • BERT surpasses previously published argument mining approaches for public participation processes on German data for both tasks, reaching macro F1 scores of 0.76 – 0.80 for the identification of argumentative units and 0.86 – 0.93 for their classification (a fine-tuning sketch follows this list).
  • In a cross-dataset evaluation, BERT models trained on one dataset can recognize argument structures in other public participation datasets (which were not part of the training) with comparable performance.
  • Such model robustness across datasets is an important step towards the practical application of argument mining in municipalities.
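For readers who want to see what such a sentence-level classifier looks like in code, the following is a minimal fine-tuning sketch using the Hugging Face transformers library. The model name, hyperparameters and the single toy training sentence are assumptions for illustration and do not reproduce the paper's exact setup.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative assumptions: model choice and hyperparameters are not taken
# from the paper; the labels follow the three-class scheme described above.
MODEL_NAME = "bert-base-german-cased"
LABELS = ["non-argumentative", "major position", "premise"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128,
                     padding="max_length")

# Toy placeholder data; in practice this would be the annotated corpus.
train_ds = Dataset.from_dict({
    "text": ["Die Radwege an der Hauptstraße sind zu schmal."],
    "label": [2],  # index into LABELS (premise)
}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="argument-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```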

Publication

Romberg, Julia; Conrad, Stefan (2021, November). Citizen Involvement in Urban Planning – How Can Municipalities Be Supported in Evaluating Public Participation Processes for Mobility Transitions?. In Proceedings of the 8th Workshop on Argument Mining (pp. 89-99), Punta Cana, Dominican Republic. Association for Computational Linguistics. https://aclanthology.org/2021.argmining-1.9

Results of the first practical workshop of the junior research group CIMT

Our first practical workshop in summer 2020 focused on the question of how the evaluation of citizen contributions can be technically supported and what requirements practitioners have for a software solution designed to (partially) automate the evaluation.

More information can be found in the working paper (German version only!):