0000000000286486
AUTHOR
Iryna Gurevych
Focusing Knowledge-based Graph Argument Mining via Topic Modeling
Decision-making usually takes five steps: identifying the problem, collecting data, extracting evidence, identifying pro and con arguments, and making decisions. Focusing on extracting evidence, this paper presents a hybrid model that combines latent Dirichlet allocation and word embeddings to obtain external knowledge from structured and unstructured data. We study the task of sentence-level argument mining, as arguments mostly require some degree of world knowledge to be identified and understood. Given a topic and a sentence, the goal is to classify whether the sentence represents an argument with regard to the topic. We use a topic model to extract topic- and sentence-specific evidence from…
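The abstract does not spell out the model, but the idea of pairing LDA topic distributions with word embeddings for topic-dependent argument classification can be illustrated with a minimal sketch. Everything below (the gensim/scikit-learn setup, the toy triples, the feature concatenation, and all hyperparameters) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec
from sklearn.linear_model import LogisticRegression

NUM_TOPICS, DIM = 2, 25

# Toy training triples: (debate topic, sentence, 1 = argument / 0 = no argument).
data = [
    ("nuclear energy", "reactors emit no carbon during operation", 1),
    ("nuclear energy", "the plant opened on a rainy tuesday", 0),
    ("school uniforms", "uniforms reduce visible income differences", 1),
    ("school uniforms", "the school is near the train station", 0),
]
tokenized = [f"{t} {s}".split() for t, s, _ in data]

# Topic model and word embeddings, both trained on the (tiny) corpus;
# in practice a large background corpus / pretrained vectors would be used.
dictionary = Dictionary(tokenized)
lda = LdaModel([dictionary.doc2bow(d) for d in tokenized],
               id2word=dictionary, num_topics=NUM_TOPICS, random_state=0)
w2v = Word2Vec(tokenized, vector_size=DIM, min_count=1, seed=0)

def embed(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

def features(topic, sentence):
    tokens = f"{topic} {sentence}".split()
    dist = np.zeros(NUM_TOPICS)
    for tid, p in lda.get_document_topics(dictionary.doc2bow(tokens),
                                          minimum_probability=0.0):
        dist[tid] = p
    # Topic-model evidence concatenated with topic and sentence embeddings.
    return np.concatenate([dist, embed(topic.split()), embed(sentence.split())])

X = np.stack([features(t, s) for t, s, _ in data])
y = [label for _, _, label in data]
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict([features("nuclear energy", "waste storage remains unsolved")]))
```

The sketch only shows the plumbing; the paper's contribution lies in what evidence the topic model retrieves and how it is injected, which the truncated abstract does not detail.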
Automatically Detecting Incivility in Online Discussions of News Media
Detecting biased language in written discourse is a highly relevant area of research in political communication and other social sciences, given the large quantity of information exchanged on public online platforms. In this abstract, we discuss an approach based on the concept of "incivility" for assessing biased text on the Facebook pages of established news media. News outlets are forced to put increasing effort into preventing heated debates from turning into disrespectful discussions on their social media platforms. By scaling the analysis from a few thousand manually coded samples to more than a million comments, we take a step towards supporting media outlets in (semi-)automating the …
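As a rough illustration of the scale-up described above (a classifier fit on hand-coded samples, then applied to a large stream of comments), here is a minimal sketch. The toy comments, TF-IDF features, and logistic regression model are assumptions for illustration, not the model used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-coded samples: 1 = uncivil, 0 = civil.
coded_comments = [
    ("Thanks for the balanced report, very informative.", 0),
    ("Anyone who believes this is a complete idiot.", 1),
    ("I disagree, but I see where the author is coming from.", 0),
    ("Shut up, you people are a disgrace.", 1),
]
texts, labels = zip(*coded_comments)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# In practice this loop would run over millions of collected comments.
incoming = ["What a stupid take, typical of these clowns.",
            "Could you link the original study?"]
for comment, p in zip(incoming, model.predict_proba(incoming)[:, 1]):
    print(f"{p:.2f}  {comment}")
```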
Investigating label suggestions for opinion mining in German Covid-19 social media
This work investigates the use of interactively updated label suggestions to improve the efficiency of gathering annotations for the task of opinion mining in German Covid-19 social media data. We develop guidelines to conduct a controlled annotation study with social science students and find that suggestions from a model trained on a small, expert-annotated dataset already lead to a substantial improvement in terms of inter-annotator agreement (+.14 Fleiss' $\kappa$) and annotation quality, compared to students who do not receive any label suggestions. We further find that label suggestions from interactively trained models do not lead to an improvement over suggestions from a stat…
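A minimal sketch of the label-suggestion loop described above: a model trained on a small expert-annotated seed set proposes a label for each new comment, the annotator confirms or corrects it, and the model is retrained on the growing set. The labels, toy comments, and scikit-learn setup below are illustrative assumptions, not the study's actual annotation system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small expert-annotated seed set (opinion labels are illustrative).
seed = [
    ("Masks in schools are long overdue.", "support"),
    ("These curfews destroy small businesses.", "oppose"),
    ("The new rules take effect on Monday.", "neutral"),
    ("Vaccination saved my grandmother's life.", "support"),
    ("Lockdowns are pure government overreach.", "oppose"),
    ("The press conference starts at noon.", "neutral"),
]
texts, labels = map(list, zip(*seed))

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

def suggest(comment):
    """Return the model's label suggestion shown to the annotator."""
    return model.predict([comment])[0]

def record_annotation(comment, final_label):
    """Store the annotator's (possibly corrected) label and retrain the model."""
    texts.append(comment)
    labels.append(final_label)
    model.fit(texts, labels)  # interactive update after each item or batch

new_comment = "I think the vaccination campaign is going well."
print("suggestion:", suggest(new_comment))
record_annotation(new_comment, "support")  # annotator confirms or overrides
```

Whether the retraining step happens per item, per batch, or not at all (a static model) is exactly the design choice the study compares.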