The Interplay of Emotions and Convincingness in Argument Mining for NLP (EMCONA)
Whether an argument is convincing may depend on various factors, for example its logical structure, the clarity of its presentation, but also its emotional connotation. Research in argumentation theory and social psychology shows that arguments and emotions interact, but this prior knowledge has not yet been deeply exploited in natural language processing (NLP). Instead, existing work in argument mining treats emotionality in a shallow manner, typically as an ordinal variable. In EMCONA, we study the interplay of emotion and convincingness from an NLP perspective and focus on several aspects.

We will (1) analyze how emotions are communicated in the context of argumentation. To do so, we will build on psychological emotion theories, particularly appraisal theories, to estimate the role of societal standards and personal goals in the development of emotions. Based on this knowledge and the subsequent development of fine-grained emotion analysis systems, we will (2) analyze the interplay with the convincingness of arguments. This will lead to computational models that jointly represent emotions and argument convincingness, controlled for topic and stance (illustrated by the sketch below). We will study the interplay between these variables not only in deep-learning-based classification settings, but also (3) in a conditional argument generation setup, which is then (4) evaluated in a user study to understand the boundaries of such systems and where emotions and convincingness are in conflict. Based on these models and their introspection, we will also (5) study whether they make their decisions following known patterns of communication strategies, for instance "fear-then-relief" (in which fear is first induced in an interlocutor and a solution is subsequently offered) or "door-in-the-face" (which starts with a larger request than the one actually of interest and in which the emotion of guilt plays an important role), in order to better understand modeling decision processes.

With this research, we will develop an improved understanding of how emotions and convincingness interact in computational argument mining systems. We expect that this knowledge will improve classification approaches for all involved variables. Further, our conditional generation models serve an educational purpose: we ultimately aim for ethical, bias-free (fallacy-free) argumentation models that do not exploit emotions in unjustified ways. Our project thus provides the basis for, for instance, warning social media users when certain argumentation strategies are used, but may also support inexperienced discussion participants in creating high-quality arguments that make only justified use of emotions. We therefore expect our project to have important societal impact.
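To make the idea of jointly representing emotions and convincingness more concrete, the following is a minimal sketch of one possible multi-task setup: a shared encoder with separate heads for emotion classification and convincingness scoring, with topic and stance prepended to the input as a simple form of control. The encoder name, the emotion inventory size, and the input format are illustrative assumptions, not the project's actual design.

```python
# Minimal multi-task sketch (illustrative assumptions: encoder choice, emotion
# label count, and how topic/stance are injected are NOT the project's design).
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JointEmotionConvincingnessModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_emotions=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Two task-specific heads on top of a shared argument representation.
        self.emotion_head = nn.Linear(hidden, num_emotions)   # emotion classification
        self.convincingness_head = nn.Linear(hidden, 1)       # convincingness score

    def forward(self, input_ids, attention_mask):
        # Use the [CLS] token representation as a pooled argument embedding.
        pooled = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.emotion_head(pooled), self.convincingness_head(pooled).squeeze(-1)

# Controlling for topic and stance could, for instance, be approximated by
# prepending them to the argument text before encoding.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["topic: nuclear energy | stance: pro | argument: ..."],
                  return_tensors="pt", padding=True, truncation=True)
model = JointEmotionConvincingnessModel()
emotion_logits, convincingness = model(batch["input_ids"], batch["attention_mask"])
```

A shared encoder forces both tasks to rely on a common argument representation, which is one simple way to operationalize a joint model of emotions and convincingness; the project itself may of course adopt different architectures.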
The project starts in May 2024 and is funded by the German Research Foundation (DFG). It is co-led with Steffen Eger (Universität Mannheim).