SODAS Lecture: Automatically explaining fact checking predictions

SODAS Blurb

We are delighted to host Isabelle Augenstein for this SODAS Lecture.

Title:

Automatically explaining fact checking predictions

Abstract:

The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns to influence politics, to the unintentional spreading of misinformation about public health. This development has spurred research in the area of automatic fact checking, from approaches to detecting check-worthy claims and determining the stance of tweets towards claims, to methods for determining the veracity of claims given evidence documents. These automatic methods are often content-based, using natural language processing methods, which in turn utilise deep neural networks to learn higher-order features from text in order to make predictions. As deep neural networks are black-box models, their inner workings cannot be easily explained. At the same time, it is desirable to explain how they arrive at certain decisions, especially if they are to be used for decision making. While this has been known for some time, the issues it raises have been exacerbated by models increasing in size, by EU legislation requiring models used for decision making to provide explanations, and, very recently, by legislation requiring online platforms operating in the EU to provide transparent reporting on their services. Despite this, current solutions for explainability are still largely lacking in the area of fact checking. This talk provides a brief introduction to the area of automatic fact checking, including claim check-worthiness detection, stance detection and veracity prediction. It then presents some first solutions for generating and automatically evaluating explanations for fact checking.

Bio:

Isabelle Augenstein is an Associate Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. Her main research interests are fact checking, low-resource learning, and explainability. Prior to starting a faculty position, she was a postdoctoral researcher at University College London, and before that a PhD student at the University of Sheffield. She currently holds a DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media', and is a member of the Young Academy of the Royal Danish Academy of Sciences and Letters.


This spring, the theme of the SODAS lecture series is "Philosophy of the Predicted Human".

The Predicted Human

Being human in 2022 implies being the target of a vast number of predictive infrastructures. In healthcare, algorithms predict not only potential pharmacological cures for diseases but also patients' possible future incidence of those diseases. In governance, citizens are exposed not only to algorithms that predict their day-to-day behaviors in order to craft better policy, but also to algorithms that attempt to predict, shape and manipulate their political attitudes and behaviors. In education, children's emotional and intellectual development is increasingly the product of at-home and at-school interventions shaped around personalized algorithms. And humans worldwide are increasingly subject to advertising and marketing algorithms whose goal is to target them with specific products and ideas they will find palatable. Algorithms are everywhere, as are their intended and unintended consequences.

Predicting and manipulating the future behavior of human beings is nothing new. Most of the quantitative social sciences focus on this topic in a general sense, and there are entire subfields of statistics dedicated to understanding what can be predicted and what cannot. Yet the current situation is different. Computers' ability to analyze text and images has been revolutionized by the availability of vast datasets and new machine learning techniques, and we are currently experiencing a similar shift in how algorithms can predict (and manipulate) human behavior. Human beings can be algorithmically shaped; we can be hacked.

The ambition of this semester's SODAS Lectures is to present and discuss different perspectives on human prediction. By inviting distinguished scholars and speakers whose expertise ranges from the traditional social sciences through machine learning and data science to philosophy and STS, we hope to delve into some of the principles and dynamics that govern our ability to predict and control both individual and collective human behaviors.

Venue: CSS, SODAS conference room 1.2.26
or via Zoom: https://ucph-ku.zoom.us/j/61119140063