SODAS Lecture: Predictive coding, neo-Kantianism and the Lifeworld

SODAS Blurb

We are delighted to host Professor Dan Zahavi for the first SODAS Lecture this spring.

Abstract

Recently, a number of neuroscientists and philosophers have taken the so-called prediction error minimization theory to support a form of radical neuro-representationalism, according to which the content of our conscious experiences is a neural construct, a brain-generated simulation. There is a remarkable similarity between this account and ideas found in and developed by German neo-Kantians in the mid-19th century. Eventually, however, some neo-Kantians as well as central figures in phenomenology came to have serious doubts about the cogency and internal consistency of the model. In my talk, I will argue that this criticism has implications for our assessment of the contemporary theory as well.

Bio

Dan Zahavi is Professor of Philosophy and director of the Center for Subjectivity Research at the University of Copenhagen. Zahavi’s primary research area is phenomenology and philosophy of mind, and their intersection with empirical disciplines such as psychiatry and psychology. In addition to a number of scholarly works on the phenomenology of Husserl, Zahavi has mainly written on the nature of selfhood, self-consciousness, intersubjectivity, empathy, and most recently on topics in social ontology. His most important publications include Self-awareness and Alterity (1999/2020), Husserl’s Phenomenology (2003), Subjectivity and Selfhood (2005), The Phenomenological Mind (together with Shaun Gallagher) (2008/2012/2021), Self and Other (2014), Husserl’s Legacy (2017), and Phenomenology: The Basics (2019). Since 2020, Zahavi has been PI on a 5-year research project entitled Who are We? which is supported by the European Research Council and the Carlsberg Foundation. Zahavi’s writings have been translated into more than 30 languages.

This spring, the theme of the SODAS lecture series is "Philosophy of the Predicted Human".

The Predicted Human

Being human in 2022 implies being the target of a vast number of predictive infrastructures. In healthcare, algorithms predict not only potential pharmacological cures for diseases but also patients' possible future incidence of those diseases. In governance, citizens are exposed both to algorithms that predict their day-to-day behaviors in order to craft better policy and to algorithms that attempt to predict, shape, and manipulate their political attitudes and behaviors. In education, children’s emotional and intellectual development is increasingly the product of at-home and at-school interventions shaped around personalized algorithms. And humans worldwide are increasingly subject to advertising and marketing algorithms whose goal is to target them with specific products and ideas they will find palatable. Algorithms are everywhere, as are their intended as well as unintended consequences.

Predicting and manipulating the future behavior of human beings is nothing new. Most of the quantitative social sciences focus on this topic in a general sense, and there are entire subfields of statistics dedicated to understanding what can be predicted and what cannot. Yet the current situation is different. Computers’ ability to analyze text and images has been revolutionized by the availability of vast datasets and new machine learning techniques, and we are currently experiencing a similar shift in how algorithms can predict (and manipulate) human behavior. Human beings can be algorithmically shaped; we can be hacked.

The ambition of this semester’s SODAS Lectures is to present and discuss different perspectives on human prediction. By inviting a list of distinguished scholars and speakers whose expertise ranges from the traditional social sciences through machine learning and data science to philosophy and STS, we hope to delve into some of the principles and dynamics that govern our ability to predict and control both individual and collective human behaviors.

Venue: CSS, room 1.1.02
or via Zoom: https://ucph-ku.zoom.us/j/62932255935