SODAS lecture with Aniko Hannak

SODAS Blurb

Title:
New Faces of Bias in Online Platforms 

Abstract:
The internet is fundamentally changing how we socialize, work, and gather information. The recent emergence of content-serving services has created a new online ecosystem in which companies constantly compete for users' attention and use sophisticated user tracking and personalization methods to maximize their profit.

My research investigates the potential downsides of the algorithms commonly used by online platforms. Since these algorithms learn from human data, they are bound to recreate biases that are present in the real world. In this talk, I will first present a measurement methodology developed to monitor personalization algorithms on platforms such as Google Search and online stores. Second, I will discuss recommendation and rating systems on employment-related platforms such as job search sites, freelancing marketplaces, and online professional communities, and the danger that they reinforce gender inequalities.

Bio:
Aniko is an assistant professor at the University of Zurich, where she leads the Social Computing Group. She received her PhD from the College of Computer & Information Science at Northeastern University, where she was part of the Lazer Lab and the Algorithmic Auditing Group. Aniko’s main interest lies in computational social science; more specifically, she focuses on the co-evolution of online systems and their users. Broadly, her work investigates a variety of content-serving websites, such as search engines, online stores, job search sites, and freelance marketplaces, and uncovers potential negative consequences of the big data algorithms these websites deploy.

This spring, the theme of the SODAS lecture series is "Philosophy of the Predicted Human".

The Predicted Human

Being human in 2022 implies being the target of a vast number of predictive infrastructures. In healthcare, algorithms predict not only potential pharmacological cures to diseases but also patients' possible future incidence of those diseases. In governance, citizens are exposed not only to algorithms that predict their day-to-day behaviors in order to craft better policy, but also to algorithms that attempt to predict, shape, and manipulate their political attitudes and behaviors. In education, children’s emotional and intellectual development is increasingly the product of at-home and at-school interventions shaped around personalized algorithms. And humans worldwide are increasingly subject to advertising and marketing algorithms whose goal is to target them with specific products and ideas they will find palatable. Algorithms are everywhere, as are their intended and unintended consequences.

Predicting and manipulating the future behavior of human beings is nothing new. Much of the quantitative social sciences focuses on this topic in a general sense, and there are entire subfields of statistics dedicated to understanding what can be predicted and what cannot. Yet the current situation is different. Computers’ ability to analyze text and images has been revolutionized by the availability of vast datasets and new machine learning techniques, and we are currently experiencing a similar shift in how algorithms can predict (and manipulate) human behavior. Human beings can be algorithmically shaped; we can be hacked.

The ambition of this semester’s SODAS Lectures is to present and discuss different perspectives on human prediction. By inviting distinguished scholars and speakers whose expertise ranges from the traditional social sciences, through machine learning and data science, to philosophy and STS, we hope to delve into some of the principles and dynamics that govern our ability to predict and control both individual and collective human behaviors.

Venue: CSS, SODAS conference room 1.2.26