Public online seminar, 4pm 17 September 2020 AEST
Seda Gürses (TU Delft) will give the seventh HMI Data, AI and Society public seminar.
Seda is currently an Associate Professor in the Department of Multi-Actor Systems at TU Delft's Faculty of Technology, Policy and Management, an affiliate of the COSIC group in the Department of Electrical Engineering (ESAT) at KU Leuven, and a member of the Institute for Technology in the Public Interest. Previously she was an FWO post-doctoral fellow at COSIC/ESAT and a research associate at Princeton University and NYU. Her work focuses on privacy-enhancing and protective optimization technologies (PETs and POTs) and privacy engineering, as well as questions around software infrastructures, social justice and political economy as they intersect with computer science.
Title: ‘Protective Optimization Technologies’
Abstract: The transformation to service-oriented architectures (SOA) and agile development as the dominant modes for producing software has far-reaching implications and has given rise to a new type of system: optimization systems. As software engineering shifted in the 2000s from packaged software and PCs to services and clouds, enabling distributed architectures that incorporate real-time feedback from users [Kaldrack and Leeker, 2015], digital systems became layers of technologies, metricized under the authority of objective functions. These functions define the optimization objectives of, among others, the selection of software features, the orchestration of cloud usage, and the design of user interaction and growth planning [Gürses and van Hoboken, 2018]. In contrast to traditional information systems, which treat the world as a static place to be known and focus on storage, processing, transport, and organizing information, systems produced under logics of optimization consider the world as a place to sense and co-create. They seek maximum extraction of economic value by optimizing the capture and manipulation of people’s activities and environments [Agre, 1994; Curry and Phillips, 2003], often leading to asymmetrical concentration of resources in the hands of a few companies [Hwang, 2018; Poon, 2016].
Fairness frameworks proposed by computer scientists [Barocas, Hardt, and Narayanan, 2020] have recently come into vogue as a way to address the economic, moral, social, and political impact of optimization systems on populations. These frameworks succeed, in part, by narrowing the problem definition to reduce complexity. Not surprisingly, this simplification limits the ability of these frameworks to capture and mitigate a variety of harms caused by optimization systems.
In this talk, Seda will first characterize these limitations and evaluate their consequences using concepts from requirements engineering and from the social sciences. In particular, she will show that the focus on the inputs and outputs of algorithms misses the way that harms are manifested when systems interact with the "world". Furthermore, the frameworks' reliance on the service provider limits mitigations to those achievable through an incentivized service provider, and leaves unexplored the avenues of action available when service providers are uncooperative or intentionally adversarial. To broaden the scope of the field, she will propose a new class of solutions that explore other approaches to capturing harms and contesting optimization systems: Protective Optimization Technologies (POTs). While POTs may not solve the broader set of problems that optimization systems bring about, they offer a way to raise awareness, push back and contest in the presence of powerful service providers.