Cynthia Dwork & Deirdre K. Mulligan, Aligning Classification Systems with Social Values through Design

Cynthia Dwork & Deirdre K. Mulligan, Aligning Classification Systems with Social Values through Design

Comment by: Joseph Turow

PLSC 2012

Workshop draft abstract:

Ad serving services, search engines, and passenger screening systems all rely on mathematical models to make value judgments about how to treat people: what ads to serve them, what URLs to suggest, whether to single them out for extra scrutiny or prohibit them from boarding a plane. Each of these models presents multiple points for value judgments: which data to include; how to weigh and analyze the data; whether to prefer false positives or false negatives in the identified sets. Whether these judgments are thought of as value judgments, whether the value judgments reflected in the systems are known to those who rely on or are subject to them, and whether they are viewed as interrogable varies by context.
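
To make the trade-off between false positives and false negatives concrete, the following sketch (an illustration of ours, not drawn from any deployed system; the scores, labels, and thresholds are hypothetical) shows how a single threshold parameter in a screening model encodes a value judgment about which kind of error to prefer:

    # Illustrative sketch: a single threshold in a screening model encodes
    # a value judgment about whether to prefer false positives or false negatives.
    # All scores, labels, and threshold values are hypothetical.

    def screen(scores, labels, threshold):
        """Flag anyone whose risk score meets the threshold; count both error types."""
        false_positives = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        false_negatives = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
        return false_positives, false_negatives

    # Hypothetical risk scores and ground-truth labels (1 = person of genuine concern).
    scores = [0.10, 0.35, 0.40, 0.55, 0.70, 0.80, 0.95]
    labels = [0, 0, 1, 0, 1, 0, 1]

    # A low threshold errs toward false positives (more people wrongly flagged);
    # a high threshold errs toward false negatives (more genuine cases missed).
    for threshold in (0.3, 0.6, 0.9):
        fp, fn = screen(scores, labels, threshold)
        print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")

Lowering the threshold flags more people who pose no concern; raising it misses more people who do. Neither setting is neutral, which is the sense in which the parameter is a value judgment rather than a purely technical choice.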

Currently, objections to behavioral advertising systems are framed first and foremost as concerns about privacy. The collection and use of detailed digital dossiers about individuals' online behavior, dossiers that reflect not only commercial activities but also the political, social, and intellectual life of internet users, are viewed as a threat to privacy. Privacy objections run the gamut from the surreptitious acquisition of personal information, to the potential sensitivity of the data, to the retention and security practices of those handling the data, to the possibility that it will be accessed and used by additional parties for additional purposes. Framed as privacy concerns, the responses to these systems, both policy and technical, aim to provide internet users with the ability to limit or modify their participation in them.

This is an incomplete response that stems in part from imprecise documentation of the objections. Objections to behavioral advertising systems stem in large part from concerns about their power to invisibly shape and control individuals' exposure to information. This power raises a disparate set of concerns, including the potential of such algorithms to discriminate against or marginalize specific populations, and the potential to balkanize and sort the population through narrowcasting, thereby undermining the shared public sphere. While often framed first as privacy concerns, these objections raise issues of fairness and are better understood as concerns about social justice and related, but not synonymous, concerns about social fragmentation and its impact on deliberative democracy.

Our primary focus is on this set of objections to behavioral advertising, objections that lurk below the surface of privacy discourse.

The academic and policy communities have wrestled with this knot of concerns in other settings and have produced a range of policy solutions, some of which have been adopted. Policy solutions to address concerns related to segmentation have focused primarily on limiting its impact on protected classes. They include the creation of "standard offers" made equally available to all; the use of test files to identify biased outputs based on ostensibly unbiased inputs; and required disclosure of categories, classes, inputs, and algorithms. More recently, researchers and practitioners in computer science have developed technical approaches to mitigate the ethical issues presented by algorithms. They have developed methods for conforming algorithms to external ethical commands; advocated systems that push value judgments off to end users (for example, whether to err toward false positives or false negatives); developed techniques for formalizing fairness in classification schemes; and advocated approaches that expose embedded value judgments and allow users to manipulate and experience the outcomes that various values produce. They have also used data mining to reveal discriminatory outputs.
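
As one hedged illustration of what formalizing fairness in classification can look like, the sketch below audits a classifier's outputs for statistical parity between two groups and checks an individual-fairness condition (similar individuals should receive similar outcomes). These particular metrics, along with the data, group labels, and similarity measure, are hypothetical examples of the general idea, not the specific techniques proposed in this paper:

    # Illustrative sketch of two ways fairness in classification can be formalized.
    # The metrics, thresholds, and data below are hypothetical illustrations.

    def statistical_parity_gap(outcomes, groups):
        """Difference in positive-classification rates across groups."""
        rate = {}
        for g in set(groups):
            members = [o for o, grp in zip(outcomes, groups) if grp == g]
            rate[g] = sum(members) / len(members)
        values = sorted(rate.values())
        return values[-1] - values[0]

    def individual_fairness_violations(scores, distance, epsilon=0.0):
        """Pairs of individuals whose outcomes differ by more than their
        similarity allows: |score_i - score_j| > distance(i, j) + epsilon."""
        violations = []
        n = len(scores)
        for i in range(n):
            for j in range(i + 1, n):
                if abs(scores[i] - scores[j]) > distance(i, j) + epsilon:
                    violations.append((i, j))
        return violations

    # Hypothetical audit data: binary ad-targeting decisions and group membership.
    outcomes = [1, 1, 0, 1, 0, 0, 0, 1]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print("statistical parity gap:", statistical_parity_gap(outcomes, groups))

    # Hypothetical individual scores and a toy similarity metric over feature vectors.
    scores = [0.9, 0.2, 0.85, 0.3]
    features = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]]
    distance = lambda i, j: sum(abs(a - b) for a, b in zip(features[i], features[j]))
    print("individual fairness violations:", individual_fairness_violations(scores, distance))

The group-level check asks whether one population is classified favorably more often than another; the individual-level check asks whether people with nearly identical features nonetheless receive very different outcomes. Either test can expose embedded value judgments in a classification scheme without requiring access to the underlying algorithm.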

Measured against the intersecting concerns with privacy and classification raised by behavioral advertising, the policy and technical responses proposed to date are quite limited. Regulatory efforts are primarily concerned with addressing the collection and use of data to target individuals for advertising. Similarly, technical efforts focus on limiting the use of data for advertising purposes or preventing its collection. The responses are largely silent on the social justice implications of classification.

This paper teases out this latter set of concerns with the impact of classification. As in other areas, digging below the rhetoric of privacy, one finds a variety of outcome-based objections that reflect political commitments to equality, opportunity, and community. We then examine concerns about, and responses to, classification in other areas, and consider the extent to which computer science methods and tools can be deployed to address this set of concerns in the behavioral advertising context. We conclude with some broader generalizations about the role of policy and technology in addressing these concerns in automated decision-making systems.