Archives

Ariel Porat and Lior Jacob Strahilevitz, Personalizing Default Rules and Disclosure with Big Data

Comment by: Lauren Willis

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2217064

Workshop draft abstract:

The rise of Big Data is one of the most important recent economic developments in the industrialized world, and it poses vexing privacy issues for policymakers. While the use of Big Data to help firms sort consumers (e.g., for marketing or risk-pricing purposes) has been the subject of sustained discussion, scholars have paid little attention to governmental uses of Big Data for purposes unrelated to law enforcement and national security. In this paper, we show how Big Data might transform adjudication, disclosure, and the tailoring of contractual and testamentary default provisions.

The paper makes three key contributions. First, it uses the psychology and computer science literatures to show how Big Data tools classify individuals based on “Big Five” personality metrics, which in turn are predictive of individuals’ future consumption choices, risk preferences, technology adoption, health attributes, and voting behavior. For example, the paper discusses new research showing that by systematically analyzing data from individuals’ smartphones, researchers can identify particular phone users as particularly extroverted, conscientious, open to new experiences, or neurotic. Smartphones, not eyes, turn out to be the true windows into our souls. Second, it shows how Big Data tools can match individuals who are extremely similar in terms of their behavioral profiles and, by providing a subset of “guinea pigs” with a great deal of time and information to make choices, extrapolate ideal default rules for huge swaths of the population. Big Data might even be used to promote privacy to some degree, by offering pro-privacy defaults to consumers whose past behavior suggests a strong preference for privacy and pro-sharing defaults to consumers whose past behavior indicates little interest in safeguarding personal information. Third, the paper is the first to show that the influential critiques of disclosure regimes (including FIPs-style “notice and choice” provisions) are at bottom critiques of impersonal disclosure. A legal regime that relies on Big Data to determine which relevant information about risks, side effects, and other potentially problematic product attributes should be disclosed to individual consumers has the potential to improve the efficacy of disclosure strategies more generally. The paper also confronts a number of important objections to personalization in these various contexts, including concerns about cross-subsidies, government discrimination, and personalization in a world where human preferences and behaviors change over time.
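The guinea-pig matching mechanism described above can be made concrete with a small sketch. The following Python snippet is purely illustrative and is not drawn from the paper: it assumes Big Five scores are already available as numeric vectors, and it assigns each consumer the default rule chosen by the most behaviorally similar, fully informed “guinea pig” using a simple nearest-neighbor match. All names and data in the snippet are hypothetical.

```python
# Hypothetical sketch of default-rule extrapolation via behavioral matching.
# Data and variable names are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Big Five scores (openness, conscientiousness, extraversion,
# agreeableness, neuroticism), scaled to [0, 1].
guinea_pig_scores = rng.random((50, 5))      # small, well-informed test group
population_scores = rng.random((10_000, 5))  # everyone else

# Default each guinea pig chose after deliberating with full information:
# 0 = pro-privacy default, 1 = pro-sharing default.
guinea_pig_choices = rng.integers(0, 2, size=50)

# For each person in the population, find the nearest guinea pig by
# Euclidean distance in personality space and inherit that choice.
dists = np.linalg.norm(
    population_scores[:, None, :] - guinea_pig_scores[None, :, :], axis=2
)
nearest = dists.argmin(axis=1)
personalized_defaults = guinea_pig_choices[nearest]

print(personalized_defaults[:10])  # e.g., [1 0 0 1 ...]
```

A real system would presumably use far richer behavioral features and a more careful matching or prediction model, but the basic extrapolation step, from a deliberating sample to personalized defaults for everyone else, would look similar.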

The paper seeks to show privacy scholars and advocates precisely what the stakes will be in the looming fight over Big Data. We seek to demonstrate that Big Data battles are not merely fights about privacy versus profits, but implicate a host of unrecognized social interests as well.

Lauren E. Willis, Why Not Privacy by Default?

Comment by: Michael Geist

PLSC 2013

Workshop draft abstract:

We live in a Track-Me world.   Firms collect reams of personal data about all of us, for marketing, pricing, and other purposes.  Most people do not like this.  Policymakers have proposed that people be given choices about whether, by whom, and for what purposes their personal information is collected and used.  Firms claim that consumers already can opt out of the Track-Me default, but that choice turns out to be illusory.  Consumers who attempt to exercise this choice find their efforts stymied by the limited range of options firms actually give them and technology that bypasses consumer attempts at self-determination.  Even if firms were to provide consumers with the option to opt out of tracking completely and to respect that choice, opting out would likely remain so cumbersome as to be impossible for the average consumer.

In response, some have suggested switching the default rule, such that firms (or some firms) would not be permitted to collect (or collect in some manners) and/or use (or use for some purposes) personal data (or some types of personal data) unless consumers opt out of a “Do-Not-Track” default.  Faced with this penalty default, firms ostensibly would be forced to clearly explain to consumers how to opt out of the default and to justify to consumers why they should opt into a Track-Me position.  Consumers could then, the reasoning goes, freely exercise informed choice in selecting whether to be tracked.

Industry vigorously opposes a Do-Not-Track default, arguing that Track-Me is the better position for most consumers and that the positive externalities created by tracking justify keeping that as the default, if not unwaivable, position.  Some privacy advocates oppose both Track-Me and Do-Not-Track defaults on the grounds that the negative externalities created by tracking justify refusing to allow any consumers to consent to tracking at all.

Here I caution against the use of a Do-Not-Track default on different grounds.  Lessons from the experience of consumer-protective defaults in other realms counsel that a Do-Not-Track default is likely to be slippery.  The very same transaction barriers and decisionmaking biases that can lead consumers to stick with defaults in some situations can be manipulated by firms to induce consumers to opt out of a Do-Not-Track default.  Rather than forcing firms to clearly inform consumers of their options and allowing consumers to exercise informed choice, a Do-Not-Track default will provide firms with opportunities to confuse many consumers into opting out.  Once a consumer opts out of a default position, courts, commentators, and the consumer herself are more likely to blame the consumer for any adverse consequences that might befall her.  The few sophisticated consumers who are able to effectively control whether they are tracked will benefit, but at the expense of the majority who will lack effective self-determination in this realm.  For political reasons, a Do-Not-Track default might be a necessary policy way station en route to a scheme of privacy-protective mandates, but it also might defuse the political will to implement such a scheme without meaningfully changing the lack of choice inherent in today’s Track-Me world.

I use “track” to mean all forms of personal data collection and use beyond those that are reasonably expected for the immediate transaction at hand.  So, for example, a consumer who provides her address to her bank expects it to be used for the purposes of mailing her information about her accounts, but does not expect it to be used to decide whether or at what price to offer her auto insurance.

Mark MacCarthy, New Directions in Privacy: Disclosure, Unfairness, and Externalities

Comment by: Lauren Willis

PLSC 2010

Workshop draft abstract:

Several recent developments underscore a return of public concerns about access to personal information by businesses and its possible misuse. The Administration is conducting an extensive interagency review of commercial privacy, Congress is considering legislation on online behavioral advertising, and in November an international conference of government officials will likely approve a global standard on privacy protection.

But what’s the best way to protect privacy? David Vladeck, the new head of consumer protection for the Federal Trade Commission, has said he is dissatisfied with the existing policy frameworks for thinking about the issue. He’s right. The traditional framework of fair information practices is severely limited by excessive reliance on informed consent.  Restrictions on disclosure are impractical in a digital world where information collection is ubiquitous, where apparently anonymous or de-identified information can be associated with a specific person and where one person’s decision to share information can adversely affect others who choose to remain silent.  The alternative “harm” framework, however, seems to allow all sorts of privacy violations except when specific, tangible harm results.  If an online marketer secretly tracks you on the Internet and serves you ads based on which web sites you visited, well, where’s the harm?  How are you hurt by getting targeted ads instead of generic ones?  And yet people feel that secret tracking is the essence of a privacy violation.

The traditional harms approach is clearly too limited.  It defines the notion of harm so narrowly that privacy itself is no longer at stake.  And yet its focus on outcomes and substantive protection rather than process is a step in the right direction.

Part I of this paper describes the limitations of the informed consent model, suggesting that informed consent is neither necessary nor sufficient for a legitimate information practice. Part II explores the idea of negative privacy externalities, illustrating several ways in which data can be leaky.  It also discusses the ways in which the indirect disclosure of information can harm individuals through invidious discrimination, inefficient product variety, restrictions on access, and price discrimination. Part III outlines the unfairness model, explores the three-part test for unfairness under the Federal Trade Commission Act, and compares the model to similar privacy frameworks that have been proposed as additions to (or replacements for) the informed consent model.  Part IV explores how to apply the unfairness framework to some current privacy issues involving online behavioral advertising and social networks.

Alessandro Acquisti, The Impact of Relative Standards on Concern about Privacy

Comment by: Lauren Willis

PLSC 2009

Workshop draft abstract:

Consumers consistently rank privacy high among their concerns, yet their behaviors often reveal a remarkable lack of regard for the protection of personal information.  We propose that one explanation for the discrepancy is that actual concern about privacy in a particular situation depends on comparative judgments.  We present the results of two studies that illustrate the comparative nature of privacy-related behavior.  The first study examines how learning about self-revelations made by others affects an individual’s own self-revelatory behavior.  The second study examines how past intrusions on privacy affect current self-revelatory behavior.  We find that admission to sensitive and even unethical behaviors by others elicits information disclosure, and that increasing the sensitivity of questions over the course of a survey inhibits information disclosure.  Together, these studies can help explain why people profess great concern about privacy yet behave in a fashion that bespeaks a lack of concern.