Archives

Deven Desai, Data Hoarding: Privacy in the Age of Artificial Intelligence

Comment by: Kirsten Martin

PLSC 2013

Workshop draft abstract:

We live in an age of data hoarding. Those who have data never wish to release it. Those who don’t have data want to grab it and increase their stores. In both cases—refusing to release data and gathering data—the mosaic theory, which accepts that “seemingly insignificant information may become significant when combined with other information,”1 seems to explain the result. Discussions of mosaic theory focus on executive power. In national security cases the government refuses to share data lest it reveal secrets. Yet recent Fourth Amendment cases limit the state’s ability to gather location data, because under the mosaic theory the aggregated data could reveal more than isolated surveillance would.2 The theory describes a problem but yields wildly different results. Worse, it does not explain what to do about data collection, retention, and release in different contexts. Furthermore, if data hoarding is a problem for the state, it is one for the private sector too. Private companies, such as Amazon, Google, Facebook, and Wal-Mart, gather and keep as much data as possible, because they wish to learn more about consumers and how to sell to them. Researchers gather and mine data to open new doors in almost every scientific discipline. Like the government, neither group is likely to share the data it collects or to increase transparency, for in data is power.

I argue that just as we have started to examine the implications of mosaic theory for the state, we must do so for the private sector. So far, privacy scholarship has separated government and private sector data practices. That division is less tenable today. Not only governments, but also companies and scientists assemble digital dossiers. The digital dossiers of just ten years ago now emerge faster than ever and with deeper information about us. Individualized data sets matter, but they are now part of something bigger. Large, networked data sets—so-called Big Data—and data mining techniques simultaneously allow someone to study large groups, to know what an individual has done in the past, and to predict certain future outcomes.3 In all sectors, the vast wave of automatically gathered data points is no longer a barrier to such analysis. Instead, it fuels and improves the analysis, because new systems learn from data sets. Thanks to artificial intelligence, the fantasy of a few data points connecting to and revealing a larger picture may be a reality.

Put differently, discussions about privacy and technology in all contexts miss a simple, yet fundamental, point: artificial intelligence changes everything about privacy. Given that large data sets are here to stay and artificial intelligence techniques promise to revolutionize what we learn from those data sets, the law must understand the rules for these new avenues of information. To address this challenge, I draw on computer science literature to test claims about the harms or benefits of data collection and use. By showing the parallels between state and private sector claims about data and mapping the boundaries of those claims, this Article offers a way to understand and manage what is at stake in the age of pervasive data hoarding and automatic analysis possible with artificial intelligence.


1 Jameel Jaffer, The Mosaic Theory, 77 SOCIAL RESEARCH 873, 873 (2010).

2 See, e.g., Orin Kerr, The Mosaic Theory of the Fourth Amendment, 110 MICH. L. REV. __ (forthcoming 2012) (criticizing application of the mosaic theory to the analysis of when collective surveillance steps constitute a search).

3 See, e.g., Hyunyoung Choi & Hal Varian, Predicting the Present with Google Trends, Google, Inc. (Apr. 2009), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1659302.

Paul Ohm, Branding Privacy

Comment by: Deven Desai

PLSC 2012

Workshop draft abstract:

This Article focuses on the problem of what James Grimmelmann has called the “privacy lurch,”[1] which I define as an abrupt change in the way a company handles data about individuals. Two prominent examples include Google’s decision in early 2012 to tear down the walls that once separated data about users collected from its different services, and Facebook’s decisions in 2009 and 2010 to expose more user profile information to the public web by default than it had in the past. Privacy lurches disrupt long-settled user expectations and undermine claims that companies protect privacy by providing adequate notice and choice. They expose users to far more individual privacy risk than they might have anticipated or desired, assuming they are paying attention at all. Given the special and significant problems associated with privacy lurches, this Article calls on regulators to seek creative solutions to address them.

But even though privacy lurches lead to significant risks of harm, some might argue we should do nothing to limit them. Privacy lurches are the product of a dynamic marketplace for online goods and services. What I call a lurch, the media instead tends to mythologize as a “pivot”: a welcome shift in a company’s business model, celebrated as an example of the nimble dynamism of entrepreneurs that has become a hallmark of our information economy. Before we intervene to tamp down the harms of privacy lurches, we need to consider what we might give up in return.

Weighing the advantages of the dynamic marketplace against the harms of privacy lurches, this Article prescribes a new form of mandatory notice and choice. To breathe a little life into the usually denigrated options of notice and choice, this Article looks to the scholarship of trademark law, representing a novel integration of two very important but until now almost never connected areas of information law. This bridge is long overdue, as the theory of trademark law centers on the very same information quality and consumer protection concerns that animate notice and choice debates in privacy law. These theories describe the important informational power of trademarks (and service marks and, more generally, brands) to signal quality and goodwill to consumers concisely and efficiently. Trademark scholars also describe how brands can serve to punish and warn, helping consumers recognize a company with a track record of shoddy practices or weak attention to consumer protection.

The central recommendation of this Article is that lawmakers and regulators should force almost every company that handles customer information to associate its brand name with a specified set of core privacy commitments.  The name, “Facebook,” for example, should be inextricably bound to that company’s specific, fundamental promises about the amount of information it collects and the uses to which it puts that information. If the company chooses someday to depart from these initial core privacy commitments, it must be required to use a new name with its modified service, albeit perhaps one associated with the old name, such as “Facebook Plus” or “Facebook Enhanced.”

Although this solution is novel, it is far from radical when one considers how well it is supported by the theoretical underpinnings of both privacy law and trademark law. It builds on the work of privacy scholars who have looked to consumer protection law for guidance, representing another important intradisciplinary bridge, this one between privacy law and product safety law. Just as companies selling inherently dangerous products are obligated to attach warning labels, so too should companies shifting to inherently dangerous privacy practices be required to display warning labels. And the spot at the top of every Internet web page listing the brand name is arguably the only space available for an effective online warning label. A “branded privacy” solution is also well-supported by trademark theory, which focuses on giving consumers the tools they need to accurately and efficiently associate trademarks with the consistent qualities of a service in ways that privacy lurches disregard.

At the same time, because this solution sets the conditions of privacy lurches rather than prohibiting them outright, and because it restricts mandatory rebranding to situations involving a narrow class of privacy promises, it leaves room for market actors to innovate, striking a proper balance between the positive aspects of dynamism and the negative harms of privacy lurches. Companies will be free to evolve and adapt their practices in any way that does not tread upon the set of core privacy commitments, but they can change a core commitment only by changing their brand. This rule will act like a brake, forcing companies to engage in more internal deliberation than they do today about the class of choices consumers care about most, without preventing dynamism when it is unrelated to those choices or when the value of dynamism is high. And when a company does choose to modify a core privacy commitment, its new brand will send a clear, unambiguous signal to consumers and privacy watchers that something important has changed, directly addressing the information quality problems that plague notice-and-choice regimes in ways that improve upon prior suggestions.


[1] James Grimmelmann, Saving Facebook, 94 Iowa L. Rev. 1137 (2009).

Orin Kerr, A Substitution-Effects Theory of the Fourth Amendment

Comment by: Deven Desai

PLSC 2011

Workshop draft abstract:

Fourth Amendment law is often considered a theoretical embarrassment. The law consists of dozens of rules for very specific situations that seem to lack a coherent explanation. Constitutional protection varies dramatically based on seemingly arcane distinctions.

This Article introduces a new theory that explains and justifies both the structure and content of Fourth Amendment rules: The theory of equilibrium-adjustment. The theory of equilibrium-adjustment posits that the Supreme Court adjusts the scope of protection in response to new facts in order to restore the status quo level of protection.  When changing technology or social practice expands government power, the Supreme Court tightens Fourth Amendment protection; when it threatens government power, the Supreme Court loosens constitutional protection.  Existing Fourth Amendment law therefore reflects many decades of equilibrium-adjustment as facts change.  This simple argument explains a wide range of puzzling Fourth Amendment doctrines including the automobile exception; rules on using sense-enhancing devices; the decline of the “mere evidence” rule; how the Fourth Amendment applies to the telephone network; undercover investigations; the law of aerial surveillance; rules for subpoenas; and the special Fourth Amendment protection for the home.

The Article then offers a normative defense of equilibrium-adjustment. Equilibrium-adjustment maintains interpretive fidelity while permitting Fourth Amendment law to respond to changing facts.  Its wide appeal and focus on deviations from the status quo facilitates coherent decisionmaking amidst empirical uncertainty and yet also gives Fourth Amendment law significant stability.  The Article concludes by arguing that judicial delay is an important precondition to successful equilibrium-adjustment.

Jennifer Rothman, The Inalienability of the Right of Publicity

Comment by: Deven Desai

PLSC 2010

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2174646

Workshop draft abstract:

Publicity rights first developed in the heartland of privacy and tort law as a compensation scheme for injuries to personal dignity through the misappropriation of a person’s identity. Today, however, the right of publicity is most often situated as a robust property right. Some courts and scholars have avoided classifying publicity rights as property-based, but only because they say it does not matter whether publicity rights are tort- or property-based. This article contends that the difference does matter. Tort-based rights are personal, non-assignable, and cannot be sold to satisfy court judgments. Property rights, however, are assignable and can be sold to satisfy court judgments. Despite the opportunity for individuals to assign their publicity rights in total, courts are uncomfortable with truly divesting an individual of control over his or her identity. One recent example arose when the Goldman estate sought not only to obtain the profits from O.J. Simpson’s publicity rights, but also to affirmatively control the use of his right of publicity. If publicity rights are property rights, such control seems uncontroversial. Nevertheless, taking away Simpson’s control over his own identity challenges the underlying autonomy-based justifications for publicity rights and, more generally, our commitment to individual liberty. This article will therefore suggest that publicity rights remain privacy-based torts. Resituating publicity rights in tort law will provide a basis for more appropriate limits on both the alienability and scope of publicity rights.

Mary Fan, Quasi-Privacy and Redemption in a World of Ubiquitous Checking-Up

Comment by: Deven Desai

PLSC 2009

Workshop draft abstract:

The solace of those who stumble or free-fall after making a mistake or enduring misfortune is the ability to remake oneself. The possibility of remaking oneself is one of the casualties of the rampant and insufficiently regulated proliferation of private-sector databases that ossify and construct identity — and impede recovery and self-remaking after traumas like foreclosure or bankruptcy, or lesser mistakes and mishaps such as a long-unpaid or mislaid bill scarring a credit score. Classically, one who suffered a substantial setback might hope that time and effort would expunge or alter natural memory, or that a geographical move might permit a fresh start beyond the reach of localized memory. The difficulty with the proliferation of private-sector databases is that memory is artificially prolonged, bureaucratically ossified, and extended in geographical reach, severely circumscribing the ability to remake or rehabilitate oneself.

This article conceptualizes privacy as protecting the plasticity—in the sense of the ability to recover from injury—of identity and personhood and the life consequences that flow from identity. Our understanding of what privacy entails has been crucially updated for the information age by Dan Solove as more than just a safeguard against intrusion on what is secret, but also “the ability to avoid the powerlessness of having others control information” affecting critical life consequences like loans, jobs, and licensing, and further humanized by Anita Allen to include protection against perpetually dredging up the past so that one can move forward and rehabilitate. The conception of privacy as self-plasticity builds on these understandings. The article argues that, so conceptualized, privacy as principle and right requires the regulation of memory in the context of private-sector databases to permit attempts at self-remaking and rehabilitation. Memory regulation would translate into protections like mandatory expungement of records or circumscribing the geographical scope of database information to permit the possibility of self-remaking and rehabilitation.