2011 Participants

Patricia Abril,
Assistant Professor, University of Miami School of Business Administration

Alessandro Acquisti,
Associate Professor, Carnegie Mellon University

Joseph Alhadeff,
VP Global Public Policy and Chief Privacy Strategist, Oracle

Anita Allen,
Professor, University of Pennsylvania

Ken Anderson,
Assistant Commissioner, Privacy, Information and Privacy Commissioner of Ontario

Jonas Anderson,
Assoc. Professor, American University – Washington College of Law

Annie Anton,
Professor, North Carolina State University

Jane Bailey,
Prof, University of Ottawa Faculty of Law

Kenneth Bamberger,
Professor, UC Berkeley School of Law

Kevin Bankston,
Senior Staff Attorney & Policy Counsel, Electronic Frontier Foundation

Liza Barry-Kessler,
Attorney & Ph.D. Student, University of Wisconsin – Milwaukee

Carol Bast,
Associate Professor, University of Central Florida

Robin Bayley,
Ms., Linden Consulting, Inc. Privacy Advisors

Steven Bellovin,
Professor, Columbia University

Colin Bennett,
Professor, University of Victoria

Laura Berger,
Attorney, Federal Trade Commission

Gaia Bernstein,
Professor, Seton Hall University School of Law

Jody Blanke,
Professor, Mercer University

Marc Blitz,
Professor, Oklahoma City University

Caspar Bowden,
Chief Privacy Adviser, Microsoft EMEA

danah boyd,
Senior Researcher, Microsoft Research

Bruce Boyden,
Asst. Prof., Marquette University Law School

Travis Breaux,
Dr., Carnegie Mellon University

Cynthia Brown,
Assistant Professor, University of Central Florida

Jacquelyn Burkell,
Dr., The University of Western Ontario

Aaron Burstein,
Policy Analyst, NTIA, U.S. Dept. of Commerce

Ryan Calo,
Director, Stanford Law School

Timothy Casey,
Professor, California Western School of Law

Anupam Chander,
Professor, UC Davis

Janet Chapman,
SVP, Chief Privacy Officer, Union Bank

Wade Chumney,
Cecil B. Day Assistant Professor of Business Ethics and Law, Georgia Institute of Technology

Andrew Clearwater,
Fellow, Center for Law and Innovation at The Univ. of Maine School of Law

Jules Cohen,
Microsoft Corporation

Amanda Conley,
J.D., NYU School of Law

Lani Cossette,
Attorney, Microsoft

Courtney Bowman,
Forward Deployed Engineer, Palantir Technologies

Thomas Crocker,
Associate Professor, University of South Carolina School of Law / Goethe University, Frankfurt

Mary Culnan,
Slade Professor, Bentley University

Bryan Cunningham,
Senior Advisor, Palantir Technologies, Inc.

Doug Curling,
Managing Principal, New Kent Capital

Anupam Datta,
Assistant Research Professor, Carnegie Mellon University

Jamela Debelak,
Executive Director, CLIP, Fordham Law School

Judith DeCew,
Professor of Philosophy & Dept. Chair, Clark University

Deven Desai,
Academic Relations Manager, Google, Inc.

Lothar Determann,
Dr., UC Berkeley School of Law

Will DeVries,
Policy Counsel, Google Inc.

Nick Doty,
Lecturer / Researcher, UC Berkeley, School of Information

Cynthia Dwork,
Distinguished Scientist, Microsoft

Catherine Dwyer,
Dr., Pace University

Lilian Edwards,
Professor, University of Strathclyde, Associate Director, SCRIPT, University of Edinburgh

Mary Fan,
Assistant Professor of Law, University of Washington

Ed Felten,
Chief Technologist, Federal Trade Commission

David Flaherty,
Professor Emeritus, University of Western Ontario

Natalie Fonseca,
Co-founder & Executive Producer, Privacy Identity Innovation (pii2011)

Heather Ford,
UC Berkeley School of Information

Tanya Forsheit,
Founding Partner, InfoLawGroup LLP

Susan Freiwald,
Professor of Law, University of San Francisco School of Law

Allan Friedman,
Research Director, Center for Technology Innovation, Brookings Institution

Michael Froomkin,
Prof., U. Miami School of Law

Lauren Gelman,
Principal and Founder, BlurryEdge Strategies

Ann Geyer,
Chief Privacy & Security Officer, UC Berkeley

John Gilliom,
Professor and Chair, Political Science, Ohio University

Dorothy Glancy,
Professor of Law, Santa Clara University School of Law

Eric Goldman,
Director, High Tech Law Institute, Santa Clara University School of Law

Nathan Good,
Principal, Good Research

Benjamin Goold,
Professor, University of British Columbia

Jennifer Granick,
Zwillinger Genetski

John Grant,
Civil Liberties Engineer, Palantir Technologies

Victoria Groom,
Research Fellow, Stanford University

Jens Grossklags,
Assistant Professor, The Pennsylvania State University

Joseph Hall,
Postdoc, UC Berkeley School of Information

Woodrow Hartzog,
Assistant Professor, Cumberland School of Law at Samford University

Mike Hintze,
Associate General Counsel, Microsoft Corporation

Lance Hoffman,
Director, Cyber Security Policy and Research Institute, George Washington University

David Hoffman,
Associate Professor, Temple University

Marcia Hofmann,
Senior Staff Attorney, Electronic Frontier Foundation

Chris Hoofnagle,
Lecturer, Berkeley Law

Trevor Hughes,
President / CEO, IAPP

Kevin Hunsaker,
Senior Counsel, Rearden Commerce

Maritza Johnson,
PhD Candidate, Columbia University

Erica Johnstone,
Partner, Ridder, Costa & Johnstone LLP

Claire Kelleher-Smith,
Student, Berkeley Law

Orin Kerr,
Professor, George Washington University Law School

Ian Kerr,
Canada Research Chair in Ethics, Law & Technology, University of Ottawa

Pauline Kim,
Charles Nagel Professor of Law, Washington University School of Law

Jennifer King,
UC Berkeley School of Information

Anne Klinefelter,
Prof., University of North Carolina

Colin Koopman,
Assistant Professor, University of Oregon

Heidi Kotzian,
Future of Privacy Forum

Rick Kunkel,
Associate Professor, University of St. Thomas

Airi Lampinen,
Researcher, PhD Candidate, Helsinki Institute for Information Technology HIIT

Claudia Langer,
Visiting Scholar, University of California at Berkeley Law School

Linda Lara,

Stephen Lau,
Systemwide Director of IT Policy, University of California, Office of the President

Barb Lawler,
Chief Privacy Officer, Intuit

Travis LeBlanc,
Special Assistant Attorney General & Special Counsel, California Attorney General’s Office

Nancy Lemon,
Lecturer, Berkeley School of Law

Mark MacCarthy,
Adjunct Professor, Georgetown University

Michelle Madejski,
Columbia University

Carter Manny,
Professor of Business Law, University of Southern Maine

Alice Marwick,
Postdoctoral Researcher, Microsoft Research

Aaron Massey,
PhD Candidate, North Carolina State University

Kristen Mathews,
Partner, Proskauer Rose, LLC

Andrea Matwyshyn,
Asst. Professor Legal Studies & Business Ethics, Wharton School, University of Pennsylvania

William McGeveran,
Associate Professor, University of Minnesota Law School

Anne McKenna,
Partner, ToomeyMcKenna

Joanne McNabb,
Chief, California Office of Privacy Protection

Edward McNicholas,
Partner, Sidley Austin LLP

David Medine,
Partner, WilmerHale

Sylvain Métille,
Doctor of Law, Attorney at the Swiss Bar, BCLT (Swiss visiting scholar)

Jon Mills,
Dean Emeritus, Professor of Law, University of Florida Levin College of Law

Tracy Mitrano,
Director of IT Policy and Computer Policy, Cornell University

Adam Moore,
Associate Prof, University of Washington

Scott Mulligan,
Skidmore College

Deirdre Mulligan,
Assistant Professor, UCB School of Information

Arvind Narayanan,
Dr., Stanford University

Helen Nissenbaum,
Professor, New York University

John Nockelby,
Professor of Law, Loyola Law School Los Angeles

Tom O’Brien,
Legal Counsel, Palantir Technologies

Paul Ohm,
Associate Professor, University of Colorado Law School

Nicole Ozer,
Technology and Civil Liberties Policy Director, ACLU of Northern California

Moira Paterson,
Assoc Prof, Faculty of Law, Monash University

Heather Patterson,
Berkeley Law

Stephanie Pell

Scott Peppet,
Associate Professor, University of Colorado Law School

Sandra Petronio,
Professor, Indiana University-Purdue University Indianapolis

Will Pierog,
Glushko Fellow/Berkeley Law Student, Samuelson Clinic

Vince Polley,
President, KnowConnect PLLC

Jules Polonetsky,
Director, Future of Privacy Forum

Marilyn Prosch,
Associate Professor, Arizona State University

Robert Quinn,
Sr. VP-Fed Reg & Chief Privacy Officer, AT&T Services, Inc.

Charles Raab,
Professor Emeritus, University of Edinburgh

Priscilla Regan,
Professor of Politics & Government, George Mason University

Joel Reidenberg,
Professor & Director, CLIP, Fordham Law School

Virginia Rezmierski,
Adjunct Associate Professor, School of Information and Ford School of Public Policy, University of Michigan

Neil Richards,
Professor of Law, Washington University School of Law

Beate Roessler,
Professor, University of Amsterdam

Sasha Romanosky,
PhD Student, Carnegie Mellon University

Alan Rubel,
Assistant Professor, University of Wisconsin, Madison

Ira Rubinstein,
Senior Fellow, NYU School of Law

James Rule,
Affiliated Scholar, Center for the Study of Law and Society — UCB

Pamela Samuelson,
Professor, UC Berkeley Law

Albert Scherr,
Professor of Law, University of New Hampshire School of Law

Russell Schrader,
Chief Privacy Officer, Visa Inc.

Jason Schultz,
Professor, UC Berkeley School of Law

Paul Schwartz,
Professor of Law, Berkeley Law School

Mark Seifert,
Partner, Brunswick Group

Wendy Seltzer,
Fellow, Princeton CITP

Divya Sharma,
Graduate Student, Carnegie Mellon University

Robert Sloan,
Professor & Department Head, University of Illinois at Chicago

Christopher Slobogin,
Professor, Vanderbilt University Law School

Christopher Soghoian,
Graduate Fellow, Center for Applied Cybersecurity Research, Indiana University

Daniel Solove,
John Marshall Harlan Research Professor of Law, George Washington University Law School

Ashkan Soltani,
Independent Researcher and Consultant

Tim Sparapani,
Director, Public Policy, Facebook

Robert Sprague,
Associate Professor, University of Wyoming

Valerie Steeves,
University of Ottawa

Katherine Strandburg,
Professor, New York University School of Law

Frederic Stutzman,
Postdoctoral Fellow, Carnegie Mellon University

Harry Surden,
Professor, University of Colorado Law School

Peter Swire,
C. William O’Neill Professor of Law, Ohio State University

Clare Tarpey,
Fellow, New York Civil Liberties Union

Chuck Teller,
President, Catalog Choice

Omer Tene,
Associate Professor, Israeli College of Management Haim Striks School of Law

David Thaw,
Research Associate, University of Maryland, Maryland Cybersecurity Center

Tim Tobin,
Attorney, Hogan Lovells US LLP

Michael Traynor,
Mr., Cobalt LLP

Jennifer Urban,
Asst Clinical Prof of Law, UC-Berkeley, Samuelson Law, Technology & Public Policy Clinic

Colette Vogele,

Shreya Vora,
Fellow, Future of Privacy Forum

Kent Wada,
Dir, Strategic IT & Privacy Policy, UCLA Office of Information Technology

Richard Warner,
Professor, Chicago-Kent College of Law

Daniel Weitzner,
Deputy Chief Technology Officer, White House Office of Science and Technology Policy

Tara Whalen,
IT Research Analyst, Office of the Privacy Commissioner of Canada

Stephen Wicker,
Professor, School of Electrical and Computer Engineering

Lauren Willis,
Professor of Law, Loyola Law School Los Angeles

Peter Winn,
Attorney-Advisor, Office of Legal Counsel/Dept. of Justice

Christopher Wolf,
Co-Chair, Future of Privacy Forum

Felix Wu,
Assistant Professor of Law, Cardozo School of Law

Jane Yakowitz,
Visiting Professor of Law, Brooklyn Law School

Harlan Yu,
Graduate student, Princeton CITP

Tal Zarsky,
Dr., NYU Law School / U. of Haifa

Michael Zimmer,
Assistant Professor, University of Wisconsin-Milwaukee

Tal Zarsky, Data Mining, Personal Information & Discrimination

Comment by: James Rule

PLSC 2011

Published version available here:

Workshop draft abstract:

Governments are extremely interested in figuring out what their citizens are going to do. To meet this ambitious objective, governments began to engage in predictive modeling analyses which they apply to massive datasets of personal information at their disposal, from both governmental and commercial sources. Such analyses are, in many cases, enabled by data mining, which allows for both automation and the revealing of previously unknown patterns. The outcomes of these analyses are rules and associations, providing approximated predictions as to future actions and reactions of individuals. These individualized predictions can thereafter be used by government officials (or possibly solely as part of an automated system) for a variety of tasks. For instance, such input could be used to make decisions regarding the future allocation of resources and privileges. In other instances, these predictions can be used to establish risks posed by specific individuals (in terms of security or law enforcement), while allowing the state to take relevant precautions. The patterns and decision-trees resulting from these analyses (and further applied to the policy objectives mentioned) are very different from those broadly used today to differentiate among groups and individuals; they might include a great variety of factors and variables, or perhaps are only established by an algorithm using ever-changing rules which cannot be easily understood by the observing human.
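The rule extraction described above can be illustrated with a deliberately toy sketch (the function names, the single-threshold "model," and the field names are all illustrative, not drawn from any actual government system): a learner derives a decision rule from a labeled dataset and then applies it to produce individualized predictions for new records.

```python
def learn_rule(records, label_key, feature_key):
    """Learn a single threshold rule 'feature >= t' that best matches the
    labels in the dataset -- in effect, a one-node decision tree."""
    best_t, best_acc = None, -1.0
    for r in records:
        t = r[feature_key]
        acc = sum((rec[feature_key] >= t) == rec[label_key]
                  for rec in records) / len(records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(record, feature_key, threshold):
    # An individualized prediction derived from patterns in the data,
    # not from any explicitly chosen attribute of the person.
    return record[feature_key] >= threshold
```

Even this trivial learner shows the article's point of departure: the resulting rule is "neutral" on its face, yet whatever correlations exist in the underlying data are baked into every individualized decision it produces.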

As knowledge of such ventures is unfolding, and technological advances enable their expansion, policymakers and scholars are quickly moving to point out the determinants of such practices. They are striving to establish which practices are legitimate and which go too far. This article aims to join this discussion, in an attempt to resolve arguments and misunderstandings concerning a central argument raised in this discussion – whether such practices amount to unfair discrimination. This argument is challenging, as governments often treat different individuals differently (and are indeed required to do so). This article sets out to examine why, and in which instances, these practices must be considered discriminatory and problematic when carried out by government.

To approach this concern, the article identifies and explores three arguments in the context of discrimination:

(1) These practices might resemble, enable, or serve as a proxy for various forms of discrimination which are currently prohibited by law: here the article will briefly note these forms of illegal discrimination and the theories underlying their prohibition. It will then explain how these novel forms of allegedly “neutral” models might generate similar results and effects.

(2) These practices might generate stigma and stereotypes for those indicated by this process: This argument is challenging given the fact that these forms of discrimination use elaborate factors and are at times opaque. However, in some contexts these concerns might indeed persist, especially when we fear these practices will “creep” into other projects. In addition, these practices might also signal the governmental “appetite” for discrimination, and set an inappropriate example for the public.

(3) These practices “punish” individuals for things they did not do; or, are premised on what they did (actions) and how they are viewed, but not who they really are: these loosely connected arguments are commonly raised, yet require several in-depth inquiries: are these concerns sound policy considerations or utterances of a Luddite crowd? Are the practices addressed here generating these concerns? In this discussion, the article will distinguish between immutable and changeable attributes.

While examining these arguments, the article emphasizes that if individualized predictive modeling is banned, government will engage in other forms of analysis to distinguish among individuals, their needs and the risks they pose. Thus, predictive modeling would constantly be compared to the common practice of treating individuals as part of predefined groups. Other dominant alternatives are allowing for broad discretion and refraining from differentiated treatment, or doing so on a random basis.

After mapping out these concerns and taking note of the alternatives, the article moves to offer concrete recommendations. While distinguishing among different contexts, it advises when these concerns lead to abandoning the use of such prediction models. It further notes other steps which might be taken to allow these practices to persist – steps which range from closely monitoring decision-making models and making needed adjustments to eliminate some forms of discrimination, to greater governmental disclosure and transparency, and public education. It concludes by noting that in some cases and with specific tinkering, predictive modeling which is premised upon personal information can lead to outcomes which promote fairness and equality.

It is duly noted that perhaps the most central arguments against predictive data mining practices are premised upon other theories and concerns. For instance, often mentioned are the inability to control personal information, the lack of transparency and the existence and persistence of errors in the analysis and decision making process. This article acknowledges these concerns, yet leaves them to be addressed in other segments of this broader research project. Furthermore, it sees great importance in addressing the “discrimination” element separately and directly; at times, the other concerns noted could be resolved. In other instances, the interests addressed here are confused with other elements. For these reasons and others, a specific inquiry is warranted. In addition, the article sets aside the argument that such actions are problematic, as they are premised upon decisions made by a machine, as opposed to a fellow individual. This powerful argument is also to be addressed elsewhere. Finally, the article acknowledges that these policy strategies should only be adopted if proven efficient and effective. This finding must be established by experts on a case-by-case basis. The arguments set out here, though, can also assist in adding additional factors to such analyses.

Addressing the issues at hand is challenging. They present a considerable paradigm shift in the way policymakers and scholars have thought about discrimination and decision-making. In addition, there is the difficulty of applying existing legal paradigms and doctrines to these new concerns: is this question one of privacy, equality and discrimination, data protection, autonomy – something else or none of the above? Rather than tackling this latter question, I apply methodologies from all of the above to address an issue which will no doubt constantly arise in today’s environment of ongoing flows of personal information.

Felix Wu, Privacy and Utility in Data Sets

Comment by: Jane Yakowitz

PLSC 2011

Published version available here:

Workshop draft abstract:

Privacy and utility are inherently in tension with one another.  Information is useful exactly when it allows someone to have knowledge that he would not otherwise have, and to make inferences that he would not otherwise be able to make.  The goal of information privacy is precisely to prevent others from acquiring particular information or from being able to make particular inferences.  Moreover, as others have demonstrated recently, we cannot divide the world into “personal” information to be withheld, and “non-personal” information to be disclosed.  There is potential social value to be gained from disclosing even “personal” information.  And the revelation of even “non-personal” information might provide the final link in a chain of inferences that leads to information we would like to withhold.

Thus, the disclosure of data involves an inherent tradeoff between privacy and utility. More disclosure is both more useful and less private.  Less disclosure is both less useful and more private.  This does not mean, however, that the disclosure of any one piece of information is no different from the disclosure of any other.  Some disclosures may be relatively more privacy invading and less socially useful, or vice versa.  The question is how to identify the privacy and utility characteristics of data, so as to maximize the utility of the data disclosed, and minimize privacy loss.

Thus far, at least two different academic communities have studied the question of analyzing privacy and utility.  In the legal community, this question has come to the fore with recent work on the re-identification of individuals in supposedly anonymized data sets, as well as with questions raised by the behavioral advertising industry’s collection and analysis of consumer data.  In the computer science community, this question has been studied in the context of formal models of privacy, particularly that of “differential privacy.”  This paper seeks to bridge the two communities, to help policy makers understand the implications of the results obtained by formal modeling, and to suggest to computer scientists additional formal approaches that might capture more of the features of the policy questions currently being debated.  We can and should bring to bear both the qualitative analysis of the law and the quantitative analysis of computer science to this increasingly salient question of privacy-utility tradeoffs.
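The formal model the abstract points to, differential privacy, makes the privacy/utility tradeoff quantitative: a single parameter ε controls how much noise is added to a query answer. Below is a minimal sketch of the standard Laplace mechanism for a counting query (the function names are mine; the mechanism itself is the textbook one, not any particular system's implementation).

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """An epsilon-differentially private count.

    A counting query has sensitivity 1 (one person's presence or absence
    changes the count by at most 1), so Laplace noise of scale 1/epsilon
    suffices. Smaller epsilon means stronger privacy and a noisier,
    less useful answer -- the tradeoff discussed above, made explicit.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)
```

With ε = 0.1 the answer to "how many records satisfy this predicate?" is typically off by around ten; with ε = 10 it is usually within a fraction of a count. The disclosure decision becomes a choice of ε rather than an all-or-nothing choice about the data.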

Richard Warner & Robert H. Sloan, Informational Privacy: Norms, Coordination, Hockey Helmets, and a Role for Legislation

Comment by: Anita Allen-Castellitto

PLSC 2011

Workshop draft abstract:

Informational privacy consists in the ability to control what personal information others collect and what they do with it.  We value the control, as over twenty years of studies attest; but, other studies show that we readily trade very personal information for very small rewards.  We offer a solution to this puzzle—a limited solution since we restrict our inquiry to private-sector, commercial contexts.  Our solution provides a general perspective on informational privacy and suggests ways of ensuring sufficient control over personal information. The solution is that the seemingly contradictory attitudes are characteristic of conformity to suboptimal informational norms. This raises three questions.  What is a norm?  What is an informational norm?  And, what is it for a norm to be “suboptimal”?  We take our answers from a general theory of norms and market interactions in our forthcoming (Fall 2011) book, Unauthorized Access: the Crisis in Online Privacy and Security.  Setting informational privacy concerns in this general context reveals important commonalities with other current problems.

We focus on coordination norms.  A coordination norm is a behavioral regularity in a group, where the regularity exists at least in part because almost everyone thinks that he or she ought to conform to the regularity, as long as everyone else does.  Driving on the right is a classic example.  In mass markets, coordination norms promote buyers’ interests by unifying their demands.  A mass-market buyer cannot unilaterally ensure that sellers will conform to his or her requirements; coordination norms create collective demands to which profit-motive driven sellers respond.

The problem on which we focus is that rapid technological change has rendered existing norms “suboptimal.”  There are many optimality notions (Pareto optimality being perhaps the best known); the optimality notion we use is value-optimality.  A norm is value-optimal when (and only when) in light of the values of all (or almost all) members of the group in which the norm obtains, the norm is at least as well justified as any alternative.  We will use “suboptimal” for norms that are not value-optimal.  A classic example of a suboptimal coordination norm is the “no helmet” norm among pre-1979 National Hockey League players.  Not wearing a helmet was a behavioral regularity that existed in part because each player thought he ought to conform, as long as all the others did.  However, because of the value they placed on avoiding head injury, virtually all the players regarded the alternative in which they all wore helmets as better justified.  The players nonetheless remained trapped in the suboptimal “no helmet” norm until the league mandated the wearing of helmets in 1979.  Like the hockey players, we “play without a helmet” when we enter certain types of market transactions:  we are, that is, trapped in what are—now—suboptimal coordination norms.
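The helmet example is a coordination game, and the "trap" can be made concrete with a small best-response simulation (the 90% conformity threshold and the payoff assumptions are mine, chosen purely for illustration): each player wears a helmet only if the league mandates it or if nearly everyone else already does, so the group stays at the no-helmet equilibrium its members disprefer until the rule changes.

```python
def best_response(wearing_fraction: float, mandated: bool) -> bool:
    # Conform to the group: wear a helmet only if the league mandates it,
    # or if nearly everyone else already wears one.
    return mandated or wearing_fraction >= 0.9

def season(n_players: int = 20, rounds: int = 10, mandated: bool = False,
           initially_wearing: int = 0) -> int:
    """Iterate best responses; return how many players end up in helmets."""
    wearing = [i < initially_wearing for i in range(n_players)]
    for _ in range(rounds):
        fraction = sum(wearing) / n_players
        wearing = [best_response(fraction, mandated) for _ in wearing]
    return sum(wearing)
```

Starting from no helmets, the group never moves even though all-helmets is also a stable outcome (a season that starts with everyone in helmets stays there). The 1979 rule corresponds to setting the mandate flag, which shifts every player in a single round, which is the "legislative" solution the paper generalizes to informational norms.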

Informational privacy is a case in point.  As Helen Nissenbaum and others have emphasized, informational norms regulate the flow of personal information in a wide variety of interactions, including market transactions.  Informational norms are norms that constrain the collection, use, and distribution of personal information.  In a range of important cases, such norms are coordination norms that unify buyers’ privacy demands.  The norms are instances of the following pattern: consumers demand that businesses process—collect, use, and distribute—information only in role-appropriate ways.  The problem is that technological advances have so greatly increased the power and breadth of role-appropriate information processing that many norms are no longer value-optimal:  alternatives in which consumers have more control are better justified.  The consequence is an unacceptable loss of control over personal information.

Conformity to suboptimal norms explains the otherwise puzzling fact that consumers value control over personal information while they also surrender control for small rewards.  This is precisely the sort of behavior one sees when groups are trapped in suboptimal norms.  Recall the hockey players.  They did not wear helmets even though their values made “all players wear helmets” a far better justified alternative.  Similarly, consumers conform to suboptimal informational norms even though their values make “consumers have more control” a far better justified alternative.  Trading privacy for small rewards is just norm-conforming behavior; however, when asked about their values, consumers indicate that they value control.  Their problem is that, like hockey players, consumers cannot break free of the suboptimal norm.

The solution to the hockey players’ problem was “legislative”:  the league mandated that every player wear a helmet.  We offer a similar solution:  a model in which appropriate legislation gives rise to value-optimal informational norms.  The model applies in a wide variety of cases in which rapid change has outstripped the evolution of norms and thus underscores the fact that issues about informational privacy share important similarities with other types of suboptimal norms that currently govern various market transactions.

Colette Vogele & Erica Johnstone, Without My Consent

Comment by: Christopher Wolf

PLSC 2011

Workshop draft abstract:

Without My Consent is a web-based project to combat online invasions of privacy:

It’s no secret that the use of private information to harm a person’s reputation through public humiliation and harassment of the most intimate sort is an increasingly popular tactic employed by harassers.  Some examples include: the nightmare ex who posts sexually explicit photos and videos online or threatens to do so; the abusive ex who procured those images using threats or coercion; the high school boyfriend who videotapes a sexual encounter and shares the video with everyone in school; and the Peeping Tom who surreptitiously records and uploads images to pornographic websites.  Because of the online (“cyber”) nature of the activity, victims are often left with no clear path to justice to restore their reputation and overcome the serious harms caused by the harassment.  This is because the defendants are anonymous, and the websites may elect not to remove the content when requested. In instances when the content is removed, it more than likely will reappear on the same or a different site, and then, within a short period of time, is indexed by search engines under the individual’s name or other indicia of her identity (e.g., unique avatars, handles, and usernames).

The Without My Consent website, which we are workshopping at the PLSC, is intended to empower individuals harmed by online privacy violations to stand up for their rights. The beta launch of the site (set for Summer 2011) will focus on the specific problem of the publication of private images online. It will provide legal and non-legal tools for combating the problem, and encourage the development of case law on anonymous-plaintiff lawsuits.  Our hope is that the site will also inspire meaningful debate about the internet, accountability, free speech, and the serious problem of online invasions of privacy.

Peter Swire, Social Networks, Privacy, and Freedom of Association

Comment by: Harry Surden

PLSC 2011

Published version available here:

Workshop draft abstract:

This article introduces a topic that has not drawn attention to date but will undoubtedly become important to the legal treatment of social networks.  Many people have written insightfully about the individual’s right to privacy, and how that can be threatened when “friends” or others use social networks to spread information about you without your consent.  Other people have written about how social networks are powerful tools for political mobilization, fostering the freedom of association.  Strangely enough, my research has not found any analysis of how the two fit together — how freedom of association interacts with privacy protection.

This discussion draft highlights the profound connection between social networking and freedom of association.  At the most basic level, linguistically, “networks” and “associations” are close synonyms.  They both depend on “links” and “relationships.”  If there is a tool for lots and lots of networking, that is also a tool for how we do lots and lots of associations.

I stumbled into this topic due to a happenstance of work history.  I have long worked and written on privacy and related information technology issues, including as the Chief Counselor for Privacy under President Clinton.  Then, during the Obama transition, I was asked to be counsel to the New Media team.  These were the people who had done such a good job at grassroots organizing during the campaign.  During the transition, the team was building New Media tools for the transition website and into the overhaul of

My experience historically had been that people on the progressive side of politics often intuitively support privacy protection.  They often believe that “they” — big corporations or law enforcement — will grab our personal data and put “us” at risk.  The Obama New Media folks, by contrast, often had a different intuition.  They saw personal information as something that “we” use.  Modern grassroots organizing seeks to go viral, to galvanize one energetic individual who then gets his or her friends and contacts excited, and so on. In this New Media paradigm, “we” the personally motivated use emails, texts, and other outreach tools to tell our friends and associates about the campaign and remind them to vote.  We may reach out to people we don’t know or barely know but who have a shared interest — the same college club, rock band, religious group, or whatever.  In this way, “our” energy and commitment can achieve scale and effectiveness.  The tools provide “data empowerment” — ordinary people can do things with personal data that only large organizations used to be able to do.

This shift from only “them” using the data to “us” being able to use the data tracks the changes in information technology since the 1970s, when the privacy fair information practices were articulated and the U.S. passed the Privacy Act.   In the 1970s, personal data resided in mainframe computers.  These were operated by big government agencies and the largest corporations.  Today, by contrast, my personal computer has more processing power than an IBM mainframe from thirty years ago. My home has a fiber optic connection, so bandwidth is rarely a limitation.  Today, “we” own mainframes and use the Internet as a global distribution system.

To explain the interaction between privacy and freedom of association, this paper has three sections.  The first section explains how privacy debates to date have often pitted the “right to privacy” against utilitarian arguments in favor of data use.  It then shows how social networks are major enablers of the right of freedom of association.  Rules about information flows therefore involve individual rights on both sides, so advocates for either right need to address how to take account of the opposing right.  The second section shows step by step how U.S. law will address the competing claims of the right to privacy and freedom of association.  The outcome of litigation will depend on the facts of a particular case, but the legal claims arising from freedom of association appear relevant to a significant range of possible privacy rules that would apply to social networks.  Based on precedent, strict scrutiny may apply to material infringements on freedom of association.  The third section explains how the arguments of Professor Katherine Strandburg fit into the overall analysis.  She has written about a somewhat different interaction between privacy and freedom of association, in which the right of freedom of association limits the power of government to require an association to reveal its members.

Christopher Slobogin, The Future of the Fourth Amendment in a Technological Age

Comment by: Dorothy Glancy

PLSC 2011

Published version available here:

Workshop draft abstract:

The Fourth Amendment is becoming increasingly irrelevant as technology expands police capacity to intrude.  Supreme Court jurisprudence defining “search” for Fourth Amendment purposes—the public exposure doctrine, the general public use doctrine, the contraband-specific doctrine, and the assumption of risk doctrine—leaves a vast number of technologically enhanced search techniques unregulated.  The special needs doctrine leaves many other technological searches, especially those of groups, essentially unregulated.  Yet the damage to privacy and related interests caused by these “virtual searches” can be just as significant as the harm associated with physical searches.  Two reforms are proposed.  Where police target a particular individual, innuendo in Supreme Court decisions, suggesting that “search” might eventually be defined as a layperson would define it and that the justification for searches should be proportionate to the degree of invasion, can form the basis for a regulatory regime that provides meaningful privacy protection without handcuffing the police.  When, instead, government uses technology to conduct mass searches, political process theory—less deferential to law enforcement than special needs doctrine, but more deferential than strict scrutiny analysis—may provide the optimal method of cabining government power.

Paul M. Schwartz & Daniel J. Solove, The PII Problem: Privacy and a New Concept of Personally Identifiable Information

Comment by: Rick Kunkel

PLSC 2011

Published version available here:

Workshop draft abstract:

Personally identifiable information (PII) is one of the most central concepts in information privacy regulation.  The scope of privacy laws typically turns on whether PII is involved.  The basic assumption behind the applicable laws is that if PII is not involved, then there can be no privacy harm.  At the same time, there is no uniform definition of PII in information privacy law.  Moreover, computer science has shown that the very concept of PII can be highly malleable.

To demonstrate the policy implications of the failure of the current definitions of PII, this Article examines current practices of behavioral marketing.  In their use of targeted technologies, companies direct offerings to specific consumers based on information collected about their characteristics, preferences, and behavior.  Behavioral marketing has enormous implications for privacy, yet the present regulatory regime with PII as the cornerstone has proven incapable of an adequate response.  Behavioral marketers are able to engage in their targeting practices without the use of what most laws consider to be PII.  Despite this fact, behavioral marketing causes privacy problems that should be addressed.  Other practices not involving PII as traditionally formulated also lead to problems.  Since PII defines the scope of so much privacy regulation, the concept of PII must be rethought.  In this Article, we argue that PII cannot be abandoned; the concept is essential as a way to define regulatory boundaries.  Instead, we propose a new conception of PII, one that will be far more effective than current approaches.

This Article proceeds in four steps.  First, we develop a typology of PII that shows three basic approaches in United States law to defining this term.  As part of this typological work, the Article traces the historical development of the jurisprudence of PII and demonstrates that this term only became important in information privacy law in the late 1960s with the rise of the computer’s data processing.  Second, we use behavioral marketing, with a special emphasis on food marketing to children, as a test case for demonstrating the severe flaws in the current definitions of PII.  Third, we discuss broader policy concerns with PII as it is conceptualized today.  Finally, this Article develops an approach to redefining PII based on the rule-standard dichotomy.  Drawing on the law of the European Union, we propose a new concept of PII that protects information that relates either to an “identified” or “identifiable” person.  We conclude by showing the merits of this new approach in the context of behavioral marketing and in meeting the other policy concerns with the current definitions of PII.

Ira Rubinstein, Regulating Privacy by Design

Comment by: Marilyn Prosch & Ken Anderson

PLSC 2011

Published version available here:

Workshop draft abstract:

Privacy officials in Europe and North America are embracing Privacy by Design (PbD) as never before. PbD is the idea that “building in” privacy throughout the design and development of products and services achieves better results than “bolting it on” as an afterthought. However enticing this idea may be, what does it mean? In the US, a very recent FTC Staff Report makes PbD one of three main components of a new privacy framework. According to the FTC, firms should adopt PbD by incorporating substantive protections into their development practices (such as data security, reasonable collection limitations, sound retention practices, and data accuracy) and implementing comprehensive data management procedures; the latter may also require a privacy impact assessment (PIA) where appropriate. In contrast, European privacy officials view PbD as also requiring the broad adoption of Privacy Enhancing Technologies (PETs), especially PETs that shield or reduce identification or minimize the collection of personal data. Despite the enthusiasm of privacy regulators, neither PbD nor PIAs nor PETs have achieved widespread acceptance in the marketplace.

There are many reasons for this, not the least of which is a lack of clarity over the meaning of these terms, how they relate to one another, and what rules apply when a firm undertakes the PbD approach. In addition, Internet firms derive much of their profit from the collection and use of PII, and PbD may therefore disrupt profitable activities or new business ventures. Although the European Commission sponsored a study of the economic costs and benefits of PETs, and the UK is looking at how to improve the business case for investing in PbD, the available evidence does not support the view that PbD pays for itself (except for a small group of firms who must protect privacy to maintain highly valued brands and avoid reputational damage). In the meantime, the regulatory implications of PbD are murky at best, not only for firms that might adopt this approach but for free riders as well. Indeed, discussion of the economic or regulatory incentives for PbD is sorely lacking in the FTC report.

This Article seeks to clarify the meaning of PbD and thereby suggest how privacy officials might develop appropriate regulatory incentives that offset the certain economic costs and uncertain privacy benefits of this new approach. It begins by developing an analytic framework around two sets of distinctions. First, it classifies PETs as substitutes or complements depending on their interaction with data protection or privacy law.  Substitute PETs aim for zero disclosure of PII, whereas complementary PETs enable greater user control over personal data through enhanced notice and choice. Second, it distinguishes two forms of PbD: one in which firms build in privacy protections by using PETs, and another in which they rely on engineering approaches and related tools that implement FIPPs throughout both the product development and the data management lifecycles.  Building on these distinctions, and using targeted advertising as its primary illustration, it then suggests how regulators might achieve better success in promoting the use of PbD by 1) identifying best practices in privacy design and development, including prohibited practices, required practices, and recommended practices; and 2) situating best practices within an innovative regulatory framework that a) promotes experimentation with new technologies and engineering practices; b) encourages regulatory agreements through stakeholder representation, face-to-face negotiations, and consensus-based decision making; and c) supports flexible, incentive-driven safe harbor mechanisms as defined by (newly enacted) privacy legislation.

Beate Roessler & Dorota Mokrosinska, The social value of privacy

Comment by: Ian Kerr

PLSC 2011

Workshop draft abstract:

Threats to, and the protection of, individual privacy have been the subject of public and scholarly debate for some time. This debate focuses on the significance of privacy for individual persons: How is a right to privacy justified? How are privacy, freedom, dignity and autonomy related? How important is the protection of individual privacy in a liberal constitutional state?  Recently, a novel perspective has emerged in the debate on privacy. A number of scholars have argued that the significance of privacy goes beyond the individual interests it protects. By protecting individual privacy, we protect not only the interests of individuals, but also the interests of society. Besides its value for individuals, privacy also has an irreducibly social value.

We intend to show that only by analyzing and conceptualizing the social value of privacy can we move beyond the normative problems that confront us when we focus on the protection of individual (informational) privacy. This set of normative problems finds expression in a conflict often invoked between individuals and society: individual interests in privacy (and freedom) are directly opposed to societal interests in safety and protection from terrorism, as well as efficient administration or effective healthcare. The impression then arises that the one, i.e. the protection of privacy, needs to be taken less seriously in order to better satisfy the other, i.e. societal interests. That this description of the relationship between individual persons and their social and societal contexts is problematic and misleading only becomes clear when we analyse the social value of privacy.

We will proceed as follows: In a first step, we shall address in full the discussion centering on a right to individual privacy, as well as the embedding of this discussion in liberal theory (I). We shall then explain the different ways in which the social value of privacy can be conceptualized, as well as the attempts made to date in the literature (II). This leads us to distinguish three paradigmatic types of relationships and social practices, each reliant on the protection of different types of privacy, which we discuss in turn: intimate relationships (between family members or friends) (III), professional relationships (IV), and interaction between strangers (V). In a final step, we shall analyse the different forms of the social value of privacy thus distinguished; how exactly these forms relate to one another; and what this means for potential conflicts between the interests of society and the protection of individual privacy (VI).