Archives

Nick Doty and Deirdre K. Mulligan, Standardizing Do Not Track: How Participants See the Process

PLSC 2013

Workshop draft abstract:

Who really participates in the DNT standardization process? What kinds of positions are represented and what kinds of people are actively involved? How do those participants see the process? And what defines the process? (Beyond the World Wide Web Consortium’s Tracking Protection Working Group, discussions at various levels of formality take place in a number of distinct fora.) As part of a larger project exploring how engineers and standards development participants make decisions that affect privacy, we discuss initial results from interviews, textual analysis and participant observation.

While the concerns regarding procedural and substantive fairness we highlighted previously are themselves raised by participants and observers in the process, we also identify concerns around trust and communication. Finally, participants’ statements support a particular theory of values in design, with its own challenges and opportunities for privacy-by-design.

Kenneth Bamberger and Deirdre Mulligan, Privacy in Europe: Initial Data on Governance Choices and Corporate Practices

Comment by: Dennis Hirsch

PLSC 2013

Workshop draft abstract: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2328877

Privacy governance is at a crossroads.  In light of the digital explosion, policymakers in North America and Europe are revisiting regulation of the corporate treatment of information privacy.  The recent celebration of the thirtieth anniversary of the Organization for Economic Cooperation and Development’s (“OECD”) Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,[1] the first international statement of fair information practice principles, sparked an international review of the guidelines to identify areas for revision.  Work by national data privacy regulators reviewing the E.U. Data Protection Directive has in turn suggested alternative regulatory models oriented around outcomes.[2]  The European Commission is actively debating the terms of a new Privacy Regulation.[3]  And Congress, the FTC, and the current U.S. presidential administration have signaled a commitment to deep reexamination of the current regulatory structure, and a desire for new models.[4]

These efforts, however, have lacked critical information necessary for reform.  Scholarship and advocacy around privacy regulation have focused almost entirely on law “on the books”—legal texts enacted by legislatures or promulgated by agencies.  By contrast, the debate has strangely ignored privacy “on the ground” – the ways in which corporations in different countries have operationalized privacy protection in the light of divergent formal laws; interpretive, organizational and enforcement decisions made by local administrative agencies; and other jurisdiction-specific social, cultural and legal forces.

Since 1994, when such a study examined the U.S. privacy landscape,[5] no sustained inquiry has been conducted into how corporations actually manage privacy in the shadow of formal legal mandates.  No one, moreover, has ever engaged in such a comparative inquiry across jurisdictions.  Indeed, despite wide international variation in approach, even the last detailed comparative account of enforcement practices occurred over two decades ago.[6]  Thus policy reform efforts progress largely without a real understanding of the ways in which previous regulatory attempts have actually promoted, or thwarted, privacy’s protection.

This article is the third documenting a project intended to fill this gap – and at a critical juncture.  The project uses qualitative empirical inquiry—including interviews with and surveys of corporate privacy officers, regulators, and other actors within the privacy field—to identify the ways in which privacy protection is implemented on the ground, and the combination of social, market, and regulatory forces that drive these choices.  And it offers a comparative analysis of the effects of different regulatory approaches adopted by a diversity of OECD nations, taking advantage of the living laboratory created by variations in national implementation of data protection, an environment that can support comparative, in-the-wild assessments of their ongoing efficacy and appropriateness.

While the first two articles in this series discussed research documenting the implementation of privacy in the United States,[7] this article presents the first analysis of such data from Europe, reflecting research and interviews in three EU jurisdictions: Germany, Spain, and France.

The article reflects only the first take at this recently-gathered data; the analysis is not comprehensive, and the lessons drawn at this stage are necessarily tentative.  A complete consideration of the research on the privacy experience in five countries (the US, Germany, France, Spain, and the UK) – one which more generally draws lessons for broader research on paradigms for thinking about privacy, the effectiveness of corporate practices informed by those paradigms, and organizational compliance with different forms of regulation and other external norms more generally – will appear in an upcoming book-length treatment.[8]

Yet this article offers as-yet unavailable data about the European privacy landscape at a critical juncture – the moment at which policymakers are engaged in important decisions about which regulatory structures to expand to all EU member states, and which to leave behind; about how those individual states will structure the administrative agencies governing data privacy moving forward; and about the strategies those agencies will adopt regarding legal enforcement, the development of expertise within both the government and firms, and the ways that other participants within the privacy “field”[9] – the constellation of organizational actors participating in the construction of legal meaning in a particular domain – will (or will not) best be enlisted to shape corporate decisionmaking and ultimately privacy outcomes.

Setting the context for this analysis, Part I of this Article describes the dominant narratives regarding the regulation of privacy in the United States and the European Union – accounts that have occupied privacy scholarship and advocacy for over a decade.  Part II summarizes our project to develop more granular accounts of the privacy landscape, and the resulting scholarship’s analyses of privacy “on the ground” in the U.S.  Informed by these analyses, Part III presents the results of our research regarding corporate perception and implementation of privacy requirements in three European jurisdictions – Germany, Spain and France – and places them within the theoretical framework regarding emerging best practices in the U.S.  Not surprisingly for those familiar with privacy protection in Europe, these results reveal widely varying privacy landscapes, all within the formal governance of a single legal framework: the 1995 EU Privacy Directive.  More striking, however, are the granular differences among the European jurisdictions, and the similarities between German and U.S. firms in the language in which privacy is discussed, the particular mandates and institutions shaping privacy’s governance, and the architecture for privacy protection and decisionmaking.

This Part then seeks to understand the construction of the privacy “field” that shapes these differing country landscapes.  Such inquiry includes the details of national implementation of the EU directive – including the specificity and type of requirements placed on regulated parties; the content of regulation, with particular attention to the comparative focus on process-based as opposed to substantive mandates; and the use of ex ante guidance as opposed to prosecution and enforcement – as well as the structure and approach of the relevant data protection agency, including the size and organization of the staff; the degree to which it relies on technical and legal “experts” inside the agency, rather than inside the companies it regulates; the use of enforcement and inspections; and the manner in which regulators and firms interact more generally.  Yet it also includes an understanding of factors beyond privacy regulation itself, including other legal mandates, elements characteristic of national corporate structure, and societal factors, such as the roles of the media and other citizen, industry, labor, or professional organizations that determine the “social license” that governs a corporation’s freedom to act.

Finally, the Article’s Part IV outlines two elements of a new account of privacy’s development, informed by comparative analysis.  First, based on the data from four jurisdictions, it engages in a preliminary analysis regarding which elements of these privacy fields our interviews and other data suggest have fostered, catalyzed and permitted the most adaptive responses in the face of novel challenges to privacy.  Second, it suggests something important about the role of professional networks in the diffusion of practices across jurisdictional lines in the face of important social and technological change.  The adaptability of distinct regulatory approaches and institutions in the face of novel challenges to privacy has never been more important.  Our comparative analysis provides novel insight into the ways that different regulatory choices have interacted with other aspects of the privacy field to shape corporate behavior, offering important insights for all participants in policy debates about the governance of privacy.


[1] See The 30th Anniversary of the OECD Privacy Guidelines, OECD, www.oecd.org/sti/privacyanniversary (last visited Jan. 22, 2013).

[2] See, e.g., Neil Robinson et al., RAND Eur., Review of the European Data Protection Directive (2009).

[3] See, e.g., Konrad Lischka & Christian Stöcker, Data Protection: All You Need to Know about the EU Privacy Debate, Spiegel Online (Jan. 18, 2013, 10:15 AM), http://www.spiegel.de/international/europe/the-european-union-closes-in-on-data-privacy-legislation-a-877973.html.

[4] See, e.g., Adam Popescu, Congress Sets Sights On Fixing Privacy Rights, readwrite (Jan. 18, 2013), http://readwrite.com/2013/01/18/new-congress-privacy-agenda-unvelied; F.T.C. and White House Push for Online Privacy Laws, N.Y. Times, May 10, 2012, at B8, available at http://www.nytimes.com/2012/05/10/business/ftc-and-white-house-push-for-online-privacy-laws.html?_r=0; Fed. Trade Commission, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers (Mar. 2012), available at http://www.ftc.gov/os/2012/03/120326privacyreport.pdf.

[5] See H. Jeff Smith, Managing Privacy: Information Technology and Corporate America (1994).

[6] See David H. Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada, & the United States (1989).

[7] See Kenneth A. Bamberger & Deirdre K. Mulligan, New Governance, Chief Privacy Officers, and the Corporate Management of Information Privacy in the United States: An Initial Inquiry, 33 Law & Pol’y 477 (2011); Kenneth A. Bamberger & Deirdre K. Mulligan, Privacy on the Books and on the Ground, 63 Stan. L. Rev. 247 (2011).

[8] Kenneth A. Bamberger & Deirdre K. Mulligan, Catalyzing Privacy: Lessons From Regulatory Choices and Corporate Decisions on Both Sides of the Atlantic (MIT Press, forthcoming 2014).

[9] See Paul J. DiMaggio & Walter W. Powell, The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields, 48 Am. Soc. Rev. 147, 148 (1983) (defining an organizational field as “those organizations that, in the aggregate, constitute a recognized area of institutional life: key suppliers, resource and product consumers, regulatory agencies, and other organizations that produce similar services or products.”); Lauren B. Edelman, Overlapping Fields and Constructed Legalities: The Endogeneity of Law, in Private Equity, Corporate Governance and the Dynamics of Capital Market Regulation 55, 58 (Justin O’Brien ed., 2007) (defining a legal field as “the environment within which legal institutions and legal actors interact and in which conceptions of legality and compliance evolve”).

Cynthia Dwork & Deirdre K. Mulligan, Aligning Classification Systems with Social Values through Design

Comment by: Joseph Turow

PLSC 2012

Workshop draft abstract:

Ad serving services, search engines, and passenger screening systems all rely on mathematical models to make value judgments about how to treat people: what ads to serve them, what URLs to suggest, whether to single them out for extra scrutiny or prohibit them from boarding a plane.  Each of these models presents multiple points for value judgments: which data to include; how to weigh and analyze the data; whether to prefer false positives or false negatives in the identified sets.  Whether these judgments are thought of as value judgments, whether the value judgments reflected in the systems are known to those who rely on or are subject to them, and whether they are viewed as interrogable varies by context.
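
One of these points can be made concrete.  The following minimal sketch (the scores, labels, and thresholds are hypothetical, written for illustration rather than drawn from any deployed system) shows how the bare choice of a decision threshold encodes a preference between false positives and false negatives:

    def confusion_counts(scores, labels, threshold):
        """Count false positives and false negatives at a given threshold."""
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
        return fp, fn

    scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # model risk scores (hypothetical)
    labels = [1, 0, 1, 1, 0, 0]              # ground truth (hypothetical)
    for t in (0.25, 0.5, 0.75):
        fp, fn = confusion_counts(scores, labels, t)
        print(f"threshold={t}: false positives={fp}, false negatives={fn}")

Raising the threshold trades false positives for false negatives; no setting is value-neutral, and nothing in the mathematics dictates which error should be preferred.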

Currently, objections to behavioral advertising systems are framed first and foremost as concerns about privacy.  The collection and use of detailed digital dossiers of information about individuals’ online behavior that reflects not only commercial activities but also the political, social, and intellectual life of internet users is viewed as a threat to privacy.  Privacy objections run the gamut from objections to the surreptitious acquisition of personal information, to the potential sensitivity of the data, to the retention and security practices of those handling the data, as well as the possibility that it will be accessed and used by additional parties for additional purposes.  Framed as privacy concerns, the responses to these systems—both policy and technical—aim to provide internet users with the ability to limit or modify their participation in them.

This is an incomplete response that stems in part from an imprecise documentation of the objections.  Objections to behavioral advertising systems in large part stem from concerns about their power to invisibly shape and control individuals’ exposure to information.  This power raises a disparate set of concerns, including the potential of such algorithms to discriminate against or marginalize specific populations, and the potential to balkanize and sort the population through narrowcasting, thereby undermining the shared public sphere.  While often framed first as privacy concerns, these objections raise issues of fairness and are better understood as concerns with social justice and related, but not synonymous, concerns about social fragmentation and its impact on deliberative democracy.

Our primary focus is on this set of objections to behavioral advertising that lurk below the surface of privacy discourse.

The academic and policy communities have wrestled with this knot of concerns in other settings and have produced a range of policy solutions, some of which have been adopted.  Policy solutions to address concerns related to segmentation have focused primarily on limiting its impact on protected classes.  They include the creation of “standard offers” made equally available to all; the use of test files to identify biased outputs based on ostensibly unbiased inputs; and required disclosure of categories, classes, inputs, and algorithms.  More recently, researchers and practitioners in computer science have developed technical approaches to mitigate the ethical issues presented by algorithms.  They have developed a set of methods for conforming algorithms to external ethical commands; advocated systems that push value judgments off to end users (for example, whether to err toward false positives or false negatives); developed techniques for formalizing fairness in classification schemes; and advocated approaches that expose embedded value judgments and allow users to manipulate and experience the outcomes various values produce.  They have also used data mining to reveal discriminatory outputs.
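
To illustrate what formalizing fairness in a classification scheme can look like, here is a minimal sketch of one well-known formalization, demographic parity (equal positive-decision rates across groups).  The data and group labels are hypothetical, and this is only one of several competing definitions in the literature:

    def positive_rate(decisions, groups, group):
        """Share of members of `group` receiving a positive decision (1)."""
        member_decisions = [d for d, g in zip(decisions, groups) if g == group]
        return sum(member_decisions) / len(member_decisions)

    def demographic_parity_gap(decisions, groups):
        """Largest difference in positive-decision rates across groups."""
        rates = [positive_rate(decisions, groups, g) for g in set(groups)]
        return max(rates) - min(rates)

    # 1 = shown the offer / cleared screening, 0 = not (hypothetical data)
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_gap(decisions, groups))  # prints 0.5

A nonzero gap flags that the classifier treats the groups differently; whether, and how far, that gap should be driven toward zero is precisely the kind of value judgment the paper argues is embedded in these systems.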

Given the intersecting concerns with privacy and classification raised by behavioral advertising, proposed policy and technical responses to date are quite limited.  Regulatory efforts are primarily concerned with addressing the collection and use of data to target individuals for advertising. Similarly, technical efforts focus on limiting the use of data for advertising purposes or preventing its collection. The responses are largely silent on the social justice implications of classification.

This paper teases out this latter set of concerns with the impact of classification.  As in other areas, digging below the rhetoric of privacy one finds a variety of outcome-based objections that reflect political commitments to equality, opportunity, and community.  We then examine concerns about, and responses to, classification in other areas, and consider the extent to which computer science methods and tools can be deployed to address this set of concerns with classification in the behavioral advertising context.  We conclude with some broader generalizations about the role of policy and technology in addressing this set of concerns in automated decisionmaking systems.

Nick Doty & Deirdre Mulligan, The technical standard-setting process and regulating Internet privacy: a case study of Do Not Track

Comment by: Jon Peha

PLSC 2012

Workshop draft abstract:

Regulating Internet privacy involves understanding rapidly-changing technology and reflecting the diverse policy concerns of stakeholders from around the world. Technical standard-setting bodies provide the promise of software engineering expertise and a stable consensus process created for interoperability. But does the process reflect the breadth and depth of participation necessary for self- and co-regulation of online privacy? What makes a standard-setting or regulatory process sufficiently “open” for the democratic goals we have for determining public policy?

Drawing from literature in organizational theory, studies of standards development organizations and cases of environmental conflict resolution, this paper explores the applicability of consensus-based standard-setting processes to Internet policy issues. We use our experience with the ongoing standardization of Do Not Track at the World Wide Web Consortium (W3C) to evaluate the effectiveness of the W3C process in addressing a current, controversial online privacy concern. We also develop success criteria with which the privacy professional and regulatory community can judge future “techno-policy standards”.

While the development of techno-policy standards within consortia like the W3C and the Internet Engineering Task Force shows promise for technocratic and democratic regulation, success depends on particular properties of the participation model, the involvement of policymakers and even the technical architecture.

Colin J. Bennett & Deirdre K. Mulligan, Privacy on the Ground Through Codes of Conduct: Lessons from Canada

Comment by: Robert Gellman

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2230369

Workshop draft abstract:

The recent White Paper on privacy from the U.S. Department of Commerce encourages “the development of voluntary, enforceable privacy codes of conduct in specific industries through the collaborative efforts of multi-stakeholder groups, the Federal Trade Commission, and a Privacy Policy Office within the Department of Commerce.”  The policy envisages coordination of multi-stakeholder groups through a new Privacy Policy Office, which would work with the FTC “to develop voluntary but enforceable codes of conduct… Compliance with such a code would serve as a safe harbor for companies facing certain complaints about their privacy practices.”

Privacy codes of practice have extensive histories in a number of countries outside the United States.  At various times they have been adopted to anticipate privacy legislation, to supplement privacy legislation, to pre-empt privacy legislation and to implement privacy legislation. This paper draws upon international experiences and interviews with chief privacy officers to offer important lessons for American policy-makers about how codes of practice might best encourage privacy protection “on the ground.”

Despite obvious differences, the Canadian policy experience may be especially instructive.  Private sector regulation was originally based on a bottom-up approach, through which legislation (the Personal Information Protection and Electronic Documents Act of 2000) was based on a voluntarily negotiated standard through the Canadian Standards Association (CSA).  This in turn was based on existing sectoral codes of practice, of the kind envisaged by the US Department of Commerce.  What has been the experience over the last decade?  What useful lessons can be drawn for US policy?  What are the economic, technological, legal and social conditions under which codes of practice might promote better privacy protection?

Eric Goldman, In Defense of 47 U.S.C. §230

Comment by: Deirdre Mulligan

PLSC 2011

Workshop draft abstract:

47 U.S.C. §230 is the most important Cyberlaw statute, but it keeps attracting new critics.  This Essay responds to those critics by analyzing a previously under-explored policy justification for the statute.  Section 230 works because it enables online publishers to obtain non-public information about marketplace offerings and publish that information in ways that help consumers make better decisions.  As a result, §230 helps the marketplace’s “invisible hand” work more effectively—a crucial social benefit that we should not jeopardize by modifying the statute.

Deirdre Mulligan & Colin Koopman, A Multi-Dimensional Analysis of Theories of Privacy

Comment by: Harry Surden

PLSC 2010

Workshop draft abstract:

The concept of privacy, despite its centrality for contemporary liberal democratic culture, is remarkably ill-understood.  We face today an almost dizzying array of diverging and conflicting theorizations, conceptualizations, diagnoses, and analyses of privacy.

These multiple senses of privacy provoke uncertainty about the concept and attendant charges of ambiguity and vagueness.  While this uncertainty is a cause for concern, we argue here that the conceptual plurality of privacy with which we are faced today positively answers to the dynamic and diverse functions that privacy performs in our culture.  In order to appreciate the positive benefits of privacy’s plurality, however, we need to undertake inquiries into the various ways in which our conceptions of privacy differ from one another.  Our primary claim is that the multiple dimensions along which concepts of privacy vary demand careful scrutiny and evaluation.

Short of that, we may too easily find ourselves overwhelmed with an abundance of claims concerning privacy, and this abundance may induce a dizzying rather than a dynamic uncertainty.  The article proceeds as follows.  Section 1 presents an introduction to the plurality of privacy.  Section 2 argues on behalf of a multi-dimensional taxonomy for privacy theories that would enable us to work with privacy concepts in a more nuanced manner than is typical.  Section 3 presents a categorization of extant theories of privacy according to this taxonomy and Section 4 explicates these theories.  Section 5 offers a brief conclusion about the potential upside of our multi-dimensional approach.

Deirdre K. Mulligan & Joseph Simitian, Creating a Flexible Duty of Care to Secure Personal Information

Comment by: Deirdre Mulligan

PLSC 2008

Workshop draft abstract:

The use of compulsory information disclosures as a regulatory tool is recognized as an important, modern development in American law.  The Toxics Release Inventory (TRI), a publicly available EPA database that contains information on toxic chemical releases and other waste management activities, established under the Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA), is a widely studied example of the potential power of these comparatively light-weight regulatory interventions.  The EPCRA has been credited with providing incentives for reductions in and better management of toxic chemicals by firms eager to avoid reporting releases.  It has also been credited with providing information essential to citizen and government engagement and action.

Drawing from a wide body of literature documenting how and why the EPCRA led to dramatic reductions in toxic releases, the paper considers the extent to which security breach notification laws are likely to produce similar results.  Anecdotal evidence and some qualitative research indicate that the security breach notification laws have created incentives for businesses to better secure personal information.  The law has encouraged investments in computer security as well as the development of new corporate policies.  The desire to avoid incidents that trigger the reporting requirement has led businesses to reconsider decisions about where data is stored, who has access to it, and under what circumstances and with what protections it can reside on portable devices or media, and to generate more detailed mechanisms for both controlling and auditing information access events.  The authors, who, respectively, advised upon and authored California’s security breach notification law (AB 700/SB 1386), conclude that, in contrast to previous prescriptive regulation, the reporting requirement created an evolving standard of care – in effect a race, or at least a rise, to the top – but that, due to characteristics of information breaches and aspects of the current laws, it has not engendered citizen engagement and organization similar to that of the EPCRA.

Deirdre Mulligan & Ken Bamberger, From Privacy on the Books to Privacy on the Ground: the Evolution of a New American Metric

Comment by: Jeff Sovern

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1568385

Workshop draft abstract:

The sufficiency of U.S. information privacy law is the subject of heated debate.  A majority of privacy scholars and advocates contend that the existing patchwork of U.S. regulation fails to ensure across-the-board conformity with the standard measure of privacy protection: the Fair Information Practice Principles (FIPPS) first articulated in the early 1970s.  U.S. law, they argue, further falls far short of the EU’s omnibus privacy regime, thereby failing to protect against a variety of privacy-based harms.  A smaller group of scholars similarly fault the U.S. for latching onto a watered-down version of FIPPS that emphasizes the procedural requirements of notice and individual choice to the exclusion of a substantive consideration of the harms and benefits to society as a whole that result from flows of personal information, and that in the process has created bureaucracy in lieu of privacy protection.

These critiques’ positive claims regarding U.S. law’s departure from FIPPS are largely true.  Yet, we argue, these debates generate far more heat than light as to the question of what laws provide meaningful privacy protection.  The emphasis on measuring U.S. privacy protection by the FIPPS metric simply misses the mark: focusing on a largely procedural standard offers limited utility in guiding corporate decisionmaking to protect privacy.  It thus ignores important shifts in the conception of privacy—and therefore, perhaps, how the success of its protection should be assessed—in the United States.

This paper—the first in a series drawing on a qualitative empirical study of privacy practices in U.S. corporations—argues instead that FIPPS no longer represents either the exclusive goal of U.S. privacy policy or the sole metric appropriate for assessing privacy protection.  By contrast, this article demonstrates that U.S. information privacy policy over the last decade, as understood by both regulators and those firms implementing privacy measures through regulatory compliance, evidences a second—and very “American”—definition of informational privacy.  As demonstrated both by the institutional choices regarding privacy regulation and by qualitative data regarding corporate privacy practices, informational privacy protection in the U.S. today is rooted, not in fair notice and process, but in substantive notions of consumer expectations and consumer harm.  The corporate practices resulting from the “expectations and harm” definition of privacy, in turn, often offer the promise of far greater substantive privacy protection than any FIPPS regime could provide.

This initial effort to inquire as to how the form and oversight structure of information privacy law influences its implementation and effect illustrates the value of “holistic evaluation(s) of privacy protection systems” recommended by Charles Raab.  Looking at rights and obligations on paper is insufficient to guide policy: better privacy protection requires analysis of how law works in the wild.