Archives

Scott Mulligan and Alexandra Grossman, SOPA, PIPA, HADOPI and Privacy, the Alphabet Soup Experience: What America Might (or Might Not) Learn from the Europeans About Protecting Consumers’ Privacy and Internet Freedom from Intrusive Monitoring by Third Parties (and the Government).

Comment by: Jason Schultz

PLSC 2012

Workshop draft abstract:

In early 2012, the United States Congress seemed determined to move forward with two controversial copyright and trademark enforcement bills, the “Stop Online Piracy Act” (SOPA, H.R. 3261) and the “Protect IP Act” (PIPA, S. 968). Though those bills have largely been set aside in the face of a considerable backlash, Congress has more recently considered slightly watered-down versions of similar legislation, including the “Online Protection and Enforcement of Digital Trade Act” (OPEN, S. 2029). Each of these proposed bills, much like the previously-enacted counterpart law in France (the Creation and Internet Rights Law, or “Haute autorité pour la diffusion des œuvres et la protection des droits sur Internet,” HADOPI), attempts to address the problem of Internet-based intellectual property (IP) piracy, particularly from overseas sources. However, because France’s legal tradition emphasizes strong information privacy protection while the United States’ tradition emphasizes strong IP protection, these approaches can be expected to produce different results; each country’s context and understanding of information privacy is simply too different.

While laudable in their effort to address this wide-ranging and complex 21st-century problem, each of these laws nevertheless presents unique challenges to individual privacy. In their current form, they require ISPs, social networking sites and other content platforms to proactively monitor and screen individual users’ content and traffic, and then to actively censor their users to prevent them from posting, sharing or linking to words, images or other content which might violate another’s IP rights. These requirements dramatically shift the enforcement burden and its commensurate liabilities: website operators and ISPs who fail to act promptly could be blacklisted and prosecuted, and the proposed legislation would even empower the U.S. Attorney General to block infringing websites or users based anywhere in the world.

Unfortunately, these bills’ and laws’ legal and technical solutions are very similar to mechanisms that authoritarian regimes use to censor and spy on their citizens, to repress “undesirable” voices, and to enable private interests, acting under government authority, to suppress speech, comment, criticism and public debate of those private interests or of the government. Ironically, it is often copyright enforcers who wield privacy as a weapon, asserting privacy claims of their own when attempting to protect corporate or personal interests, typically where the owner could not sustain an unlawful interception, trade secrets misappropriation, or invasion of privacy claim. Further, in terms of the third party doctrine, these new laws offer governments unprecedented opportunities to collect private information by and about individual citizens, outsourcing this mandatory data collection to third party providers who, acting under color of these laws, amass vast quantities of personal information that may be of interest to the government for law enforcement or counterterrorism purposes.

This article will examine the proposed bills in the United States and their specific implementation and enforcement mechanisms, and compare them to the previously-enacted laws in France. The changes required before those French laws could be enacted briefly revived a philosophy of individual privacy and freedom, especially on the Internet. In so doing, the French further committed themselves to viewing privacy and freedom of expression as fundamental human rights, while the American approach remains bluntly oriented toward corporate and government interests. With subsequent decisions by various French courts, however, the French legal system once again swung in favor of intellectual property rights, at the expense of the information privacy and freedom of individual Internet users. This article will examine the similarities and differences between the two legal systems, reveal the differing approaches to intellectual property and privacy on opposite sides of the Atlantic, and suggest a new approach that would better protect personal privacy and Internet freedom in a democratic society, on either continent.

William McGeveran, Privacy and Playlists

Comment by: Felix Wu

PLSC 2012

Workshop draft abstract:

Social media is not a passing fad. In response to enthusiastic user demand, companies from Amazon to the Washington Post have built “sharing” functionality into their operations, especially online. A boomlet in platforms for socially shared entertainment further underscores the trend – increasingly, we are reading, listening to music, and watching movies among our friends. For example, the popular new music streaming service Spotify, now highly integrated with Facebook, encourages users to notify their online friends of their listening choices and to post playlists for others to use.

This sudden dramatic shift challenges traditional privacy law. Many existing rules assume a data collector who redistributes personally identifiable information to third-party recipients unknown to the data subject, for use in profiling. Spotify (or Facebook or the Washington Post Social Reader) sends information to a user’s friends, not strangers, and does so as a means of creating word of mouth, not of profiling. As I have argued previously, genuine recommendations from one’s friends are immensely valuable, but illegitimate ones can both invade privacy and undermine overall information quality.

This paper considers the appropriate model for regulating privacy in socially shared reading, listening, and viewing. As a case study, it examines recent legislation passed by the House of Representatives and pending in the Senate to amend the Video Privacy Protection Act (VPPA). Proponents of the legislation argue that it merely modernizes the statute for the social media age. Opponents believe it vitiates one of the only federal laws to directly protect intellectual privacy with an opt-in consent rule.

I conclude that both camps are wrong. The VPPA could and should be updated, but the current bill does not go about it in the right way. More broadly, many of the issues raised in this debate over video apply equally to books, music, web browsing, video gaming, and other pursuits. The paper will make recommendations for appropriate means to address privacy in the social media age.

Pedro Giovanni Leon, Justin Cranshaw, Lorrie Faith Cranor, Jim Graves, Manoj Hastak, Blase Ur and Guzi Xu, What Do Online Behavioral Advertising Disclosures Communicate to Users?

Comment by: Mary Culnan

PLSC 2012

Workshop draft abstract:

In this paper we present the results of a large online user study that evaluates the industry-promoted mechanism designed to empower users to manage their online behavioral advertising (OBA) privacy preferences. 700 [we expect about 1200] participants were presented with simulated behavioral advertisements in the context of a simulated and controlled web-browsing session. Subjects were divided into conditions to test two online behavioral advertisement disclosure icons, seven taglines (including no tagline), and five opt-out landing pages. Following this simulation, we surveyed the users about their understanding and perception of the OBA notification elements that they saw. Our [preliminary] results show that users often do not notice the icons or taglines, and that the industry-promoted tagline does a poor job of communicating with users. On the other hand, users found many of the opt-out landing pages to be informative and understandable.

Our results show varying levels of effectiveness of disclosure taglines across three dimensions: clickability, notice, and choice. We found that only about a quarter of participants ever recalled having seen the taglines, and that no tagline was effective at communicating all three concepts to users. We also found that “AdChoices,” the current tagline promoted by industry groups, was among the least communicative of the taglines we tested. Conversely, we found that the tagline “Why did I get this ad?”, which has recently been adopted by the Google AdSense Network, performed well at communicating clickability and notice. Our work suggests that taglines that suggest an action are more effective at conveying the clickability of the link, which is a critical aspect of the disclosure, allowing users to seek more information or configure their OBA preferences. Furthermore, although none of the symbols was particularly effective at providing notice and choice, we found that the symbols are important for communicating clickability. In particular, the power-i symbol better conveyed clickability than the asterisk man symbol.

We tested the opt-out landing pages provided by AOL, Yahoo!, Microsoft, Google, and Monster [and may test a few more]. All but the Monster Career Ad Network page were perceived as informative and understandable. The AOL and Microsoft opt-out pages were shown to be more effective at encouraging users to opt out.

Jacqueline Klosek, Privacy Related Regulations and Public Concerns about Smart Metering Technologies in the US, Canada and Australia

PLSC 2012

Workshop draft abstract:

Smart metering technologies have tremendous promise to contribute to environmental protection by reducing wasteful and unnecessary consumption of our limited energy resources. At the same time, however, many smart metering proposals pose serious risks to individual privacy rights by making individually identifiable energy consumption data available to utilities and even to other third parties. Concerns about the privacy risks of smart metering are having an impact on the development and implementation of these technologies. Furthermore, various jurisdictions within the United States and around the world are responding to privacy concerns about smart metering technology by proposing and, in some cases, enacting legislation to regulate its use. This paper will examine the primary privacy concerns raised by the use of smart metering technology, focusing upon public attitudes in the United States, Canada and Australia. It will also explore enacted and proposed regulations in the United States, Canada and Australia. Finally, it will suggest alternative means of moving ahead with smart metering while protecting privacy.

Jennifer King, “How Come I’m Allowing Strangers to Go Through My Phone?”: Smart Phones and Privacy Expectations

Comment by: Scott Peppet

PLSC 2012

Workshop draft abstract:

This study examines the privacy expectations of smart phone users by exploring two specific dimensions to smart phone privacy: participants’ concerns with other people accessing the personal data stored on their smart phones, and applications accessing this data via platform APIs. We interviewed 24 Apple iPhone and Google Android users about their smart phone usage, using Altman’s theory of boundary regulation and Nissenbaum’s theory of contextual integrity to shape our inquiry. We found these theories provided a strong rationale for explaining participants’ privacy expectations, but there were discrepancies between users’ privacy expectations, smart phone usage, and the current information access practices by application developers. We conclude by exploring this “privacy gap” and recommending design improvements to both the platforms and applications to address it.

Ian Kerr, Privacy and the Bad Man: Or, How I Got Lucky With Oliver Wendell Holmes Jr.

Comment by: Tal Zarsky

PLSC 2012

Workshop draft abstract:

You all (y’all?) having likewise experienced the recursive nature of the exercise of writing a law review article, you will appreciate that, this year, for PLSC, I decided to challenge myself to a game of Digital Russian Roulette. I wondered what result Google’s predictive algorithm would generate as the theoretical foundation for an article that I would soon write on predictive computational techniques and their jurisprudential implications. Plugging the terms: ‘prediction’, ‘computation’, ‘law’ and ‘theory’ into Google, I promised myself that I would focus the article on whatever subject matter popped up when I clicked on the ‘I’m Feeling Lucky’ search feature.

So there I was, thanks to Google’s predictive algorithm, visiting a Wikipedia page on the jurisprudence of Oliver Wendell Holmes Jr. (Wikipedia, 2011). Google done good. Perhaps America’s most famous jurist, Holmes was clearly fascinated by the power of predictions and the predictive stance. So much so that he made prediction the centerpiece of his own prophecies regarding the future of legal education: ‘The object of our study, then, is prediction, the prediction of the incidence of the public force through the instrumentality of the courts’ (Holmes, 1897: 457).

Given his historical role in promoting the skill of prediction to aspiring lawyers and legal educators, one cannot help but wonder what Holmes might have thought of the proliferation of predictive technologies and probabilistic techniques currently under research and development within the legal domain. Would he have approved of the legal predictions generated by expert systems software that provide efficient, affordable, computerized legal advice as an alternative to human lawyers? What about the use of argument schemes and other machine learning techniques in the growing field of ‘artificial intelligence and the law’ (Prakken, 2006) seeking to make computers, rather than judges, the oracles of the law?

Although these were not live issues in Holmes’s time, contemporary legal theorists cannot easily ignore such questions. We are living in the kneecap of technology’s exponential growth curve, with a flight trajectory limited more by our imaginations than the physical constraints upon Moore’s Law. We are also knee-deep in what some have called ‘the computational turn’ (Hildebrandt, 2011) wherein innovations in storage capacity, data aggregation techniques and cross-contextual linkability enable new forms of idiopathic predictions. Opaque, anticipatory algorithms and social graphs allow inferences to be drawn about people and their preferences. These inferences may be accurate (or not), without our knowing exactly why.

One might say that our information society has swallowed whole Oliver Wendell Holmes Jr.’s predictive pill, except that our expansive social investment in predictive techniques extends well beyond the bounds of predicting, ‘what the courts will do in fact’ (Holmes, 1897: 457). What Holmes said more than a century and a decade ago about the ‘body of reports, of treatises, and of statutes in the United States and in England, extending back for six hundred years, and now increasing annually by hundreds’ (Holmes, 1897: 457) can now be said of the entire global trade in personal information, fueled by emerging techniques in computer and information science, such as KDD (knowledge discovery in databases):

In these sibylline leaves are gathered the scattered prophecies of the past upon the cases in which the axe will fall. These are what properly have been called the oracles of the law. Far the most important and pretty nearly the whole meaning of every new effort of … thought is to make these prophecies more precise, and to generalize them into a thoroughly connected system. (Holmes, 1897: 457)

As described in my article, the computational axe has fallen many times already and will continue to fall.

My article examines the path of law after the computational turn. Inspired by Holmes’s use of prediction to better understand the fabric of law and social change, I suggest that his predictive stance (the famous “bad man” theory) is also a useful heuristic device for understanding and evaluating the predictive technologies currently embraced by public- and private-sector institutions worldwide. I argue that today’s predictive technologies threaten privacy and due process. My concern is that the perception of increased efficiency and reliability in the use of predictive technologies might be seen as the justification for a fundamental jurisprudential shift from our current ex post facto systems of penalties and punishments to ex ante preventative measures premised on social sorting, increasingly adopted across various sectors of society.

This jurisprudential shift, I argue, could significantly undermine the value-based approach that underlies the ‘reasonable expectation of privacy’ standard adopted by common law courts, privacy and data commissioners and an array of other decision makers. More fundamentally, it could alter the path of law, significantly undermining core presumptions built into the fabric of today’s retributive and restorative models of social justice, many of which would be preempted by tomorrow’s actuarial justice.

Holmes’s predictive approach was meant to shed light on the nature of law by shifting law’s standpoint to the perspective of everyday citizens who are subject to the law. Preemptive approaches enabled by the computational turn will obfuscate the citizen’s legal standpoint championed by Holmes. I warn that preemptive approaches have the potential to alter the very nature of law without justification, undermining many core legal presumptions and other fundamental commitments.

In the article, I propose that the unrecognized genius in Holmes’s jurisprudence is his (self-fulfilling) prophecy, more than a century ago, that law would become one of a series of businesses focused on prediction and the management of risk. I suggest that his famous speech, The Path of the Law, lays a path not only for future lawyers but also for data scientists and other information professionals. The article commences with an examination of Holmes’s predictive theory. I articulate what I take to be his central contribution: that to understand prediction, one must come to acknowledge, understand and account for the point of view from which it is made. An appreciation of Holmes’s “predictive stance” allows for comparisons with the standpoints of today’s prediction industries. I go on to discuss these industries, attempting to locate potential harms generated by the prediction business associated with the computational turn. These harms are more easily grasped through a deeper investigation of prediction: when prediction is understood in the broader context of risk, it is readily connected to the idea of preemption of harm. I suggest that technologies of prediction and preemption are increasing hand in hand, demonstrate how their broad acceptance could undermine the normative foundations of the ‘reasonable expectations of privacy’ standard, and show how the growing social temptation to adopt a philosophy of preemption could also have a significant impact on our fundamental commitments to due process.

Margot E. Kaminski, Real masks and anonymity: Comparing state anti-mask laws to the Doe anonymous online speech standard

Comment by: Ryan Calo

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2250054

Workshop draft abstract:

This paper will comprehensively compare treatment of state anti-mask laws to the Doe standard of protection for anonymous online speech.

Numerous states prohibit mask-wearing in public. Many of these laws were enacted as an attempt to regulate Ku Klux Klan activity. Some states criminalize wearing a mask while performing or intending to perform some bad act, while others criminalize mask-wearing more generally, with exceptions for permissible behavior.

The more recent model anti-mask law dates from 1992, three years before the Supreme Court’s decision on anonymous speech in McIntyre v. Ohio. Since McIntyre, a much-discussed line of case law has developed concerning the creation of a balancing test for protecting anonymous speech online, in cases such as Dendrite v. Doe and Doe v. Cahill.

This paper will explore the possible complements and tensions between state punishment of physical mask-wearing on the one hand, and the developing protection of virtual mask-wearing on the other. It will look at the standard statutory exceptions to prohibitions on physical mask-wearing in order to define larger categories of accepted anonymous activity: contexts in which mask-wearing has been seen as beneficial and deserving of protection. These categories include private acts such as purchasing pornography or obtaining an abortion, but also include dressing up for entertainment’s sake. The value of the content of a real mask as symbolic speech or self-expression has been underdiscussed in the context of virtual anonymity, in part because it arises under the O’Brien symbolic speech test, which has not been reached in the online context.

This paper will also investigate whether First Amendment arguments can be imported across contexts. For example, the First Amendment right of association features prominently in physical mask-wearing cases, but not in the Doe line of cases. And because many of the mask-wearing laws are categorized as public disorder statutes, this paper will compare the rhetorical treatment of physical mobs with that of perceived virtual mobs, or “cyber-bullying” activity.

While a number of articles on the Doe standard have discussed cases arising from anti-mask laws, none appears to have done an overview comparison of all state anti-mask laws to Doe. This paper will attempt to unite these two directly related fields.

Kirsty Hughes, A Behavioural Understanding of Privacy and its Implications for Privacy Law

Comment by: Bruce Boyden

PLSC 2012

Workshop draft abstract:

This article draws upon social interaction theory to develop a theory of the right to privacy. By engaging with this literature and adopting a behavioural approach to privacy, we can better understand: how privacy is experienced; the different types of privacy that we experience; when an invasion of privacy occurs; and the social benefits of privacy.

In essence, this article claims that privacy plays a crucial role in facilitating social interaction and that an individual or group experiences privacy when he, she or they successfully employ barriers to obtain or maintain a state of privacy. Under this approach, an invasion of privacy occurs when those barriers are breached and the intruder obtains access to the privacy-seeker. This article proposes a new theory of privacy, explaining how it differs from existing theories and how it deals with a number of crucial and complex problems, including threats, attempts and cumulative interferences with privacy. It reflects on the implications of this analysis for privacy law, in particular: the reasonable expectation of privacy test; the concept of waiver; and the balancing of competing rights and interests.

Chris Jay Hoofnagle & Jan Whittington, The Price of “Free”

Chris Jay Hoofnagle & Jan Whittington, The Price of “Free”

Comment by: David Medine

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2235962

Workshop draft abstract:

It’s free and always will be.

–Facebook.com

Offers of “free” services abound on the internet.  These offers cause a conundrum for consumer protection.  Courts are apt to discount users’ claims against such services; one recently held that users are not “consumers” for purposes of California consumer protection law.  Industry leaders push to monitor users ubiquitously, an imperative driven by the desire to fund “free” content.  Policymakers struggle with this imperative and weigh it against vague consumer preferences for privacy, which users seem to happily abrogate to get the next new free service.  These problems, we argue, flow from attention to the price of free offers instead of their costs.

To elucidate these costs, we apply a transaction cost economic (TCE) approach to “free” personal information transactions (“PITs”).  TCE provides a framework for analyzing PITs even where the price of the product seems to be zero.  Free offers employ a form of cross-subsidy, a technique widely accepted in virtually every infrastructure industry, and a basic tool used to support the equitable delivery of products and services with the understanding that some have more willingness and ability to pay than others. However, we argue that information intensive companies misuse “free” to promote products and services that are packed with non-pecuniary costs.

Part and parcel of a grey market for personal information, current governance structures allow firms to collect valuable information ex ante and monetize it ex post, despite consumer preferences for privacy and the impression, given to the consumer, that the transaction would be “free.” Thus, what may begin as ex ante misalignment between the interests of the firm and consumer becomes ex post maladaptation when the firm realizes the financial gains possible from monetizing the consumer’s personal information.

We then turn to potential governance structures to lessen the propensity of firms to raise transaction costs, in the hope of making exchange, individually and in aggregate for markets and societies, more efficient.  At the most basic level, users would be more strongly protected if free services were understood to involve an exchange for value.

One source for legal intervention is the Federal Trade Commission’s “Free Guidelines.”  These guidelines will be reviewed in 2012, offering an opportunity to reconsider the fairness of free offers conditioned on provision of personal information.  As currently written, they do not directly address PITs.  Still, two remedies flow from the FTC Guide: clearer disclosures that personal information forms the basis of the transaction, and the requirement to establish a regular price before marketing a service as free.

While behavioral economics may support an outright ban of free offers because of their biasing effects, TCE suggests other strategies for reform, focused upon placing business risk more firmly in the hands of businesses, and making the consumer whole.  These interventions go beyond the traditional transparency and accuracy requirements suggested by privacy law.  Organizational and enforcement characteristics matter; remedies must reduce transaction costs for the industry, in aggregate and inclusive of the cost of implementing the remedy. Robust cancellation procedures, prohibitions on certain uses of information, structures to reinforce consumer choice such as do-not-sell and do-not-track options, increasing the age limit on protections for children, substantive breach notification, and a “data-back guarantee” are necessary to free consumers from free services.

Professor Dennis Hirsch, Dutch Treat? The Collaborative Dutch Approach to Privacy Regulation and the Lessons it Holds for U.S. Privacy Law and Policy

Comment by: Nikolaus Peifer

PLSC 2012

Workshop draft abstract:

In 2010, I served as a Fulbright Senior Professor at the University of Amsterdam.  I studied a cooperative Dutch form of privacy regulation known as “enforceable codes of conduct” in which industry and government negotiate and agree upon the rules that will govern business behavior.  As I explain below, the U.S. Congress is currently considering privacy legislation that would build a similar approach into U.S. law.  In my paper I will, for the first time, report the findings from my research.  I will then draw on these findings to shed light on and develop recommendations for the U.S. legislative proposals.

The Dutch “code of conduct” approach to privacy regulation (also called the “safe harbor” approach) begins with a statute, the Data Protection Act.  This law creates broad requirements applicable to all commercial entities.  Industry associations then draft implementing rules—the codes of conduct—that spell out how these broad requirements apply to their particular sector, and submit these rules to the Data Protection Authority.  The Authority reviews the rules, negotiates them with the industry and, when it is comfortable that they correctly implement the statutory requirements, approves them.  Firms that follow an approved set of rules are deemed to be in compliance with the statute and enjoy a legal safe harbor (hence the other name for this regulatory method).   The code of conduct approach differs significantly from traditional, administrative rulemaking because it intentionally allows industry, not regulators, to draft the rules and then requires government and industry to negotiate and reach an agreement on them.

Proponents of this approach maintain that getting industry directly involved in the drafting process can yield rules that are more tailored to business realities, more workable, and ultimately more effective at protecting personal information than traditional, government-designed regulations.  They argue that industry-government collaboration is especially needed in areas such as privacy regulation where technologies and business models change so rapidly that regulators often cannot keep up on their own.  Critics, on the other hand, contend that industry will write rules that favor its interests over the public’s; that the agency approval process will not sufficiently check this tendency; and that the approach will accordingly yield lenient rules that fail to protect personal information adequately.  In my research on the Dutch program, I conducted face-to-face interviews with industry representatives and government officials who drafted and negotiated the codes, and with privacy advocates and academics who have lived with and studied them.  I sought to learn what the Dutch experience could teach us about the merits of this regulatory method, and about the best practices for program design.

My Fulbright research is directly relevant to current developments in U.S. privacy law.  In 2010, the Department of Commerce published an important Green Paper on Internet privacy regulation that proposed using “enforceable, FTC-approved codes of conduct” to flesh out broad statutory requirements.[1]  Congress is headed in the same direction.  Currently, three bills propose comprehensive regulation of private sector use of personal information.  All three would give the code of conduct/safe harbor approach an important place in the regulatory scheme.[2] These developments suggest that negotiated, enforceable codes of conduct may soon become a central component of U.S. privacy regulation.  As the privacy bills make their way through the legislative process, those involved in the field should know something about the merits and realities of this regulatory approach and about the best practices for program design.  The Dutch pioneered this form of privacy regulation and their twenty-two year experience with it provides a wealth of information about it.

My paper will publish the results of my research on the Dutch codes of conduct.  It will explore whether the Dutch experience provides reason to be optimistic, or pessimistic, about the enforceable code of conduct approach and will identify lessons for program design.  Based on these findings, it will make normative recommendations as to whether U.S. privacy legislation should employ the code of conduct approach and, if so, how it should structure such a program.   It is my hope that this paper will inform and ultimately influence the crucial policy debate on how best to protect personal information.


[1] Department of Commerce, Commercial Data Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework 41-44 (2010).

[2] See Commercial Privacy Bill of Rights Act, S. 799, 112th Cong., tit. V, §§ 501, 502 (2011); Building Effective Strategies to Promote Responsibility Accountability Choice Transparency Innovation Consumer Expectations and Safeguards Act (“BEST PRACTICES” Act), H.R. 611, 112th Cong. tit. 4, §§ 401-404 (2011); Consumer Privacy Protection Act, H.R. 1528, 112th Cong. § 9 (2011).